Members Present:
Shain Vowell, Commissioner – Chairman
Catherine Denenberg, Commissioner
Bob Smallridge, Commissioner
Jerry White, Commissioner
Michael Foster, Commissioner
Sabra Beauchamp, Commissioner
Shelly Vandagriff, Commissioner
Tracy Wandell, Commissioner
Meeting Facilitator: Robby Holbrook, Finance Director
TRANSFERS (Approved through Consent Agenda)
THE 1st ITEM, to be presented to the Anderson County Budget Committee, was a written request from Art Miller, Dental Clinic, that the following TRANSFER in General Fund 101 be approved.
Increase Expenditure Code:
101-55160-307-2100 Communications $400.00
Decrease Expenditure Code:
101-55160-435 Office Supplies $400.00
Justification: Cell phone cost per month.
Motion by Commissioner Bob Smallridge, seconded by Commissioner Catherine Denenberg, and passed to approve the transfer requests.
THE 2nd ITEM, to be presented to the Anderson County Budget Committee, was a written request from Steve Payne, EMA, that the following TRANSFER in General Fund 101 be approved.
Increase Expenditure Code:
101-54410-599 Other Charges $5,000.00
Decrease Expenditure Code:
101-54410-719 Other Equipment $5,000.00
Justification: To purchase supplies and materials for the remainder of the fiscal year.
Motion by Commissioner Bob Smallridge, seconded by Commissioner Catherine Denenberg, and passed to approve the transfer requests.
THE 3rd ITEM, to be presented to the Anderson County Budget Committee, was a written request from John Vickery, Fleet Services, that the following TRANSFER in General Fund 101 be approved.
Increase Expenditure Code:
101-54900-338 Repairs and Maintenance Vehicles $20,000.00
101-54900-707 Building Improvements 22,453.61
$42,453.61
Decrease Expenditure Code:
101-54900-453-1000 Vehicle Parts-Other Departments Maintenance $42,453.61
Justification:
101-54900-338: Due to the limited availability of new ambulances, we have to keep our fleet longer. This money is to help with repairs to these vehicles.
101-54900-707: Reconstruction of offices from storm damage, funds will be transferred back into 101-54900-453-1000 when we receive check from insurance.
Motion by Commissioner Bob Smallridge, seconded by Commissioner Catherine Denenberg, and passed to approve the transfer requests.
APPROPRIATIONS REQUIRING FULL COMMISSION APPROVAL
THE 4th ITEM, to be presented to the Anderson County Budget Committee, was a written request from Mark Stephens, Election Commission, that the following TRANSFER (PAYROLL) in General Fund 101 be approved.
Increase Expenditure Code:
101-51500-169 Part Time Help $5,000.00
101-51500-189 Machine Techs 600.00
$5,600.00
Decrease Expenditure Code:
101-51500-193 Election Workers $5,600.00
Justification: This transfer is necessary to cover the additional labor for our Election Technicians during the November election, and to provide additional part-time funds to offset the reliance on part-time employees while the office was down one full-time employee.
Motion by Commissioner Michael Foster, seconded by Commissioner Bob Smallridge, to refer to the Anderson County Board of County Commissioners with a recommendation for approval.
Motion failed.
Voting Yes: Commissioner Michael Foster, Commissioner Shain Vowell
Voting No: Commissioner Tracy Wandell, Commissioner Shelly Vandagriff, Commissioner Sabra Beauchamp, Commissioner Bob Smallridge, Commissioner Catherine Denenberg
Absent: Commissioner Jerry White
THE 5th ITEM, to be presented to the Anderson County Budget Committee, was a written request from Mayor Frank, County Mayor’s Office, that the following APPROPRIATION in General Fund 101 be approved.
Increase Expenditure Code:
101-51300-399-ORRC1 ORRCA Website Hosting $560.00
Increase Revenue Code:
101-48990-ORRC1 Other Local Revenue-ORRCA Website $560.00
Justification: Mayor Frank is currently serving as chairwoman of Oak Ridge Reservation Communities Alliance. One of the duties of the Chair is responsibility for the ORRCA website, https://orrcatn.com, and regular posting of DOE/TDEC correspondence for the public. This amendment is to set up specific codes to pay for web hosting for the site. Expenses will be reimbursed through a grant that funds ORRCA.
Motion by Commissioner Catherine Denenberg, seconded by Commissioner Michael Foster, and passed to refer to the Anderson County Board of County Commissioners with a recommendation for approval.
THE 6th ITEM, to be presented to the Anderson County Budget Committee, was a written request from Roger Lloyd, Buildings & Grounds, that the following APPROPRIATION in General Fund 101 be approved.
Increase Expenditure Code:
121-58836-335-GA014 County Buildings-Repair & Maintenance $18,635.00
Decrease Reserve Code:
121-34512 ARP Funds $18,635.00
Justification: Anderson County owns the 11-bell Slover Chimes atop the Courthouse. Verdin Co. renovated the bells in 1997, which included the current control system. In 2016 the chimes began experiencing problems, and a couple of on-site calls were conducted in attempts to keep the older system operational. The chimes need a new Master Control system; the new system can be connected to the LAN, allowing scheduling of the chimes. The proposal is to design, fabricate, and install a new Master Control to include a digital bell-ringing controller and a new electric panel. This request is a one-time cost, but post installation our plans are to contract for annual maintenance of $1,345 that we would incorporate into each annual budget. This would include cleaning/servicing of each bell and striker, and adjustment and testing of the unit and bells.
Motion to approve by Commissioner Catherine Denenberg, seconded by Commissioner Shain Vowell.
Motion to amend from fund 101 to fund 121 by Commissioner Michael Foster, seconded by Commissioner Catherine Denenberg.
Motion as amended passed unanimously.
**THE 7th ITEM**, to be presented to the Anderson County Budget Committee, was a written request from AC Library Board, Rocky Top Library, that the following **APPROPRIATION** in Library Fund 115 be approved.
**Decrease Reserve Code:**
115-34535-3001 Dedicated Reserve $810.00
**Increase Expenditure Codes:**
115-56500-317-3001 Data Processing Services $347.00
115-56500-435-3001 Office Supplies $463.00
$810.00
*Justification:* I need to make purchases with my LSTA grant money: Deep Freeze protection for public computers for 3 years, a flatbed scanner for patron use, and a new computer monitor.
Motion by Commissioner Tracy Wandell, seconded by Commissioner Catherine Denenberg, and passed to refer to the Anderson County Board of County Commissioners with a recommendation for approval.
**THE 8th ITEM**, to be presented to the Anderson County Budget Committee, was a written request from AC Library Board, Janine Brewer, that the following **APPROPRIATION** in Library Fund 115 be approved.
**Decrease Reserve Code:**
115-34535 AC Library restricted for other purposes $212.00
**Increase Expenditure Code:**
115-56500-351 Rental of PO Box $212.00
*Justification:* Rental of PO Box at Clinton Post Office, 1 year (12/01/2022-11/30/2023)
Motion by Commissioner Tracy Wandell, seconded by Commissioner Catherine Denenberg, and passed to refer to the Anderson County Board of County Commissioners with a recommendation for approval.
THE 9th ITEM to be presented to the Anderson County Budget Committee, was a written request from Annette Prewitt, County Commission, that the following APPROPRIATION in General Fund 101 be approved.
Increase Expenditure Code:
101-51100-399 Other Contracted Services $1,750.00
Decrease Reserve Code:
101-39000 Unassigned Fund Balance $1,750.00
Justification: To pay the other half of Open Meeting subscription fees for the voting system for 1/1/23 through 6/30/23.
Motion by Commissioner Bob Smallridge, seconded by Commissioner Tracy Wandell, and passed to refer to the Anderson County Board of County Commissioners with a recommendation for approval.
THE 10th ITEM, to be presented to the Anderson County Budget Committee, was a written request from John Vickery, Fleet Services, that the following APPROPRIATION in General Fund 101 be approved.
Increase Expenditure Codes:
101-54900-435 Office Supplies $4,212.62
101-54900-424 Garage Supplies 763.40
101-54900-453 Vehicle Parts 1,358.76
101-54900-446 Small Tools 142.94
101-54900-451 Uniforms 2,594.77
101-54900-709 Data Processing Equipment 4,319.92
101-54900-790 Other Equipment 1,756.35
$15,148.76
Increase Revenue Code:
101-49700 Insurance Recovery $15,148.76
Justification: This is to move insurance money (from the flood damage in July 2022) into the line items above for replacement of the items that were lost in the flood.
Motion by Commissioner Tracy Wandell, seconded by Commissioner Michael Foster, and passed to refer to the Anderson County Board of County Commissioners with a recommendation for approval.
THE 11th ITEM, to be presented to the Anderson County Budget Committee, was a written request from Gary Long, Road Superintendent, that the following APPROPRIATION in Highway Fund 131 be approved.
Increase Expenditure Code:
131-68000-714 Highway Equipment $200,000.00
Decrease Reserve Code:
131-34550 Restricted For Hwy $200,000.00
Justification: In case the Highway Department needs more equipment.
Motion by Commissioner Tracy Wandell, seconded by Commissioner Catherine Denenberg, and passed to refer to the Anderson County Board of County Commissioners with a recommendation for approval.
THE 12th ITEM, to be presented to the Anderson County Budget Committee, was a written request from Jeff Cole/Regina Copeland, County Clerk/Trustee, that the following APPROPRIATION in General Fund 101 be approved.
Increase Revenue Code:
101-49700 Insurance Recovery $23,356.00
Increase Expenditure Codes:
101-52500-790 County Clerk-Other Equipment $8,431.00
101-52400-790 Trustee-Other Equipment 14,925.00
$23,356.00
Justification: See attached incident report that is a result of freezing temperatures that caused frozen water pipes and power outages. All equipment and hardware lost.
Motion by Commissioner Tracy Wandell, seconded by Commissioner Catherine Denenberg, and passed to refer to the Anderson County Board of County Commissioners with a recommendation for approval.
THE 13th ITEM, to be presented to the Anderson County Budget Committee, was a written request from Sheriff Russell Barker, Sheriff’s Department, that the following APPROPRIATION in Fund 122 be approved.
Increase Expenditure Code:
122-54150-718 Motor Vehicle $45,000.00
Decrease Reserve Code:
122-34525-1000 Restricted For Public Safety $45,000.00
Justification: This money will be used to purchase two replacement vehicles for narcotic investigations. Once they are purchased we will surplus current vehicles in the narcotics unit. The monies from the sale of those vehicles will go back in the restricted code.
Motion by Commissioner Catherine Denenberg, seconded by Commissioner Michael Foster, and passed to refer to the Anderson County Board of County Commissioners with a recommendation for approval.
**SECTION A, FY 23/24 Budget**
Finance Director Robby Holbrook presented the FY2023-2024 Budget Guidelines, Forms, and Calendar for Committee Approval.
Motion by Commissioner Sabra Beauchamp, seconded by Commissioner Catherine Denenberg, and passed to approve the Budget Guidelines, Forms, and Calendar as presented.
Motion passed unanimously via voice vote.
**SECTION B, Highland Communications Site-Prep Costs/Mayor Frank**
Clerical error. Item removed from agenda.
**SECTION C, New Business**
1. Finance Director Robby Holbrook presented an appropriation request on behalf of County Clerk Jeff Cole and Trustee Regina Copeland.
Motion by Commissioner Tracy Wandell, seconded by Commissioner Catherine Denenberg, and passed to refer to the Anderson County Board of Commissioners with a recommendation for approval.
This is reflected as the “12th item” above.
2. Finance Director Robby Holbrook presented an appropriation request on behalf of the Sheriff’s Office.
Motion by Commissioner Catherine Denenberg, seconded by Commissioner Michael Foster, and passed to refer to the Anderson County Board of Commissioners with a recommendation for approval.
This is reflected as the “13th item” above.
Meeting adjourned.
Robby Holbrook, Finance Director
Willing, Morris and Swanwick, Philadelphia mercantile house, succeeded the early Willing, Morris and Company (organized in 1754 by Thomas Willing and Robert Morris), and carried on business generally throughout the period of the American Revolution.
Members of the firm were
Thomas Willing (1731-1821)
Robert Morris (1734-1806)
John Swanwick (1740-1798)
THOMAS WILLING participated actively in the civic and political life of Philadelphia for many years. He was a member of the Provincial Assembly of Pennsylvania in 1764-1767, and in 1767 a justice of the Supreme Court of the Province. In 1765 he signed the Philadelphia non-importation agreement directed against the Stamp Act. In 1774 he was President of the first Provincial Congress of Pennsylvania, keeping in touch with the First Continental Congress, and in 1775 he was a member of the Second Continental Congress. When, two years later, the British occupied Philadelphia, he refused to take the oath of allegiance to King George III. In 1781 he was chosen President of the newly organized Bank of North America, and was subsequently responsible for the success of that institution during the economic depression of the mid-1780s and the "bank war." A supporter of the movement for the Constitution of 1787, he became in 1791 President of the First Bank of the United States and continued in that office until 1797. On retiring from that Bank, he remained active in business until 1807, by which time he had augmented his estate to the value of $1,000,000.
ROBERT MORRIS, member of the mercantile house from 1754 to 1793, was during that 39-year interval its most active director. In 1765 he signed the Philadelphia non-importation agreement. In 1775 the Pennsylvania Assembly made him a member of the Council of Safety, and he continued in that membership during 1776. In September, 1775, the Secret Committee of Continental Congress contracted with his firm for the importation of arms and munitions; and in the same year he was elected to serve in the last Pennsylvania Provincial Assembly.
In January, 1776, he was appointed member of the Committee of Secret Correspondence of the Continental Congress, and his firm continued importing supplies for the then-forming Continental Army. In August, 1776, MORRIS signed the Declaration of Independence; and, when in December of that year Congress fled from Philadelphia, he continued there, carrying on the work of the Secret Committee. In 1777 this committee was re-named, first the Committee of Foreign Affairs, then the Committee of Commerce, Morris continuing
with it and frequently serving as its banker. In 1778 he signed the Articles of Confederation, and in August was made chairman of the Congressional Committee on Finance, to serve in that capacity until November. In 1781 he was chosen by Continental Congress to be Superintendent of Finance, in which office he remained until 1783. During that same interval he was one of the heaviest subscribers to the Bank of North America, an institution aided into being by a timely loan of $200,000 in specie brought in by the French fleet. He participated in the Constitutional Convention of 1787, and was a member of the United States Senate, 1789-1795.
JOHN SWANWICK, son of a Philadelphia loyalist, was himself a supporter of the cause of Independence, a gentleman interested in literature, and the author of a volume of poetry. Letters indicate that he was particularly active in the firm from 1784 to 1786. In the 1790's he was elected as a Democrat to the 4th and 5th Congresses of the United States, and he served from March 4, 1795, to the time of his death in 1798.
The papers of the WILLING, MORRIS AND SWANWICK Company are chiefly associated with the firm as a unit or with ROBERT MORRIS or JOHN SWANWICK individually, none of the papers emanating from THOMAS WILLING exclusively. Largely they represent correspondence with TENCH TILGHMAN AND COMPANY, merchants of Baltimore, Maryland, or with JONATHAN HUDSON, merchant of the same city. Among them are occasional papers associated with ROBERT MORRIS' career as Superintendent of Finance and in similar capacities.
They embrace folders as follows:
A. Folder 1: Mercantile Correspondence
ROBERT MORRIS, chiefly from Philadelphia, to JONATHAN HUDSON, Merchant, Baltimore, 1774 and 1777-1781
Morris' letters, beyond an occasional allusion to Continental Congress and eminent figures of the day like Washington, Benjamin Harrison, Jr., and General Nelson, are concerned with transactions in such commodities as wheat, flour, bacon, pork, beef, salt, tobacco, loaf sugar, powder, brimstone, flannels and blankets. Various trading houses in New England, Maryland and Virginia are referred to, including two in which Morris held partnerships, Jonathan Hudson and Company and Peter Whiteside and Company. Arrangements for the payment and transfer of gold, currency, drafts, and the like are announced, and boats, sloops, ships, brigs, and schooners engaged in coastwise shipping or in trade with the West Indies and with Europe are mentioned with owners' and officers' names. There is an occasional reference to Morris' deep absorption in public business, but predominantly the matter of these letters is private trade.
1. Dec. 12, 1774 (Signed with initials "R. M.")
2. May 20, 1777 (Letters of J. H. sent to General Washington and to the Paymaster General)
3. Aug. 26, 1777
4. Jan. 23, 1778
5. May 21, 1778 (From Manheim, Pennsylvania, the British holding Philadelphia at this time)
6. Jul. 10, 1778
7. Sep. 15, 1778
8. Oct. 20, 1778
9. Nov. 20, 1778
10. Nov. 30, 1778
11. Jan. 26, 1779
12. Feb. 9, 1779
13. Feb. 23, 1779 (Public business obliging Morris to neglect his own affairs)
14. Feb. 25, 1779
15. Apr. 6, 1779
16. Apr. 9, 1779
17. May 12, 1779
18. May 18, 1779
19. Aug. 19, 1779
20. Sep. 24, 1779
21. Sep. 28, 1779
22. Nov. 15, 1779
23. May 2, 1780
24. May 15, 1780
25. May 23, 1780
26. Aug. 22, 1780
27. Sep. 6, 1780
28. Sep. 19, 1780
29. Dec. 11, 1780 (In the hand of John Swanwick, but signed by Morris, who is unwell at the time)
30. Mar. 5, 1781
31. Jul. 30, 1781
*Note: This collection includes some items that were apparently acquired after the preparation of this list and are therefore not noted here; these are in an unnumbered folder.*
A..Folder 2: Robert Morris, from Philadelphia to Thomas R. Tilghman, Baltimore, July 9, 1786, on notes and articles of account
B..Folder 1: Robert Morris to Monsieur Dumas (Count Mathieu Dumas, aide to Rochambeau, acquaintance of Benjamin Franklin, Agent of the United States in Holland), March 24, 17--
B..Folder 2: Robert Morris, Superintendent of Finance to Continental Congress, four letters:
1..To His Excellency, the President of the Supreme Executive Council of Pennsylvania, July 9, 1781, on supplies, rations or forage, to be delivered in this State for the service of the United States
2..To His Excellency, the President of Pennsylvania, July 14, 1781, on transportation of supplies and on his purchases in New Jersey and in New York of flour for the use of the United States Army at posts in Pennsylvania
3..To His Excellency the President of the Supreme Executive Council of the State of Pennsylvania, July 16, 1781, on obligations, in supplies and dollars, of Pennsylvania to the United States
4..To Joseph Clay, Esq., May 17, 1783, on official procedures in the settlement of accounts in Army supply
B..Folder 3: Robert Morris, Superintendent of Finance, to the President of Congress, August 28, 1781: Statement of Accounts, clerical copy, 9 pp.
B..Folder 4: Robert Morris: Memorial to the Congress of the United States for liquidation and settlement of accounts for supplies between Pennsylvania and the United States incurred during Morris' Superintendency of Finance and not settled before his release from that office, c. 1782-1783
B..Folder 5: Robert Morris to James Wilson, Esq., from New York, August 23, 1789, on the postponement of a visit to Philadelphia because of an unanticipated request of the President of the United States to meet the Senate in their Chamber; on political aspirants; on the House of Representatives and their amusing themselves with Amendments to the Constitution "for which I blame Madison;" on his putting into motion the "idea of a permanent Residence...but Delaware must I think be the place"
B..Folder 6: Robert Morris, from New York, March 12, 1790, to John Ross Esq., on a general account connected with a Contract made with "the Secret Committee" (of Continental Congress) on February 5, 1776
C..Folder 1: Robert Morris: Memorandum of an Agreement, December 31, 1793, between Robert Morris and John Warder in re sale of shares of stock of the Bank of the United States, consented to by the assignees of John Warder, Messrs Mordecai Lewis, James C. Fisher, and Robert Morris; with subsequent agreement, April 10, 1794, between Robert Morris on the one part and John Nicholson and James Greenleaf in re the said shares of stock.
D..Folder 1: Appearance Bond, January 16, 1777, of Cadwallader Morris to appear before the Pennsylvania Council of Safety on or before January 26, 1777
D..Folder 2: John Swanwick (1740-1798) to the President and Supreme Executive Council of Pennsylvania: A Petition for permission for his mother to visit her attainted 'Loyalist' husband at Elizabethtown. (Granted Nov. 6, 1781)
E..Folder 1: Andrew Limozin, wine exporter of Havre de Grace (France), December 30, 1783, to Willing, Morris and Swanwick, on a consignment of Burgundy wine
E..Folder 2: Clarke and Manwaring, Baltimore, to Willing, Morris and Swanwick...two letters, October 21 and 29, 1785
Willing, Morris and Swanwick to Clarke and Manwaring,...one letter, Nov. 1, 1785:
On the arrival of seaworthy ships, their burdens, their availability for transatlantic cargoes, etc.
F..Folder 1: John Swanwick to Tench Tilghman Esq., Baltimore, June 11, 1784-February 3, 1786,...seven letters chiefly on business and accounts:
✓ 1..Jun. 11, 1784
✓ 2..Oct. 9, 1784, introducing a young gentleman from Liverpool, England
✓ 3..Jan. 10, 1785
✓ 4..Aug. 18, 1785, from Skeweth Tavern, on a misadventure on a coach from Baltimore to Bush Town and his indignation
✓ 5..Jan. 10, 1786
✓ 6..Jan. 10, 1786, on current legislation of the Assembly related to taking away the charter of the Bank of North America
✓ 7..Feb. 3, 1786
F..Folder 2: John Swanwick to Tench Tilghman and Company, July 26, 1784, on accounts
F..Folder 3: Willing, Morris and Swanwick to Tench Tilghman, Esq., Baltimore, February 10, 1786, on an account connected with the late House of Francis and Tilghman
F..Folder 4: Mercantile and Financial Correspondence
WILLING, MORRIS AND SWANWICK, Philadelphia Merchants, to Tench Tilghman and Company, Merchants, Baltimore, 1783-1784...63 Letters
These letters, most frequently in the hand of John Swanwick, but signed chiefly with the name of the Company, relate to trade in such commodities as wheat, corn, flour, tobacco, Barbadoes rum, Lisbon wine, wine, molasses, tea and articles of grocery. They bear upon business transactions of mercantile houses in England, Holland and Portugal, as well as in coastal towns and cities of the United States. Comment occurs in them on accounts, drafts, bills of exchange, shares of bank stock with powers of voting related to them, and the like.
1..Dec. 5, 1783
2..Jan. 4, 1784
3..Jan. 23, "
4..Feb. 6, "
5..Feb. 23, "...R. Morris advises T. T. to subscribe for 6 shares of bank stock (Bank of North America)
6..Mar. 3, "
7..Mar. 24, 1784...R. Morris to be spoken with in re the 10 shares of bank stock ordered by T.T.; attempts for the 'New Bank' frustrated
8..Mar. 29, "...The 10 shares of bank stock subscribed for
9..Mar. 30, "
10.Apr. 24, "
11.Apr. 27, "
12.Apr. 27, "
13.Apr. 29, "
14.May 3, "...Resolution passed that, however many shares a bank stock holder shall possess, he may not have more than 20 votes
15.May 10, "...On use of power of attorney from T. T. in a bank election
16.May 14, "
17. May 17, "
18.May 20, "
19.Jun. 3, "
20.Jun. 8, "
21.Jun. 15, "
22.Jun. 24, "
23.Jun. 29, "
24.Jul. 5, "
25.Jul. 19, "
26.Jul. 22. "
27.Jul. 27, "
28.Jul. 30, "
29.Jul. 30, "
30.Aug. 2, "...Credit entered on T. T.'s dividends from bank stock
F..Folder 4 (cont'd):
✓ 31. Aug. 9, 1784
✓ 32. Aug. 10, "
✓ 33. Aug. 13, "
✓ 34. Aug. 17, "
✓ 35. Aug. 20, "
✓ 36. Aug. 24, "
✓ 37. Sep. 2, "
✓ 38. Sep. 9, "
✓ 39. Sep. 13, "
✓ 40. Sep. 16, "
✓ 41. Sep. 23, 1784
✓ 42. Sep. 23, "
✓ 43. Oct. 1, "
✓ 44. Oct. 7, "
✓ 45. Oct. 12, "
✓ 46. Oct. 13, "
✓ 47. Oct. 14, "
✓ 48. Oct. 14, "
✓ 49. Oct. 21, "
✓ 50. Oct. 30, "
✓ 51. Nov. 8, 1784...Accompanied by a letter from Scott, Pringle and Co. of Madeira, Sep. 23, 1784
✓ 52. Nov. 10, "...Accompanied by a letter of the same date from Antonio Jose da Cunha Basto
✓ 53. Nov. 10, "
✓ 54. Nov. 16, "
✓ 55. Nov. 22, "
✓ 56. Nov. 23, "
✓ 57. Nov. 26, "
✓ 58. Nov. 30, "
✓ 59. Dec. 6, "
✓ 60. Dec. 14, "
✓ 61. Dec. 15, "
✓ 62. Dec. 20, "...You have never sent me your power of attorney to receive your dividends for your 20 shares of bank stock
✓ 63. Dec. 30, "...Power relative to bank stock enclosed
F..Folder 5: Mercantile and Financial Correspondence
Willing, Morris and Swanwick, Philadelphia Merchants, to Tench Tilghman and Company, Merchants, Baltimore, 1785-1786...80 Letters
These letters, frequently in the hand of John Swanwick, but signed most often with the name of the Company, relate to trade in such commodities as tobacco, wheat, corn, flour, teas, aniseed, wine, Teneriffe wine, Madeira wine, Sherry wine, claret, rum, brandy, Jamaica spirits, salt, and merchandise. They bear upon transactions of business houses in England, Holland, France, the Canary Islands, the West Indies; in Philadelphia and Baltimore; in Pennsylvania, Maryland and Virginia. Notes, bills of exchange, notes of hand, complex matters of debit and credit, and the tightness of money markets are in frequent mention. Dividends on the bank stock of T. T. and Co. in the Bank of North America are accounted for. The failure, or the nearness to failure, of mercantile houses is from time to time enlarged upon. Ships, with the names of officers and occasional instructions, come often into mention.
✓ 1..Jan. 5, 1785
✓ 2..Jan. 18, "...Credit given T. T. for dividends received from bank stock, $320
✓ 3..Jan. 22,
✓ 4..Jan. 28, "...Draft in favor of Robert Morris from T. T.
✓ 5..Feb. 10, "...Mr. Kennison of Liverpool in a very bad way.
✓ 6..Feb. 15, "...The matter of the Certificates & Funding Bill, as it is called here, is not yet decided."
✓ 7..Feb. 21, "...The misfortunes of Mr. Middleton."
✓ 8..Mar. 15, "...Bills of exchange.
✓ 9..Mar. 18,
✓ 10.Mar. 21, "..An Act for striking paper money has been passed.
✓ 11.Mar. 30,
✓ 12.Mar. 31, "...Mr. Kennison of Liverpool has failed; W., M. and S. and T. T. likely both to be sufferers.
✓ 13.Apr. 19, "...Power of attorney from the assignees of Richard Middleton; bills of exchange being bought on better terms
✓ 14.Apr. 22,
✓ 15.May 3,
✓ 16.May 12, "...Croxdale's bankruptcy known.
✓ 17.May 16,
✓ 18.May 23,
✓ 19.May 26, "...Walter Roe's claims for damages against W., M. and S.
✓ 20.Jun. 4, "...Hope for favorable action on a bill of exchange
✓ 21.Jun. 7, "...L 1275.9.1 in notes of hand.
✓ 22.Jun. 15, "...L 200 Stg for Mr. Middleton's assignees.
✓ 23.Jun. 21,
✓ 24.Jun. 21,
✓ 25.Jun. 28,
F..Folder 5 (cont'd);
✓ 26. Jul. 4, 1785...Worry about credit.
✓ 27. Jul. 11, "
✓ 28. Jul. 25, "...L 13 of dividends of Isaac Vanbibber on bank stock with W., M. and S.
✓ 29. Jul. 29, "...Some problems in foreign exchange
✓ 30. Aug. 2, "
✓ 31. Aug. 8, "
✓ 32. Aug. 23, "...Hope that "a very shameful" delay may teach foreign houses more experience in trusting their effects in the future into such precarious hands.
✓ 33. Aug. 25, "...Bill of exchange problems.
✓ 34. Aug. 30, "
✓ 35. Aug. 31, "...Disorder in the affairs of Lacaze and Mallet, merchants in France
✓ 36. Sep. 3, "
✓ 37. Sep. 5, "
✓ 38. Sep. 13, "...Our Assembly has passed the Act for Repeal of the Bank, which goes on notwithstanding
✓ 39. Sep. 20, "
✓ 40. Sep. 27, "
✓ 41. Sep. 30, "
✓ 42. Oct. 20, "
✓ 43. Nov. 1, "
✓ 44. Nov. 10, "
✓ 45. Nov. 15, "...Arrangements with T. T. and Co. for recognizing drafts of Thomas Giese of Lisbon
✓ 46. Nov. 21, "
✓ 47. Nov. 29, "
✓ 48. Dec. 1, "
✓ 49. Dec. 2, "
✓ 50. Dec. 6, "
✓ 51. Dec. 8, "...Bill of exchange for 1120 Spanish dollars.
✓ 52. Dec. 13, "
✓ 53. Dec. 19, "
✓ 54. Dec. 27, "
✓ 55. Dec. 27, "
✓ 56. Dec. 28, "
✓ 57. Dec. 29, "
✓ 58. Jan. 3, 1786
✓ 59. Jan. 21, "...Manual Noah has lately failed.
✓ 60. Jan. 26, "
✓ 61. Jan. 28, "
✓ 62. Jan. 30, "
✓ 63. Feb. 12, "
✓ 64. Feb. 18, "...A bill of exchange drawn by our R. Morris on Messrs Laval and Wilfelheim of Paris has been protested and returned
✓ 65. Feb. 20, "
✓ 66. Feb. 28, "
✓ 67. Mar. 13, "
✓ 68. Mar. 17, "
✓ 69. Mar. 20, "...Mr. Brodhurst's bill of exchange, dated Knitsford, Nov. 23, 1785
✓ 70. Mar. 29, "
F..Folder 5 (cont'd):
✓ 71. Mar. 31, 1786
✓ 72. Apr. 10, "....Remittance to the assignees of Mr. Middleton
✓ 73. Apr. 19, "....Bank stock cannot be sold but at a discount of 7½ or 8 per cent
✓ 74. Apr. 25, "
✓ 75. May 6, "...(To Thomas R. Tilghman) Involved problems here connected with Paper Money. (Mr. Tench Tilghman dead)
✓ 76. May 12, "....Remittance to Mr. Middleton's assignees; bills of exchange
✓ 77. May 17, "
✓ 78. Jun. 6, "
✓ 79. Jun. 26, "....Messrs Dumeste & Bentalowe's bill on the Cape has been duly paid for 1250 dollars
✓ 80. Jul. 12, "..."We have paid L 31.10.9 duties on the invoice of the Fly to the debit of T. T. & Co., and we have received L 90 for T. T. & Co.'s dividends of bank stock
Two items purchased from Benjamin Autographs, New York:
1. Letter to Jona Hudson, August 11, 1777
2. An account, 1779
Letters and papers of Robert Morris (1734-1806) relating chiefly to business transactions of the Philadelphia mercantile firm of Willing, Morris and Swanwick. Among the letters are those written to Tench Tilghman & Co., Baltimore, pertaining to shipping and the export and import of commodities such as tobacco, salt, rum, wheat, and flour, 1785-86; letters of Robert Morris to Jonathan Hudson, Baltimore, 1777-1790; letters to the Supreme Executive Council relative to army supplies for the State of Pennsylvania, and others referring to the Bank of North America and to the scarcity of hard money in 1781; letters concerning insurance on cargoes, 1784-85; letters of John Swanwick, 1781-86; and a copy of an agreement among Morris, John Nicholson, and James Greenleaf to buy certain securities of John Warder, 1794.
The names of Whiteside and Isaac Vanbibber, ship owner, are frequently mentioned in the correspondence.
Except where otherwise noted, items listed below and appearing on this microfilm are located in Manuscript Group 134: Willing-Morris-Swanwick Company Records, at the Pennsylvania State Archives.
**Unnumbered Folder: 18 items**
- Stephen Ceronio to Willing, Morris, and Co., May 20, 1776
- Robt Morris to Bingham and Harrison, Feb. 27, 1777
- Robt Morris to Jona Hudson, Aug. 11, 1777
- Stacey Hepburn in Account with Robert Morris, Jan. 12, 1781
- Willing, Morris, and Swanwick to Tench Tilghman, Jan. 5-Apr. 8, 1784 (2 items)
- Willing, Morris, and Swanwick to Tench Tilghman and Co., Aug. 26, 1784-Oct. 30, 1785 (6 items)
- J. Kinsey to Willing, Morris, and Swanwick, Feb. 6, 1786
- Willing, Morris, and Swanwick to Tench Tilghman and Co., Feb. 15-Mar. 6, 1786 (3 items)
- J. Swanwick to John Nicholson, Oct. 3, 1786
- Thos FitzSimons to Willings and Francis, June 10, 1800
**A. Folder 1: 31 items**
- Willing, Morris, and Co. to Jonathan Hudson, Dec. 12, 1774
- Robt Morris to Jonathan Hudson, May 20, 1777-July 30, 1781 (30 items)
**B. Folder 1: 1 item**
- Robt Morris to Dumas, Mar. 24, 17[ ]
**Correspondence with Robert Morris: 6 items**
Formerly located in MG-134: B, Folders 2-4, these items have been transferred (on the basis of internal evidence) to Record Group 27: Records of the Supreme Executive Council.
- Robt Morris to the President of Pennsylvania (President, Supreme Executive Council), July 9-16, 1781 (3 items)
- Robt Morris to the President of Congress, Aug. 28, 1781
- Robt Morris to Joseph Clay, May 17, 1783
- Robert Morris, Memorial to Congress, n. d.
**B. Folder 5: 1 item**
- Robt Morris to James Wilson, Aug. 23, 1789
**B. Folder 6: 1 item**
- Robt Morris to John Ross, Mar. 12, 1790
**C. Folder 1: 1 item**
- Robt Morris, Articles of Agreement, Dec. 31, 1793-Apr. 10, 1794 (1 item)
**E. Folder 1: 1 item**
- AndW Limozin to Willing, Morris, and Swanwick, Dec. 30, 1783
**E. Folder 2: 3 items**
- Clarke and Manwaring to Willing, Morris, and Swanwick, Oct. 21-29, 1785 (2 items)
- W.M.&S. to Clarke and Manwaring, Nov. 1, 1785
**F. Folder 1: 1 item**
- J. Swanwick to Tench Tilghman, Jan. 10, 1786
**F. Folder 2: 1 item**
- J. Swanwick to Tench Tilghman and Co., July 26, 1784
**F. Folder 3: 1 item**
- Willing, Morris, and Swanwick to Tench Tilghman, Feb. 10, 1786
**F. Folder 4: 5 items**
- Willing, Morris, and Swanwick to Tench Tilghman, Feb. 23-Mar. 29, 1784 (3 items)
- J. Swanwick to Tench Tilghman, May 3-10, 1784 (2 items)
**F. Folder 5: 17 items**
- Willing, Morris and Swanwick to Tench Tilghman & Co., Feb. 10, 1785-Apr. 19, 1786 (16 items)
- Willing, Morris, and Swanwick to Thos R. Tilghman, May 6, 1786
ZEROES OF $L$-SERIES IN CHARACTERISTIC $p$
DAVID GOSS
ABSTRACT. In the classical theory of $L$-series, the exact order (of zero) at a trivial zero is easily computed via the functional equation. In the characteristic $p$ theory, it has long been known that a functional equation of classical $s \mapsto 1 - s$ type could not exist. In fact, there exist trivial zeroes whose order of zero is “too high;” we call such trivial zeroes “non-classical.” This class of trivial zeroes was originally studied by Dinesh Thakur [Th2] and, quite recently, by Javier Díaz-Vargas [DV2]. In the examples computed it was found that these non-classical trivial zeroes were correlated with integers having bounded sum of $p$-adic coefficients. In this paper we present a general conjecture along these lines and explain how this conjecture fits in with previous work on the zeroes of such characteristic $p$ functions. In particular, a solution to this conjecture might entail finding the “correct” functional equations in finite characteristic.
1. INTRODUCTION
The debt all mathematicians owe to Euler is obvious and universally known. Euler’s instincts and mathematical taste have had the most profound effect on all subsequent generations of researchers. Nowhere is this more evident than in Euler’s fantastic contributions to number theory and, in his work on number theory (see, for instance, [Du1]), nothing is more fabulous than Euler’s investigation into what we now call the Riemann zeta function $\zeta(s) = \zeta_{\mathbb{Q}}(s) := \sum_{n=1}^{\infty} n^{-s}$. In fact, Euler was the first to appreciate that $\zeta(s)$ had a functional equation, (see [Ay1] for a wonderful discussion of Euler’s insights).
The fact that there are infinitely many primes goes back to Euclid. Euler gave an elegant refinement of this result by establishing that
$$\sum_{p \text{ prime}} \frac{1}{p}$$
diverges thereby giving some indication of how well spaced the primes actually are. Euler’s proof of this fact uses the “Euler product” associated to $\zeta(s)$ as well as the divergence of the harmonic series $\zeta(1) = \sum_{n=1}^{\infty} \frac{1}{n}$.
Euler was also fascinated by the special values of $\zeta(s)$ at all the other integers, both positive and negative, and he devoted much energy to their computation. In fact, it was by studying these values that Euler discovered the above mentioned functional equation. A modern number theorist cannot read these discoveries of Euler without wonder at the sheer beauty and audacity of Euler’s methods. While the rigorous deduction of the functional equation of $\zeta(s)$ had to wait until Riemann and the methods of complex analysis, Euler’s argument makes thrilling use of both convergent and divergent series (see Section 2 below).
In the process of evaluating $\zeta(s)$ where $s$ is an integer, Euler discovered the “trivial zeroes” of $\zeta(s)$ at the negative even integers. These zeroes play a crucial role in Euler’s calculations as they form the numerator of a rational quantity Euler needs to compute and where his methods (and those of everybody else since) fail to compute the denominator in closed form (see Remark 1 and Equation 6). Moreover, the functional equation of $\zeta(s)$ then allows one to show that these trivial zeroes are simple. As is well-known in modern number theory, the functional equation of $\zeta(s)$ is merely the beginning of a vast undertaking whose goal is to show that all arithmetically interesting Dirichlet series ultimately behave in similar ways.

Date: February 1, 2006.
This work is in honor of my mother Barbara Goss Alter.
Euler’s work on $\zeta(s)$ has always been an inspiration in our work on $L$-series in characteristic $p$. We will briefly review the definition of such characteristic $p$ functions in Section 3 where the base ring $A$ of the theory of Drinfeld modules plays the role of the integers $\mathbb{Z}$ in classical theory. In particular, in the prototypical case of $A = \mathbb{F}_r[T]$, $r = p^m$, $p$ prime, (which, like $\mathbb{Z}$, is also Euclidean) one is able to describe a function $\zeta_A(s) := \sum_{n \in A \text{ monic}} n^{-s}$ (see Chapter 8 of [Go2]) which is a very close cousin of $\zeta(s)$. Indeed, using the period of the Carlitz module instead of $2\pi i$, one could readily establish an analog of Euler’s calculation of the values $\zeta(2n)$, $n = 1, 2, \cdots$, for those positive integers $j$ which are “$A$-even” (i.e., $j$ divisible by $r - 1 = \# A^*$). At the negative integers $-j$ ($j \geq 0$) one obtains divergent sums of the form $\sum_{n \in A \text{ monic}} n^j$ which, upon regrouping according to the degree of $n$, become finite. When $j$ is again $A$-even, this sum is 0 giving a “trivial zero.”
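To make the regrouping concrete, here is a small computational sketch (ours, not from the paper; all function names are illustrative). It sums $n^j$ over the monic polynomials of each degree $d$ in $A = \mathbb{F}_3[T]$, so $r = 3$ and $r - 1 = \# A^* = 2$. Each degree block is eventually zero, so the regrouped sum is a polynomial in $T$ over $\mathbb{F}_3$; it vanishes for the $A$-even exponent $j = 2$ but not for the $A$-odd exponent $j = 1$.

```python
# Sketch (not from the paper): regroup the divergent sum
#   zeta_A(-j) = sum over monic n in F_3[T] of n^j
# by the degree d of n.  Polynomials are coefficient lists over F_3,
# lowest degree first.
from itertools import product

r = 3  # working in F_3[T]

def poly_mul(a, b):
    # multiply two coefficient lists over F_r
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for k, y in enumerate(b):
            out[i + k] = (out[i + k] + x * y) % r
    return out

def poly_pow(a, j):
    out = [1]
    for _ in range(j):
        out = poly_mul(out, a)
    return out

def S(d, j):
    # the degree-d block: sum of n^j over the r^d monic n of degree d
    total = [0] * (d * j + 1)
    for coeffs in product(range(r), repeat=d):
        n = list(coeffs) + [1]  # monic of degree d
        for i, c in enumerate(poly_pow(n, j)):
            total[i] = (total[i] + c) % r
    return total

def zeta_at_minus(j, max_deg=6):
    # add the blocks S(d, j); they vanish for d large, so this is finite
    total = [0] * (max_deg * j + 2)
    for d in range(max_deg + 1):
        for i, c in enumerate(S(d, j)):
            total[i] = (total[i] + c) % r
    while len(total) > 1 and total[-1] == 0:
        total.pop()
    return total

print(zeta_at_minus(2))  # [0]  trivial zero at the A-even j = 2
print(zeta_at_minus(1))  # [1]  nonzero at the A-odd j = 1
```

Here only the blocks $d = 0, 1$ contribute for $j \leq 2$, and for $j = 2$ they cancel mod 3.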
Thus, on the surface, the special values of $\zeta_A(s)$ behave very similarly to those of $\zeta_\mathbb{Q}(s)$, and so a first hope would be to follow Euler and guess at “the functional equation” for $\zeta_{\mathbb{F}_r[T]}(s)$. This fails to work for two basic reasons: 1. Obviously the mapping $s \mapsto 1 - s$ is a bijection between even and odd integers; this fails for $A$-odd and $A$-even numbers when $r \neq 3$; 2. Even with $r = 3$ there are two distinct analogs of Bernoulli numbers in the characteristic $p$ theory; this results in considerably more complicated quotients than in classical theory.
This state of affairs persisted until the mid 1990’s when two seemingly independent developments occurred. In the first [Wa1] (see also [DV1]) Daqing Wan computed the Newton polygons associated to $\zeta_A(s)$ when $r = p$ (later extended to all $r$ by B. Poonen and J. Sheats [Sh1]) thereby establishing that the absolute value of a zero uniquely determines it (including the multiplicity of the zero). Therefore the zeroes of $\zeta_{\mathbb{F}_r[T]}$ lie “on the line” $\mathbb{F}_r((1/T))$ and are simple. Wan’s calculations were prompted by the observation by the present author that, in some cases at least, the coefficients of $\zeta_A(s)$ go to 0 exponentially. Such a rate of decay is far faster than is necessitated to simply establish the basic analyticity properties of such a function and is implied by having the degrees of certain “special polynomials” (see Section 3) grow logarithmically. Such logarithmic growth is now known to be a completely general phenomenon [Boc1], [Go5].
In the second development, D. Thakur [Th2] looked into the possibility that, for general $A$, the trivial zeroes had higher order multiplicities; such a phenomenon never happens classically. In other words, the construction of trivial zeroes comes equipped with a “classical” lower-bound on the order of zero. However as Thakur found, there are many instances where this lower-bound is not the exact order; such a trivial zero is called “non-classical.” It is totally remarkable, and very important for us, that Thakur’s results on non-classical trivial zeroes involve having the sum of the $p$-adic digits of the trivial zero be bounded. Thakur’s computations have recently been extended by Javier Diaz-Vargas ([DV2]). These basic results will be recalled in Section 4.
The results of Wan, Sheats, etc., are clearly a type of “Riemann hypothesis” (see [Go3], [Go4]) and one wants to be able to put them into a general conjecture about the zeroes
in full generality for all motives ($\tau$-sheaves, etc.) and interpolations at all places of the quotient field $k$ of $A$. Our first attempt to do so [Go3] simply ignored the trivial zeroes (as they are ignored classically in the Riemann hypothesis). As was reported in [Go4], this conjecture was wrong precisely because of the impact of higher order trivial zeroes! More precisely, using the topology on the domain space of our $L$-series, one is able to use higher order trivial zeroes to inductively construct $p$-adic integers where the conjecture is false. The construction produces such $p$-adic integers by building up their canonical $p$-adic expansion out of the expansions of trivial zeroes with higher orders.
It is precisely here that the computations of Thakur and Diaz-Vargas now fit. Indeed, their computations lead naturally to the conjecture (Conjecture 1) that those integers $j$ for which the trivial zero at $-j$ is non-classical have bounded sum of their $p$-adic digits. Presenting this conjecture is the goal of this work and we discuss the conjecture both at $\infty$ (i.e., the analog of the complex field) and at the interpolations of our functions at finite primes. If this conjecture is true, then it places a limit on how one can construct counter-examples to our original conjecture. Indeed, it implies that we need not worry about non-classical trivial zeroes alone leading to counter-examples as they can have no effect on our construction once the sum of the $p$-adic digits of the integers being used becomes sufficiently large (see the discussion in Section 5; in particular, Example 4). We view this as positive evidence for the conjecture.
As the reader may see, the trivial zeroes play a special role in both the classical and characteristic $p$ theory. But what is the right general conjecture on the zeroes in characteristic $p$? As of now, we do not know. However, since the functional equation classically is what allows one to compute the order of a trivial zero, it seems to us quite reasonable that a proof of the above conjecture in our case will generate the correct ideas and techniques.
It is clear that this work owes a great deal to Euler. It should also be clear that it owes a great deal to the mathematical taste and insight of Dinesh Thakur and Javier Diaz-Vargas. It is moreover my pleasure to thank Thakur and J.-P. Serre for helpful comments.
2. Euler’s discovery of the functional equation for $\zeta_{\mathbb{Q}}(s)$
Our treatment here follows that of [Ay1]; the reader is referred there for references and any elided details. Let $\zeta(s)$ be the Riemann zeta function.
After many years of work, Euler succeeded in computing the values $\zeta(2n)$, $n = 1, 2 \cdots$ in terms of Bernoulli numbers. Euler then turned his attention to the values $\zeta(s)$ at negative integers. Of course, Euler did not have analytic continuation to work with and relied on his instincts for beauty; nevertheless, he got it right! Euler begins with the very well known expansion
$$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots + x^n + \cdots.$$ \hspace{1cm} (1)
Clearly this expansion is only valid when $|x| < 1$, but that does not stop Euler. Upon putting $x = -1$, he deduces
$$1/2 = 1 - 1 + 1 - 1 + 1 \cdots.$$ \hspace{1cm} (2)
To the modern eye, this is a nonsensical statement about divergent series; however following in Euler’s bold steps, we won’t let that stop us! Indeed, upon applying $x(d/dx)$ to Equation 1 and plugging in $x = -1$, we obtain
$$1/4 = 1 - 2 + 3 - 4 + 5 \cdots.$$ \hspace{1cm} (3)
Applying the process again, Euler finds the “trivial zero”
\[ 0 = 1 - 2^2 + 3^2 - \cdots, \] \hspace{1cm} (4)
and so on. Obviously, Euler is not working with the values at the negative integers of \( \zeta(s) \) but rather the function
\[ \zeta^*(s) := (1 - 2^{1-s})\zeta(s) = \sum_{n=1}^{\infty} (-1)^{n-1}/n^s, \] \hspace{1cm} (5)
however this is of little consequence and the zeta-values Euler obtains are exactly those rigorously obtained much later by Riemann. (In [Ay1], our \( \zeta^*(s) \) is denoted \( \phi(s) \).)
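Euler’s formal evaluations can be read rigorously as Abel limits: substitute $x = -1$ into the closed forms obtained from Equation 1 by repeatedly applying $x(d/dx)$. A quick sketch (ours, purely illustrative) using exact rational arithmetic:

```python
# Euler's values in Equations 2-4 as Abel limits: evaluate at x = -1 the
# closed forms of the geometric series and its x(d/dx)-derivatives,
#   sum x^n = 1/(1-x),  sum n x^n = x/(1-x)^2,  sum n^2 x^n = x(1+x)/(1-x)^3.
from fractions import Fraction

def f(x): return 1 / (1 - x)                 # Equation 1
def g(x): return x / (1 - x) ** 2            # x(d/dx) applied once
def h(x): return x * (1 + x) / (1 - x) ** 3  # x(d/dx) applied twice

x = Fraction(-1)
print(f(x))   # 1/2 :  1 - 1 + 1 - ...          (Equation 2)
print(-g(x))  # 1/4 :  1 - 2 + 3 - 4 + ...      (Equation 3)
print(-h(x))  # 0   :  1 - 2^2 + 3^2 - ...      (Equation 4)

# sanity check of the closed form against the actual series inside |x| < 1
xs = Fraction(-9, 10)
assert abs(float(g(xs) - sum(n * xs**n for n in range(1, 300)))) < 1e-8
```

The signs flip because Euler’s alternating sums are $-\sum n^k x^n$ evaluated at $x = -1$.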
Nine years later, in [Eu1] (N.B.: while [Eu1] was published in 1768, it was written in 1749) Euler notices, at least for small \( n \geq 2 \), that his calculations imply
\[ \frac{\zeta^*(1-n)}{\zeta^*(n)} = \begin{cases}
\dfrac{(-1)^{(n/2)+1}(2^n-1)(n-1)!}{(2^{n-1}-1)\pi^n} & \text{if } n \text{ is even} \\
0 & \text{if } n \text{ is odd}.
\end{cases} \] \hspace{1cm} (6)
Upon rewriting Equation 6 using his gamma function \( \Gamma(s) \) and the cosine, Euler then “hazards” to conjecture
\[ \frac{\zeta^*(1-s)}{\zeta^*(s)} = \frac{-\Gamma(s)(2^s-1)\cos(\pi s/2)}{(2^{s-1}-1)\pi^s}, \] \hspace{1cm} (7)
which translates easily into the functional equation of \( \zeta(s) \)!
**Remark 1.** Note the important role played by the trivial zeroes in Equation 6 in that they render harmless Euler’s inability to calculate explicitly \( \zeta^*(n) \), or \( \zeta(n) \), at odd integers \( > 1 \).
But there is still more! The value \( \zeta^*(1) \) is precisely the alternating harmonic series
\[ 1 - 1/2 + 1/3 - 1/4 \cdots \]
which Euler knows converges to \( \log 2 \); so his calculations tell him that evaluating the left hand side of Equation 7 at \( s = 1 \) gives \( \frac{1}{2\log 2} \). Euler then takes the limit on the right hand side and obtains the same value! To Euler, this is “strong justification” for his conjecture which Riemann much later proved. (This quote is from Euler’s paper [Eu1], the translation is found at the bottom of page 1083 of [Ay1].)
### 3. A quick introduction to L-series in characteristic \( p \)
We now briefly go over the basic definitions of characteristic \( p \) \( L \)-series. We will present the general definitions but the reader will lose very little by always assuming \( A = \mathbb{F}_r[T] \) in what follows.
Let \( k \) be an arbitrary global function field of transcendence degree 1 and full field of constants \( \mathbb{F}_r \). Let \( \infty \) be a fixed place of \( k \) of degree \( d_\infty \) over \( \mathbb{F}_r \) and let \( |\cdot|_\infty \) be the associated absolute value. Let \( A \) be the Dedekind domain of those functions regular outside \( \infty \). It is easy to see that the unit group of \( A \) is the set of non-zero constants and that one has
\[ h_A = d_\infty \cdot h_k, \] \hspace{1cm} (8)
where \( h_A \) is the class number of \( A \) and \( h_k \) is the class number of \( k \).
We let \( K \) be the completion of \( k \) at \( \infty \) and \( \mathbb{F}_\infty \simeq \mathbb{F}_{r^{d_\infty}} \subset K \) be the associated finite field. We let \( \pi \in K \) be a uniformizing element so that every non-zero element \( \alpha \) of \( K \) may be written
\[ \alpha = \zeta_\alpha \cdot \pi^{n_\alpha} \cdot \langle \alpha \rangle \] \hspace{1cm} (9)
where \( \zeta_\alpha \in \mathbb{F}_\infty^* \), \( n_\alpha \in \mathbb{Z} \) and \( \langle \alpha \rangle \in U_1(K) = \{ x \in K \mid |x - 1|_\infty < 1 \} \) has absolute value 1. The elements \( \zeta_\alpha \) and \( \langle \alpha \rangle \) depend on our choice of \( \pi \). The element \( \zeta_\alpha \) is called the “sign of \( \alpha \)” and denoted \( \text{sgn}(\alpha) \).
**Example 1.** When \( k = \mathbb{F}_r(T) \) and \( A = \mathbb{F}_r[T] \), the simplest choice is \( \pi = 1/T \) so that for \( n \in A \) monic of degree \( d \), one has
\[ n = \pi^{-d} \langle n \rangle = T^d \langle n \rangle, \] \hspace{1cm} (10)
with \( \langle n \rangle = n/T^d \equiv 1 \pmod{\pi} \).
In general, an element \( \alpha \in K^* \) is said to be **monic** or **positive** if and only if \( \text{sgn}(\alpha) = \zeta_\alpha = 1 \), which clearly depends on the choice of \( \pi \). Notice that the positive elements clearly form a subgroup of \( K^* \).
Let \( X \) be the smooth projective curve associated to \( k \). For any fractional ideal \( I \) of \( A \), we let \( \deg_k(I) \) be the degree over \( \mathbb{F}_r \) of the divisor associated to \( I \) on the affine curve \( X - \infty \). For \( \alpha \in k^* \), one sets \( \deg_k(\alpha) = \deg_k((\alpha)) \) where \( (\alpha) \) is the associated fractional ideal; this clearly gives the correct degree of a polynomial in \( \mathbb{F}_r[T] \).
Let \( \mathbb{C}_\infty \) be the completion of a fixed algebraic closure \( \bar{K} \) of \( K \) equipped with the canonical extension of the normalized absolute value on \( K \).
**Definition 1.** Set \( S_\infty := \mathbb{C}_\infty^* \times \mathbb{Z}_p \).
The space \( S_\infty \) plays the role of the complex numbers in our theory in that it is the domain of “\( n^s \).” Indeed, let \( s = (x, y) \in S_\infty \) and let \( \alpha \in k \) be positive. The element \( u = \langle \alpha \rangle - 1 \) has absolute value \( < 1 \); thus \( \langle \alpha \rangle^y = (1 + u)^y \) is easily defined and computed via the binomial theorem.
**Definition 2.** We set
\[ \alpha^s := x^{\deg_k(\alpha)} \langle \alpha \rangle^y. \] \hspace{1cm} (11)
Clearly \( S_\infty \) is a group whose operation is written additively. Suppose that \( j \in \mathbb{Z} \) and \( \alpha^j \) is defined in the usual sense of the canonical \( \mathbb{Z} \)-action on the multiplicative group. Let \( \pi_* \in \mathbb{C}_\infty^* \) be a fixed \( d_\infty \)-th root of \( \pi \). Set \( s_j := (\pi_*^{-j}, j) \in S_\infty \). One checks easily that Definition 2 gives \( \alpha^{s_j} = \alpha^j \). When there is no chance of confusion, we denote \( s_j \) simply by “\( j \).”
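As a worked instance of Definition 2 (ours, in the setting of Example 1): for $A = \mathbb{F}_3[T]$ one has $d_\infty = 1$, so one may take $\pi_* = \pi = 1/T$ and $s_j = (T^j, j)$. For the monic $n = T^2 + T + 2$ of degree 2,
\[ n^{s_j} = (T^j)^{\deg_k(n)} \langle n \rangle^j = T^{2j} \left( 1 + T^{-1} + 2T^{-2} \right)^j = n^j, \]
recovering the compatibility $\alpha^{s_j} = \alpha^j$ noted above.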
In the basic case \( A = \mathbb{F}_r[T] \) one can now proceed to define \( L \)-series in complete generality. However, in general there are non-principal ideals. Fortunately there is a canonical and simple procedure to extend Definition 2 to them as follows. Let \( \mathcal{I} \) be the group of fractional ideals of the Dedekind domain \( A \) and let \( \mathcal{P} \subseteq \mathcal{I} \) be the subgroup of principal ideals. Let \( \mathcal{P}^+ \subseteq \mathcal{P} \) be the subgroup of principal ideals which have positive generators. It is a standard fact that \( \mathcal{I}/\mathcal{P}^+ \) is a finite abelian group. The association
\[ \mathfrak{h} \in \mathcal{P}^+ \mapsto \langle \mathfrak{h} \rangle := \langle \lambda \rangle, \] \hspace{1cm} (12)
where \( \lambda \) is the unique positive generator of \( \mathfrak{h} \), is obviously a homomorphism from \( \mathcal{P}^+ \) to \( U_1(K) \subset \mathbb{C}_\infty^* \).
Let \( U_1(\mathbb{C}_\infty) \subset \mathbb{C}_\infty^* \) be the group of 1-units defined in the obvious fashion. The binomial theorem again shows that \( U_1(\mathbb{C}_\infty) \) is a \( \mathbb{Z}_p \)-module. However, it is also closed under the unique operation of taking \( p \)-th roots; as such \( U_1(\mathbb{C}_\infty) \) is a \( \mathbb{Q}_p \)-vector space.
**Lemma 1.** The mapping \( \mathcal{P}^+ \to U_1(\mathbb{C}_\infty) \) given by \( \mathfrak{h} \mapsto \langle \mathfrak{h} \rangle \) has a unique extension to \( \mathcal{I} \) (which we also denote by \( \langle ? \rangle \)).
**Proof.** As $U_1(\mathbb{C}_\infty)$ is a $\mathbb{Q}_p$-vector space, it is a divisible group; thus the extension follows by general theory. The uniqueness then follows by the finitude of $\mathcal{I}/\mathcal{P}^+$. □
If $s = (x, y) \in S_\infty$ and $I \in \mathcal{I}$ is a fractional ideal as above, we now set
$$I^s := x^{\deg_k(I)} \langle I \rangle^y.$$ \hspace{1cm} (13)
Thus if $\alpha \in k$ is positive one sees that $(\alpha)^s$ agrees with $\alpha^s$ as in Equation 11.
### 3.1. Definition of $L$-series.
Let $G := \text{Gal}(k^\text{sep}/k)$ be the absolute Galois group of $k$ where $k^\text{sep}$ is a fixed separable closure of $k$. Let $\overline{\mathbb{Q}}_p$ be a fixed algebraic closure of $\mathbb{Q}_p$ and let $\chi : G \to \overline{\mathbb{Q}}_p^*$ be a homomorphism of Galois type; i.e., $\chi$ factors through the Galois group $G_1$ of a finite abelian extension $k_1$ of $k$. Obviously the image of $\chi$ consists of roots of unity and viewing these as sitting in $\mathbb{C}$ (via some injection) allows us to think of $\chi$ as also being complex valued. In particular, for each place $w$ of $k$ (including $\infty$) one attaches a local factor as follows: 1. The place $w$ is ramified for $\chi$, in which case the factor is simply 1; 2. The place $w$ is unramified, in which case the factor is $(1 - \chi(F_w)t)$ where $F_w$ is the (arithmetic) Frobenius element at $w$.
Let $R_p \subset \overline{\mathbb{Q}}_p$ be the ring of integers with maximal ideal $M_p$. We fix an injection of $R_p/M_p$ into $\mathbb{C}_\infty$ and so we now obtain local factors in $\mathbb{C}_\infty[t]$ for which we will use the same notation $(1 - \chi(F_w)t)$ etc.
**Remark 2.** The reader may be wondering why we simply did not use the obvious reduction $\bar{\chi} : G \to (R_p/M_p)^*$ to begin with. The answer is that there are no non-trivial $p$-power roots of unity in characteristic $p$ and so one is hard pressed to get the local factors right. For instance, in the case $G_1$ is a $p$-group, the reduced homomorphism $\bar{\chi}$ is the trivial character. If, however, we were simply to use the trivial character to obtain local factors, we would be off at the ramified primes. Thus it is far better to use the characteristic 0 factors in the above fashion.
Let $s \in S_\infty$ and $\chi$ as above.
**Definition 3.** We set
$$L(\chi, s) := \prod_{v \in \text{Spec}(A) \atop v \text{ unramified}} (1 - \chi(F_v)v^{-s})^{-1}. \hspace{1cm} (14)$$
As in Section 8.9 of [Go2], it is known that $L(\chi, s)$ converges on a “half-plane” of $S_\infty$ and can be analytically extended to an “entire” function on $S_\infty$. Thus, one can view $L(\chi, s)$ as a continuous 1-parameter family (with parameter $y \in \mathbb{Z}_p$) of entire power series in $x^{-1}$.
While we have only discussed abelian $\chi$ here for simplicity of exposition, it is clear how to proceed in the non-abelian case.
### 3.2. Special polynomials.
Let $j$ now be a non-negative integer with $\chi$ as above, and let $s = (x, y) \in S_\infty$.
**Definition 4.** We set
$$z_L(\chi, x, -j) := L(\chi, \pi_*^j x, -j). \hspace{1cm} (15)$$
It is known that $z_L(\chi, x, -j)$ is a polynomial in $x^{-1}$ and all such polynomials are called the special polynomials of $L(\chi, s)$. By unraveling the definition of $z_L(\chi, x, -j)$, one sees that the coefficients of this polynomial lie in
$$\mathcal{O} := \mathcal{O}_V[\zeta]$$ \hspace{1cm} (16)
where $\zeta$ is a primitive $n$-th root of unity and $n$ is the order of the reduction $\bar{\chi}$ of $\chi$ (as a finite character) and $\mathcal{O}_V \subset \mathbb{C}_\infty$ is the ring of integers in the value field generated by the elements $\{I^{s_1}\}$ (see Subsection 8.2 of [Go2]). As mentioned at the end of [Go5], elementary estimates imply that the degree (in $x^{-1}$) of $z_L(\chi, x, -j)$ grows logarithmically in $j$.
**Remark 3.** For $L$-series associated to “$\tau$-sheaves” etc., the logarithmic growth of the special polynomials is due to Böckle [Boc1]. For arbitrary $L$-series associated to representations of Galois type (not necessarily abelian) one can use Böckle’s results and the fact that the Artin Conjecture is true [Go1] for these functions to deduce logarithmic growth.
### 3.3. Trivial zeroes
The classical, characteristic 0 valued, $L$-series associated to $\chi$ also attaches a local factor to the prime $\infty$ if it is unramified for $\chi$. In the case that $\chi$ is non-principal, one knows that this classical $L$-series is entire; i.e., is a polynomial in $u = r^{-s}$. These infinite factors are missing in the definition of our $L$-series and thereby equip them with trivial zeroes as we shall explain here.
Let $\psi$ be a Hayes-module (i.e., a sign normalized rank one Drinfeld module; see [Hay1] and Section 7 of [Go2]) associated to a twisting of sgn. The module $\psi$ arises analytically from a rank one lattice generated by an element $\xi \in \mathbb{C}_\infty$; one knows that $\xi^{r^{d_\infty}-1} \in K^*$. The extension $K_1 := K(\xi)/K$ is a totally ramified abelian extension with Galois group $g_\infty$ isomorphic to $\mathbb{F}_\infty^\times$ via the action on $\xi$. This local extension is also obtained by adjoining to $K$ any non-trivial division point for $\psi$.
Let $W \subset \overline{\mathbb{Q}}_p$ be the Witt ring of $\mathbb{F}_\infty$ and let $t: g_\infty \to \mathbb{F}_\infty^\times$ be the homomorphism given by the action of $g_\infty$ on $\xi$ and let $T_\infty: g_\infty \to W^\times$ be the composition of $t$ and the Teichmüller character of $\mathbb{F}_\infty^\times$.
We view $T_\infty$ as being extended in the obvious way to a character of the absolute Galois group $G_\infty$ of $K^{\text{sep}}/K$ where $K^{\text{sep}} \subset \mathbb{C}_\infty$ is the separable closure.
Let $\chi_\infty$ be the local factor at $\infty$ associated to $\chi$ which we also view as a character on $G_\infty$. Assume that for some non-negative $j$ the character $\chi_\infty \cdot T_\infty^j$ is unramified. Then, as in Theorem 8.12.5 of [Go2], a double congruence implies that
$$z_L(\chi, x, -j)/(1 - (\chi_\infty \cdot T_\infty^j)(F_\infty)x^{-d_\infty}) \in \mathcal{O}[x^{-1}],$$ \hspace{1cm} (17)
where $\mathcal{O}$ is given in Equation 16. Thus zeroes of $(1 - (\chi_\infty \cdot T_\infty^j)(F_\infty)x^{-d_\infty})$ clearly give rise to zeroes of the original $L$-series $L(\chi, s)$.
**Definition 5.** The zeroes of $L(\chi, s)$ arising from the factor $(1 - (\chi_\infty \cdot T_\infty^j)(F_\infty)x^{-d_\infty})$ are called trivial zeroes for $\chi$ at $-j$.
**Remark 4.** It is clear how to generalize the above construction of trivial zeroes to arbitrary representations of Galois type. For general $L$-series arising from Drinfeld modules, $t$-modules, $\tau$-sheaves etc., one proceeds cohomologically as in [Boc1].
**Remark 5.** If $\chi_\infty \cdot T_\infty^j$ is ramified, then the local factor is 1 and so no non-trivial information is deduced. In this case, it is reasonable to expect that $z_L(\chi, x, -j)$ has no zeroes of absolute value 1.
**Definition 6.** Let $t$ be a trivial zero for $L(\chi, s)$ at $-j$; so that $\pi_*^j t$ is a root of
$$ (1 - (\chi_\infty \cdot T_\infty^j)(F_\infty)x^{-d_\infty}) $$
of order $v_0(t)$. Let $v_1(t)$ be the order of $t$ as a zero of $z_L(\chi, x, -j)$. By (17) we know that $v_0(t) \leq v_1(t)$. If this inequality is strict, then we say that $t$ is *non-classical*. The set of all non-negative $j$ such that $L(\chi, s)$ has a non-classical trivial zero at $-j$ will be called *the non-classical set for* $L(\chi, s)$.
Let $\bar{\chi}$ be the reduction of $\chi$ considered as a homomorphism into $\mathbb{C}_\infty^\times$ via our fixed embedding of $R_p/M_p$ into $\mathbb{C}_\infty$. Let $\mathbb{F}_\chi$ be the finite field generated by the values of $\bar{\chi}$ over the base field $\mathbb{F}_p$; obviously $\mathbb{F}_\chi$ is finite and we let $g_\chi := p^{e(\chi)}$ be its order.
The next proposition is implicit in [Th2] and [DV2].
**Proposition 1.** The non-classical set for $L(\chi, s)$ is closed under multiplication by $g_\chi$.
**Proof.** This follows upon applying the $g_\chi$-th power mapping to the coefficients. □
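The proposition is consistent with the digit-sum phenomenon discussed in the introduction and in Section 4: multiplying $j$ by $g_\chi = p^{e(\chi)}$ merely shifts the $p$-adic expansion of $j$, so the digit sum $l_p(j)$ (defined in Section 4) is unchanged, and any set defined by a bound on $l_p$ is closed under this multiplication. A trivial check (ours, purely illustrative):

```python
# Multiplying j by p^e shifts the base-p digits of j, so the digit sum
# l_p(j) is invariant; hence a set defined by a bound on l_p(j) is closed
# under multiplication by g_chi = p^{e(chi)}.
def digit_sum(j, p):
    s = 0
    while j > 0:
        s += j % p
        j //= p
    return s

p, e = 2, 3
assert all(digit_sum(j, p) == digit_sum(j * p**e, p) for j in range(1, 200))
print("l_p(j) = l_p(p^e * j) verified for j < 200")
```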
### 3.4. $v$-adic theory and $v$-adic trivial zeroes.
Let $v$ be a finite prime of $A$ of degree $d_v$ over $\mathbb{F}_r$. Let $k_v$ be the local field at $v$ with fixed algebraic closure $\bar{k}_v$, equipped with the canonical topology, and let $\mathbb{C}_v$ be the associated complete field. As before let $\mathbf{V}$ be the value field and $n$ the order of the reduction of $\chi$. Fix an embedding $\sigma : \mathbf{V}[\zeta] \to \mathbb{C}_v$ where $\zeta$ is a primitive $n$-th root.
Via $\sigma$, the functions $\{z_L(\chi, x, -j)\}_{j=0}^\infty$ can be thought of as lying in $\mathbb{C}_v[x^{-d_\infty}]$. Upon setting $x = x_\sigma \in \mathbb{C}_v$, they interpolate to a continuous function $L_{\sigma, v}(\chi, x_\sigma, s_\sigma)$ where $(x_\sigma, s_\sigma) \in \mathbb{C}_v^\times \times S_{\sigma, v}$; here $S_{\sigma, v} = \mathbb{Z}_p \times \mathbb{Z}/(r^t - 1) = \varprojlim_n \mathbb{Z}/p^n(r^t - 1)$ is the projective limit over $n$ of $\mathbb{Z}/p^n(r^t - 1)$ and where $r^t - 1$ is the number of roots of unity in the extension of $k_v$ generated by the image of $\sigma$.
As in the previous subsection, let $j$ be chosen so that $\chi_\infty \cdot T_\infty^j$ is unramified at $\infty$. The local factor $(1 - (\chi_\infty \cdot T_\infty^j)(F_\infty)x^{-d_\infty})$ obviously has roots of unity for its zeroes. Thus, these roots have bounded $v$-adic absolute value (as of course their absolute value is 1). Their effect on the Newton Polygons for $L_{\sigma, v}(\chi, x_\sigma, s_\sigma)$ is thus very limited and so can essentially be ignored.
However, the process of interpolating $v$-adically precisely *kills* the Euler factor at $v$ in the following manner. Let $\sigma$ be extended to the natural action on polynomials via its action on the coefficients.
**Proposition 2.** One has
$$ L_{\sigma, v}(\chi, x_\sigma, -j) = \sigma \left((1 - \chi(F_v)v^jx_\sigma^{-d_v})z_L(\chi, x_\sigma, -j)\right). \quad (18) $$
**Proof.** This follows immediately as $\lim_{i \to \infty} v^i = 0$ in $\mathbb{C}_v$. □
We are thus led to a very interesting class of zeroes for $L_{\sigma, v}(\chi, x_\sigma, s_\sigma)$.
**Definition 7.** The zeroes of $1 - \sigma(\chi(F_v)v^j)x_\sigma^{-d_v}$ in $\mathbb{C}_v$ are called the *$v$-adic trivial zeroes of* $L_{\sigma, v}(\chi, x_\sigma, s_\sigma)$ at $-j \in S_{\sigma, v}$.
The $v$-adic trivial zeroes are remarkably similar to their counterparts in $S_\infty$. The definition of *non-classical $v$-adic trivial zeroes* is now obvious as is the definition of the *non-classical set for* $L_{\sigma, v}(\chi, x_\sigma, s_\sigma)$.
**Remark 6.** Actually, our whole construction of $v$-adic trivial zeroes is non-classical; indeed we know of no analog of our construction of $v$-adic trivial zeroes in the theory of $p$-adic $L$-series. However, we will continue to use “non-classical” to refer to those trivial zeroes whose order is higher than expected.
The next result is the obvious analog of Proposition 1 and has the same proof.
**Proposition 3.** The non-classical set for $L_{\sigma,v}(\chi,x_\sigma,s_\sigma)$ is closed under multiplication by $g_\chi$.
4. The calculations of Thakur and Diaz-Vargas
Let $\chi = \chi_0$ be the trivial character with constant value 1.
**Definition 8.** We call the function $L(\chi,s)$ the zeta function of $A$ and denote it $\zeta_A(s)$. The $v$-adic interpolations of $\zeta_A(s)$ are denoted $\zeta_{\sigma,v}(x_\sigma,s_\sigma)$.
Clearly one has $\zeta_A(s) = \sum_I I^{-s}$ where $I$ runs over the ideals of $A$.
The trivial zeroes of $\zeta_A(s)$ then occur at the negative integers $-j \in S_\infty$ where $j \equiv 0 \pmod{r^{d_\infty} - 1}$; indeed the local factor at $\infty$ in this case is simply $1 - x^{-d_\infty}$. In the case $A = \mathbb{F}_r[T]$, one can show that these zeroes are simple; thus the non-classical set for $\zeta_{\mathbb{F}_r[T]}(s)$ is empty.
**Remark 7.** Let $A = \mathbb{F}_r[T]$ and let $s = (x,y) \in S_\infty$. In [Wa1], [Sh1] it is shown that for fixed $y$, a zero of $\zeta_A(x,y)$ has multiplicity 1 and is uniquely determined by its absolute value; thus all zeroes are simple and must lie in $K$. Suppose now that $\theta$ is a classical Dirichlet character with classical (complex) $L$-series, $L(\theta,s)$. Let
$$l(\theta,t) := L(\theta,1/2 + it).$$
At the end of [Go4], it is shown how the classical functional equation, combined with the action of complex conjugation, implies that the expansion about $t = 0$ of $l(\theta,t)$ is, up to possible multiplication by a non-trivial constant, a real power series. Of course the Riemann hypothesis is equivalent to having the zeroes of $l(\theta,t)$ be real; so classical theory looks quite similar to what was established for $\zeta_A(s)$.
Now let $r = p = 2$.
**Example 2.** Let $A := \mathbb{F}_2[T_1,T_2]/(T_1^2 + T_1 + T_2^3 + T_2 + 1)$. In this case the quotient field $k$ has genus 1.

**Example 3.** Let $A := \mathbb{F}_2[T_1,T_2]/(T_1^2 + T_1 + T_2^5 + T_2^3 + 1)$. Here the quotient field $k$ has genus 2.
In both cases, one finds that $A$ has class number 1 which implies that $d_\infty = 1$ also. In both cases, $\zeta_A(s)$ will have trivial zeroes at all negative integers (as $r - 1 = 1$).
Let $j$ be a positive integer and let $l_p(j)$ be the sum of its $p$-adic digits. After some hand and machine calculations, Thakur [Th2] established the following result.
**Theorem 1.** Let $A$ be as in Example 2 or Example 3. Then the order of vanishing of $\zeta_A(s)$ at $s = -j$ is 2 if $l_2(j) \leq g$, where $g$ is the genus of the quotient field $k$.
Thus, in particular, the non-classical set for $\zeta_A(s)$ is non-empty.
For very small $j$, Thakur also shows the converse to his result. Thus, for instance, in the case of Example 3, the trivial zero at $-7$ is simple. His paper contains other such examples.
In Theorem 5.4.9 of [Th3], Thakur establishes a partial converse to Theorem 1 in the case of Example 2. More precisely he shows in this case that the trivial zero at $s = -j$ is simple if $l_2(j) = 2$ or $j \equiv 0, 3, 5$ or $6 \pmod{7}$.
In [DV2], Javier Diaz-Vargas extends these calculations to more general $A$ where the degree of $\infty$ is 1 but where $A$ has non-trivial class number and so our exponentiation of non-trivial ideals is used. The same general phenomenon appears to hold.
5. A general conjecture
Let $w$ be any place of $k$, where $k$ is now completely general, and consider the non-classical set $N_w$ for the interpolation of $L(\chi, s)$ at $w$ (so that if $w = \infty$, this interpolation is $L(\chi, s)$ itself). We know from Propositions 1 and 3 that $N_w$ is closed under multiplication by $q_\chi$. Extrapolating vastly from the calculations presented in the previous section, we are led to the following conjecture.
**Conjecture 1.** The non-classical set $N_w$ consists of elements with *bounded* sum of $p$-adic digits.
Let $C \geq 0$ be some bound. Then, of course, $\{j \mid l_p(j) \leq C\}$ is a particularly simple set of positive integers which is closed under multiplication by any power of $p$.
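The closure property is elementary to check directly: multiplying $j$ by $p$ shifts its base-$p$ digits one place, leaving the digit sum unchanged. A quick numerical sketch (plain Python, for illustration only):

```python
def digit_sum(j, p):
    """Sum of the base-p digits of the nonnegative integer j."""
    s = 0
    while j > 0:
        s += j % p
        j //= p
    return s

p, C = 2, 3
# A truncated piece of the set {j : l_p(j) <= C} from the text.
S = [j for j in range(1, 200) if digit_sum(j, p) <= C]

# Multiplication by p (and hence by any power of p, such as q_chi)
# shifts the base-p digits, so it preserves the digit sum.
assert all(digit_sum(p * j, p) == digit_sum(j, p) for j in S)
```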
Suppose that one can find infinitely many $-j$ so that the factor $1 - (\chi_\infty \cdot T^j_\infty)(F_\infty)x^{-d_\infty}$ has many zeroes of the same absolute value (which will clearly happen if $d_\infty > 1$). Then, in [Go4], we showed how to construct elements $\alpha \in \mathbb{Z}_p$ so that the power series arising from $L(\chi, x, \alpha)$ had the strange property that there were infinitely many slopes (of the associated Newton Polygon) of length greater than 1. One does this by building up $\alpha$ inductively via its $p$-adic digits; in particular, $\alpha$ is the limit of integers $\{\alpha_i\}$ with $l_p(\alpha_i)$ increasing.
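The slopes in question are read off the lower convex hull of the points $(i, v(a_i))$ attached to a series $\sum a_i x^i$, and the horizontal length of a segment counts the zeroes with that valuation. A generic sketch of the computation (Python, illustrative only; not tied to the specific series of [Go4]):

```python
def newton_polygon_slopes(valuations):
    """Slopes and horizontal lengths of the lower convex hull of the
    points (i, v(a_i)); None marks a zero coefficient (point omitted)."""
    pts = [(i, v) for i, v in enumerate(valuations) if v is not None]
    hull = []
    for p in pts:
        # Pop points lying on or above the edge to p, keeping the lower hull.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (y2 - y1) * (p[0] - x1) >= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return [((y2 - y1) / (x2 - x1), x2 - x1)
            for (x1, y1), (x2, y2) in zip(hull, hull[1:])]

# A segment of horizontal length 2: the "strange" behavior in the text
# corresponds to infinitely many slopes of length greater than 1.
assert newton_polygon_slopes([0, 1, 1]) == [(0.5, 2)]
```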
If Conjecture 1 is true, then such counter-examples **cannot** be created out of non-classical trivial zeroes alone. This is illustrated by the next example.
**Example 4.** Let $A$ be as in Example 2 or 3 and suppose that Conjecture 1 is true in the sense that Thakur’s result Theorem 1 is both necessary and sufficient. Then one cannot use the construction in [Go4] to obtain strange $\alpha$ as above. Indeed, once our approximations have a sufficiently large sum of $p$-adic digits, the effect of the non-classical trivial zeroes is negligible. In fact, let $\alpha_i$ be an approximation to $\alpha$ with $l_p(\alpha_i) > 2$. Then the trivial zero at $-\alpha_i$ must now be simple and so **cannot** contribute a slope to $\alpha$ of length bigger than 1.
We view Example 4 as being some “justification” for our conjecture in the Eulerian sense of Section 2.
Conjecture 1 is very general but clearly not the final word. One would like to know the exact structure of the non-classical sets as well as the exact orders of the associated trivial zeroes. Moreover, one would like to know how the bounds change as the place $w$ varies, etc. Still Conjecture 1 is a precise statement that indicates a much deeper theory of the zeroes.
Finally, in this paper we have worked with representations of Galois type. It is reasonable to ask for a similar conjecture for arbitrary motives etc. As of this writing, we do not know how to formulate such a conjecture. However, the following simple example indicates some possible structure.
**Example 5.** Let $A$ be as in Example 2 or 3 so that $p = 2$. Let $\psi$ be the Hayes module associated to $A$ and let $L(\psi, s)$ be its $L$-series. Then $L(\psi, s) = \zeta_A(s - 1)$. Thus $j$ is non-classical for $L(\psi, s)$ if and only if $j + 1$ is non-classical for $\zeta_A(s)$. Thus Conjecture 1 implies that $l_p(j + 1)$ is bounded. Note that clearly $l_p(j + 1)$ can be bounded while $l_p(j)$ goes to infinity (e.g., $j = 2^t - 1, t = 1, 2, \ldots$).
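The parenthetical example is immediate to verify: $j = 2^t - 1$ has binary expansion $11\cdots1$ ($t$ ones), so $l_2(j) = t$ is unbounded, while $j + 1 = 2^t$ has a single nonzero binary digit, so $l_2(j + 1) = 1$. In code (Python, for illustration):

```python
def l_p(j, p=2):
    """Sum of the base-p digits of the nonnegative integer j."""
    s = 0
    while j > 0:
        s, j = s + j % p, j // p
    return s

for t in range(1, 21):
    j = 2**t - 1
    # l_2(j) grows without bound while l_2(j + 1) stays bounded by 1.
    assert l_p(j) == t and l_p(j + 1) == 1
```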
It might be that having $l_p(j + 1)$ be bounded instead of $l_p(j)$ is somehow analogous to having a functional equation of the form $s \mapsto k - s$ classically for $k \neq 1$.
References
[An1] G. ANDERSON: $t$-motives, *Duke Math. J.* **53** (1986) 457-502.
[Ay1] R. AYOUB: Euler and the Zeta Function, *Amer. Math. Monthly*, **81** No.10, (Dec. 1974), 1067-1086.
[Boc1] G. BÖCKLE: Global $L$-functions over function fields, *Math. Ann.* **323** (2002) 737-795.
[Boc2] G. BÖCKLE: An Eichler-Shimura isomorphism over function fields between Drinfeld modular forms and cohomology classes of crystals,(preprint, available at http://www.math.ethz.ch/~boeckle/).
[BP1] G. BÖCKLE, R. PINK: A cohomological theory of crystals over function fields, (in preparation).
[DV1] J. DIAZ-VARGAS: Riemann hypothesis for $\mathbb{F}_p[T]$, *J. Number Theory* **59** (1996) 313-318.
[DV2] J. DIAZ-VARGAS: On zeros of characteristic p zeta function, *J. Number Theory*, advance online publication, 9/1/2005. (doi:10.1016/j.jnt.2005.06.008).
[Dr1] V.G. DRINFELD: Elliptic modules, Math. Sbornik **94** (1974) 594-627, English transl.: *Math. U.S.S.R. Sbornik* **23** (1976) 561-592.
[Du1] W. DUNHAM: *Euler, The Master of Us All*, Math. Assoc. Amer. **22**, (1999).
[Eul] L. EULER: Remarques sur un beau rapport entre les séries des puissances tant directes que réciproques, *Memoires de l'academie des sciences de Berlin* **17** (1768), 83-106; *Opera Omnia*: Series 1, **15**, 70-90.
[Ga1] F. GARDEYN: A Galois criterion for good reduction of $\tau$-sheaves, *J. Number Theory* **97** (2002) 447-471.
[Ga3] F. GARDEYN: The structure of analytic $\tau$-sheaves, *J. Number Theory* **100** (2003) 332-362.
[Go1] D. GOSS: On the Holomorphy of Certain Non-abelian $L$-series, *Math. Ann.* **272** (1985), 1-9.
[Go2] D. GOSS: *Basic Structures of Function Field Arithmetic*, Springer-Verlag, Berlin (1996).
[Go3] D. GOSS: A Riemann hypothesis for Characteristic $p$ $L$-functions, *J. Number Theory* **82** (2000) 299-322.
[Go4] D. GOSS: The impact of the infinite primes on the Riemann hypothesis for characteristic $p$ valued $L$-series, in: *Algebra, Arithmetic and Geometry with Applications* (Eds: Christensen et al) Springer (2004) 357-380.
[Go5] D. GOSS: Applications of non-Archimedean integration to the $L$-series of $\tau$-sheaves, *J. Number Theory* **110**, (2005) 83-113.
[Hay1] D. HAYES: A brief introduction to Drinfeld modules, in: *The Arithmetic of Function Fields* (eds. D. Goss et al) de Gruyter (1992) 1-32.
[Ros1] M. ROSEN: *Number Theory in Function Fields*, Springer 2002.
[Sh1] J. SHEATS: The Riemann hypothesis for the Goss zeta function for $\mathbb{F}_q[T]$, *J. Number Theory* **71** (1998) 121-157.
[TW1] Y. TAGUCHI, D. WAN: $L$-functions of $\varphi$-sheaves and Drinfeld modules, *J. Amer. Math. Soc.* **9** (1996) 755-781.
[TW2] Y. TAGUCHI, D. WAN: Entireness of $L$-functions of $\varphi$-sheaves on affine complete intersections, *J. Number Theory* **63** (1997) 170-179.
[Th1] D. THAKUR: Zeta measure associated to $\mathbb{F}_q[T]$, *J. Number Theory* **35** (1990) 1-17.
[Th2] D. THAKUR: On Characteristic $p$ Zeta Functions, *Compositio Math.* **99** (1995) 231-247.
[Th3] D. THAKUR: *Function Field Arithmetic*, World Scientific (2004).
[Wa1] D. WAN: On the Riemann hypothesis for the characteristic $p$ zeta function, *J. Number Theory* **58** (1996) 196-212.
Department of Mathematics, The Ohio State University, 231 W. 18th Ave., Columbus, Ohio 43210
E-mail address: firstname.lastname@example.org
Developmental Expression of Steroidogenic Factor 1 in a Turtle with Temperature-Dependent Sex Determination
Alice Fleming,* Thane Wibbels,† James K. Skipper,* and David Crews*.
*Section of Integrative Biology, University of Texas at Austin, Austin, Texas 78712; and †Department of Biology, University of Alabama at Birmingham, Birmingham, Alabama 35294-1170
Accepted July 24, 1999
A variety of reptiles possess temperature-dependent sex determination (TSD), in which the incubation temperature of a developing egg determines the gonadal sex. Current evidence suggests that temperature signals may be transduced into steroid hormone signals, with estrogens directing ovarian differentiation. Steroidogenic factor 1 (SF-1) is one component of interest because it regulates the expression of steroidogenic enzymes in mammals and is differentially expressed during development of testis and ovary. Northern blot analysis of SF-1 in developing tissues of the red-eared slider turtle (*Trachemys scripta*), a TSD species, detected a single primary SF-1 transcript of approximately 5.8 kb across all stages of development examined. Analysis by *in situ* hybridization indicated nearly equivalent SF-1 expression in early, bipotential gonads at male (26°C)- and female (31°C)-producing incubation temperatures. In subsequent stages, as gonadal sex first becomes histologically distinguishable during the temperature-sensitive period, SF-1 expression increased in gonads at a male-producing temperature and decreased at a female-producing temperature, suggesting a role for SF-1 in the sex differentiation pathway. SF-1 message was also found in adrenal tissue and in the periventricular region of the preoptic area and diencephalon, but there was no apparent sex bias in these tissues at any stage examined. The overall developmental pattern of SF-1 mRNA expression in *T. scripta* appears to parallel that found in mammals, indicating possible homologous functions.
**Key Words:** steroidogenic factor 1, SF-1; Ad4BP; FTZ-F1; reptile; turtle; *Trachemys scripta*; temperature-dependent sex determination.
INTRODUCTION
Gonadal sex of species with temperature-dependent sex determination (TSD) is determined by the temperature at which their eggs are incubated. In the red-eared slider turtle (*Trachemys scripta*), only males are produced when eggs are incubated at 26°C, and only females are produced at 31°C (Bull et al., 1982). The temperature-sensitive period (TSP) for sex determination occurs between Yntema (1968) stages 15 and 21, the middle third of incubation (Wibbels et al., 1991). Commitment to gonadal sex occurs within that period.
Sex steroid hormones are implicated in the process of TSD, and estrogen, in particular, appears essential in female sex determination (Crews, 1996; Crews et al., 1994; Wibbels et al., 1998; Lance, 1997). Estrogens applied exogenously to *T. scripta* eggs incubating at a male-producing temperature override the temperature effect, and female hatchlings result (Crews et al., 1991; Wibbels and Crews, 1992). Exogenously applied inhibitors of aromatase—the enzyme that converts testosterone to estrogen (Simpson et al., 1994)—override a female-producing incubation temperature, and male hatchlings result (Crews and Bergeron, 1994; Wibbels and Crews, 1994).
Work with other turtle species has shown a correlation between female incubation temperatures and increased levels of endogenous aromatase transcript and activity in the putative ovary during the TSP (Desvages and Pieau, 1992; Jeyasuria and Place, 1997, 1998). Other researchers have found aromatase activity in the turtle brain prior to that found in the gonad at female-producing temperatures and have proposed that the brain, rather than the gonad, is the sex-determining source of estrogen (Merchant-Larios, 1998; Jeyasuria and Place, 1998). Whatever the endogenous source of estrogen, gonads of putative females and males are receptive to its effect as both express estrogen receptor, albeit differentially, throughout the TSP (Bergeron et al., 1998).
Male sex determination can be manipulated by exogenously applied dihydrotestosterone, a nonaromatizable androgen, and by inhibitors of its endogenous synthesis (Crews and Bergeron, 1994; Wibbels and Crews, 1992, 1995; Wibbels et al., 1992). This effect is less striking than that of estrogen in female sex determination and is only seen at intermediate, or less potent, incubation temperatures. Nevertheless, steroid hormones are undoubtedly a part of TSD in both males and females.
To begin exploring the underlying molecular mechanisms of steroid hormones in TSD, we examined the distribution of steroidogenic factor 1 (SF-1) (Lala et al., 1992), also called Ad4BP (Morohashi et al., 1992), in *T. scripta*. SF-1, encoded by the FTZ-F1 gene and a member of the nuclear receptor superfamily, is known in mammals to regulate transcription of many genes within the reproductive axis (reviewed in Morohashi and Omura, 1996; Parker and Schimmer, 1997). In steroidogenic tissue, it regulates the gene activity of many proteins involved in the synthesis of testosterone and estrogen, including steroidogenic acute regulatory protein, P450\textsubscript{scc}, P450\textsubscript{17α}, 3β-HSD, and aromatase (reviewed in Morohashi, 1999; Parker et al., 1999). During mammalian development, SF-1 is differentially expressed in testes and ovaries (Ikeda et al., 1994; Hatano et al., 1994).
In this study we examined the pattern of SF-1 mRNA expression in *T. scripta*. Northern blot analysis was performed to determine presence and size of message and possible alternate transcripts. A single transcript was found in all stages and tissues examined. Adrenal, kidney, and gonad cannot be effectively separated at early developmental stages in *T. scripta*, precluding traditional quantitative measures of SF-1 in gonad alone. For this reason, *in situ* hybridization was selected as the most appropriate technique to both localize and quantify SF-1 in the embryonic turtle. SF-1 message was found in similar amounts and distribution in the bipotential gonad of males and females. During stages when the sex of gonads is becoming distinct and committed, SF-1 message increased at a male-producing temperature and decreased at a female-producing temperature. SF-1 message was also detected in developing adrenal and the periventricular region of the preoptic area and diencephalon in similar amounts and distribution at male- and female-producing temperatures. The sex-based differential expression of SF-1 in the turtle gonad during a critical period of gonadal sex development mirrors that found in mammals, suggesting homologous functions and possible involvement in temperature-sensitive sex determination and differentiation.
**MATERIALS AND METHODS**
**Tissue Collection**
*T. scripta* eggs were purchased within 2 days of laying from Robert Kliebert (Kliebert Turtle Farms, Hammond, LA), brought to the laboratory, and kept at room temperature until viability was established by candling. Viable eggs were placed in containers with moistened vermiculite (1:1 vermiculite to water) and randomized across containers to eliminate clutch effects. The containers were placed in incubators (Precision, Chicago) at either 26 or 31°C. Continuous incubation of *T. scripta* eggs at 26°C produces all male hatchlings whereas incubation at 31°C produces all female hatchlings (Bull et al., 1982).
Temperature of the incubators was continuously monitored with HOBO data loggers (Onset Computer Corp.), supplemented by daily checks of in-incubator thermometers. Temperature fluctuations were less than 0.1°C. Egg boxes were rotated within the incubators each day, and eggs were checked periodically for
developmental stage according to Yntema’s staging guidelines (1968). By these guidelines, the temperature-sensitive period in *T. scripta* is from approximately stage 15 through 21, and eggs hatch at stage 26.
For *in situ* hybridization, embryos were taken at stages 13 through 19 and at stage 23 from each incubation temperature, quickly frozen on dry ice, and stored at $-80^\circ$C until sectioning. For all other molecular work, embryos were decapitated; the adrenal–kidney–gonad (AKG) complex and brain were then quickly dissected out, frozen in liquid nitrogen or isopentane, and stored at $-80^\circ$C until use.
**Probe Preparation**
The open reading frame of *T. scripta* SF-1 cDNA has been cloned (Cowan, J., 1998, M.S. thesis, University of Alabama at Birmingham; GenBank Accession No. AF033833; Wibbels et al., 1998). Cowan and Wibbels provided us with a 457-bp clone spanning exon 5 (8 bp only) and exons 6a, 7, and 8 (185 bp only) (gene structure according to Ninomiya *et al.*, 1995). At its 5’ end, this clone includes 125 bp common to SF-1 and ELP1, an alternate transcript of FTZ-F1 (Ikeda *et al.*, 1993), and was, therefore, used in its entirety as the riboprobe template for Northern blot analysis. β-Actin was cloned by reverse transcriptase polymerase chain reaction (RT-PCR) using RNA isolated with the RNAgents Total RNA Isolation System (Promega) from stage 23 *T. scripta* tissue incubated at a male-producing temperature (26°C). Degenerate primers were provided by K. Gen through Peter Thomas (University of Texas Marine Science Institute, Port Aransas, TX). SF-1/ELP1 and β-actin vectors were linearized, and riboprobes were made by run-off transcription using the RNA Strip-EZ System from Ambion and ³²P-UTP from NEN. Probes were synthesized to a specific activity of $8 \times 10^7$ cpm/μg and used at a final concentration of $5 \times 10^8$ cpm/ml of hybridization solution.
Riboprobe templates for *in situ* hybridization were prepared by subcloning the Cowan and Wibbels SF-1/ELP1 clone, in both sense and antisense orientations, into the pCRII vector (Invitrogen) to eliminate sequence in common with ELP1. These 330-bp subclones contain all of exon 7 and the 5’ end of exon 8 (Ninomiya *et al.*, 1995). Riboprobes were made by run-off transcription with enzymes from New England BioLabs and ³⁵S-CTP from NEN, using protocols of Sambrook *et al.* (1989). Probes were synthesized to a specific activity of $9 \times 10^8$ cpm/μg and were used at a concentration of $0.3$ μg probe × length (kb)/ml hybridization solution.
**Northern Blot Analysis**
Total RNA was isolated according to Sambrook *et al.* (1989) or with the RNAgents Total RNA Isolation System. *T. scripta* tissues were AKG complexes from early, middle, and post TSP (stages 15, 18/19, and 23, respectively) at both male- and female-producing temperatures ($26$ and $31^\circ$C); whole brain from the middle of the TSP (stages 18/19) at both temperatures; and adult ovary. Twenty-five micrograms of total RNA from each tissue and RNA Millennium Markers (Ambion) were loaded. The blot was prepared using Ambion’s NorthernMax kit and BrightStar-Plus membrane. After hybridization with the SF-1 probe, the blot was stripped and rehybridized with the β-actin probe using RNA Strip-EZ. Bound probe was visualized by phosphorimager (Molecular Dynamics) using ImageQuant software.
**In Situ Hybridization**
Three individuals per temperature/stage were analyzed. Frozen, whole torsos or AKGs (stages 14, 16–18, and 23) or heads (stages 13 through 19 plus 23) were embedded in OCT compound (Tissue Tek) and sectioned on a cryostat (2800 Frigocut, Reichert-Jung) at 20 μm. Sections were placed serially on sets of four poly-L-lysine-treated slides, air dried, and stored at $-80^\circ$C. The *in situ* hybridization protocol used in our laboratory has been previously described (Young *et al.*, 1994). After hybridization with SF-1 antisense or sense (for negative control) probe, slides were dipped in Kodak NTB-2 autoradiographic emulsion and exposed at 4°C for 10 days. They were then developed (Kodak D19 Developer), fixed (Kodak Fixer), and stained with Harris hematoxylin for tissue in the torso or cresyl violet for tissue in the head. Darkfield quantification of silver grains in specifically labeled cells, defined as having a density of silver grains at least three times that of background, was done as previously described in Bergeron *et al.* (1998). Briefly, slides were computer coded and randomized to prevent bias during measurement. Measurement was done using the Grains Counting Program (University of Washington). The 45 most densely labeled clusters (each approximating the size
of a single cell) of gonad per individual were automatically selected and the number of silver grains per cluster counted by the Grains program. To measure tissue-based background labeling, the system was then asked to select and count grains per cluster in the adjacent kidney of each individual. The average background count per cluster was subtracted from the average gonad count per cluster. Corrected measures for the three individuals in each stage/temperature were then averaged and used in the findings below. Further statistical analysis was not done due to the small sample size.
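The background correction described above is simple arithmetic; the following minimal sketch (Python, with made-up grain counts; not the authors' software, which was the University of Washington Grains program) shows the computation:

```python
def corrected_mean(gonad_counts, kidney_counts):
    """Mean background (kidney) grains per cluster subtracted from the
    mean gonad grains per cluster, as described in the text."""
    return (sum(gonad_counts) / len(gonad_counts)
            - sum(kidney_counts) / len(kidney_counts))

# Hypothetical counts for one individual: in the study the 45 most
# densely labeled gonad clusters were used; five are shown here.
gonad = [52, 48, 61, 45, 57]
kidney = [12, 9, 15, 11, 13]   # tissue-based background

per_individual = [corrected_mean(gonad, kidney)]  # one of three individuals
group_mean = sum(per_individual) / len(per_individual)
```

The per-individual corrected measures would then be averaged over the three individuals in each stage/temperature group, as in the text.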
RESULTS
Northern Blot Analysis
A single primary band of approximately 5.8 kb was found in each of the *T. scripta* tissues examined: stages 15, 18/19, and 23 in the adrenal–kidney–gonad complex (Fig. 1) and stage 18/19 in the brain (data not shown), at both incubation temperatures. The β-actin results indicated that quality of the RNA was preserved and loading and transfer were approximately even (data not shown). SF-1 is strongly expressed in adrenal as well as in gonad (*in situ* hybridization observation), and band intensities reflect expression in combined tissue types. Therefore, quantification of SF-1 expression in individual tissues could not be done.
In Situ Hybridization
In the torso, SF-1 mRNA was detected in the adrenal and gonad of all stages assayed (stages 14, 16–18, and 23) and at both 26 and 31°C. No other tissues in the torso showed signal above background. Signal in the adrenal, although more intense than in the gonad, appeared the same at male- and female-producing temperatures, indicating no sex bias in that tissue (data not shown).
In the gonad, SF-1 signal was clearly visible at both incubation temperatures in the earliest stage examined, stage 14, which occurs prior to the temperature-sensitive period. Expression levels were nearly equivalent at male- and female-producing temperatures (Fig. 2). At this stage, gonads from both temperatures are bipotential and gonadal sex is histologically indistinguishable; gonad and adrenal tissue are distinct.
At approximately stages 18/19, gonadal sex can first be detected histologically in *T. scripta* embryos. Between stages 17 and 18, SF-1 expression rose in putative testes but decreased slightly in putative ovaries (Fig. 2).
At stage 23, a stage well after the TSP, the largest difference in gonadal expression was seen. SF-1 message dropped close to background in gonads at the female-producing temperature but remained high at the male-producing temperature.
A difference in the distribution of SF-1 in *T. scripta* gonad also emerged between sexes over time (Fig. 3). At stage 14, SF-1 mRNA was evenly dispersed throughout the gonad at both incubation temperatures (Figs. 3a and 3d). During stages 17 and 18, signal appeared clustered into striations at the male-producing temperature (Fig. 3b), coincident with early organization of medullary cords. SF-1 signal in most putative ovaries was evenly dispersed at stages 17 and 18 (Fig. 3c), though signal in two individuals was organized in a faint cord-like pattern. In *T. scripta*, medullary cords begin forming in putative females as well as males during this time (Wibbels *et al.*, 1991).
During stages 18 through 20, medullary cords proliferate at male-producing temperatures and regress at female-producing temperatures (Wibbels *et al.*, 1991). By stage 23 in putative testes, medullary distribution of SF-1 signal was clearly organized in or around medullary cords (Figs. 3c and 4). In both compartments of the testes—medullary cords and interstitial space—SF-1 signal was above background (Fig. 4b). Signal in one of the compartments was clearly stronger than in the other, but markers to distinguish compartments were not used in this experiment. In sharp contrast, SF-1 signal in putative ovaries at stage 23 was close to background and found only in the cortical region (Fig. 3f). Signal in the medullary region, which is largely vacuolated by this stage, was below tissue-based background as measured in the kidney.

FIG. 1. Northern blot analysis of SF-1 mRNA developmental expression in adrenal–kidney–gonad tissue of *T. scripta*. Lane 1: marker. Lanes 2–7: tissue from the approximate beginning of the TSP (stage 15), during the TSP (stages 18/19), and after the TSP (stage 23) at male (26°C)- and female (31°C)-producing incubation temperatures. Conditions of the experiment did not allow quantification.

FIG. 2. SF-1 message expressed in developing gonad of *T. scripta* at male (26°C)- and female (31°C)-producing temperatures, determined by *in situ* hybridization. Each data point represents a simple mean of three individuals (corrected for background). Ranges, standard errors, and statistical comparisons are omitted because of small sample size. *For comparative purposes, we have included approximate timelines for commitment to gonadal sex at male (26°C)- and female (31°C)-producing temperatures, based on Wibbels et al. (1991).*
In sections of embryonic *T. scripta* head, SF-1 mRNA was present (Fig. 5) in all developmental stages assayed (stages 13 through 19 plus 23) at both incubation temperatures. Message was localized in the periventricular region of the preoptic area and diencephalon (reference atlases were Harless and Morlock, 1979; Young *et al.*, 1994; Powers and Reiner, 1980; Kandel *et al.*, 1991). Signal in this region extended over many tissue sections, rostral to caudal, in each individual. There was no apparent sex-based difference in amount or distribution of SF-1 message. No signal above background was seen in torso or head tissues probed with labeled sense strand.
**DISCUSSION**
Steroidogenic factor 1 has now been identified in several mammals (Lala *et al.*, 1992; Morohashi *et al.*, 1992; Lynch *et al.*, 1993; Wong *et al.*, 1996; Pilon *et al.*, 1998), the chicken (Kudo and Sutou, 1997; Smith et al., 1999), and two TSD reptiles—the American alligator (P. Western and A. Sinclair, personal communication) and *T. scripta* (Wibbels et al., 1998). The pattern of SF-1 expression in early *T. scripta* gonadal development resembles that of all other amniotes examined to date: SF-1 message is present from the earliest urogenital ridge throughout the bipotential (or indifferent) phase with no apparent sex bias (Ikeda et al., 1994; Hatano et al., 1994; Smith et al., 1999; P. Western, personal communication). This implies conserved function. In mammals and chickens, expression of SF-1 in the gonad significantly precedes expression of known SF-1 target steroidogenic genes (Parker and Schimmer, 1997; Smith et al., 1999), which suggests its involvement in nonsteroidogenic functions during early development of the gonad. Two separate lines of research indicate that SF-1 may be involved in the primary differentiation and/or maintenance of steroidogenic tissues. Stable expression of SF-1 in murine embryonic stem cells induces cell differentiation to the point of synthesizing progesterone (Crawford *et al.*, 1997). FTZ-F1-disrupted neonatal mice show complete agenesis of gonad and adrenal with indications of apoptosis (Luo *et al.*, 1994). In *T. scripta*, the period of bipotential gonad development extends into the approximate beginning of the TSP (stages 15/16), during which stages there is a temperature effect, but commitment to gonadal sex has not yet occurred (Wibbels *et al.*, 1991).

FIG. 3. *In situ* hybridization of SF-1 probe to embryonic gonadal tissue of *T. scripta* (cross section). (a, b, c) Male-producing incubation temperature (26°C). (d, e, f) Female-producing temperature (31°C). (a, d) Before the TSP (stage 14). (b, e) During the TSP (stage 18). (c, f) After the TSP (stage 23). Bar, 100 μm.

FIG. 4. Lightfield (a) and darkfield (b) *in situ* hybridization images of SF-1 mRNA expression in the two compartments of post-TSP (stage 23) *T. scripta* testis. Arrows indicate comparable points on a and b. Background signal is visible in upper left corner. Bar, 100 μm.

FIG. 5. *In situ* hybridization analysis of the periventricular region of the diencephalon and preoptic area of *T. scripta* brain (coronal section). Tissue is from a male-producing incubation temperature (26°C) during the TSP (stage 18). (a) Lightfield and (b) darkfield images. Bar, 100 μm.
As gonadal sex first becomes histologically distinct, the pattern of SF-1 expression among amniotes appears to diverge. In *T. scripta*, gonadal sex can first be distinguished at developmental stages 18/19. Between stages 17 and 18, SF-1 message increases at the male-producing temperature while tapering off at the female-producing temperature. As found in a separate *in situ* hybridization, this pattern continued at stage 19 (data not shown). At the male-producing temperature of 26°C, commitment to gonadal sex starts at approximately stage 17 (3% of individuals are committed) and is fixed for 100% of individuals by stage 21. At the female-producing temperature of 31°C, the period of commitment also begins at stage 17 (20% are fixed), but 100% of females are committed by stage 19 (Wibbels *et al.*, 1991). These stages can vary slightly with the exact incubation regimen and due to clutch effect. Nevertheless, differential expression of SF-1 appears to begin at about the time both morphological distinction and commitment to gonadal sex are occurring (Fig. 2). At stage 23, after the TSP, SF-1 expression is markedly higher in males than in females (Figs. 2, 3c, and 3f).
A similar pattern is found in mammals in that SF-1 expression becomes differential in developing gonads just as testes and ovaries become distinguishable (Hatano *et al.*, 1994; Ikeda *et al.*, 1993). At embryonic day 12.5 (E12.5) in mice, expression is high in males and very low in females. This difference continues, coincident with rapid testicular differentiation, until late in gonadal development (Ikeda *et al.*, 1994).
SF-1 expression in *T. scripta* and mammals is evident in both compartments of developing testes. *In situ* hybridization darkfield images of mouse and rat indicate a higher level of SF-1 in the interstitial space than in testicular cords (Hatano *et al.*, 1994; Ikeda *et al.*, 1994) and appear quite similar to those of *T. scripta* (Fig. 4). In mammals, SF-1 is thought to regulate transcription of Müllerian inhibiting substance (MIS) in Sertoli cells in the testicular cords (Shen *et al.*, 1994; Giuili *et al.*, 1997) and synthesis of testosterone in Leydig cells in the interstitial space (Ikeda *et al.*, 1993; Hatano *et al.*, 1994). MIS has been cloned in *T. scripta* and, though its overall developmental expression pattern is not yet known, it
is present in putative testes at stage 23 (Wibbels et al., 1998), a time roughly comparable to its expression in mammals. SF-1 expression in putative testes of *T. scripta* is also high at this stage. The conserved pattern of SF-1 expression in *T. scripta* and mammals suggests homologous functions in male gonadal sex development.
In chicken and alligator, SF-1 expression following histological distinction of gonadal sex differs from that in *T. scripta* and mammals. SF-1 levels become lower in testes than in ovaries both in the genetically sex-determined chicken (Smith et al., 1999) and in the alligator, which has temperature-dependent sex determination (P. Western, personal communication).
In chicken, SF-1 message falls to an almost negligible level in males while remaining high in females. Smith et al. (1999) found that increased SF-1 expression in the chicken ovary correlates with its high level of aromatase expression (Andrews et al., 1997; Smith et al., 1997). Aromatase is regulated by SF-1 in mammalian granulosa cells (Carlone and Richards, 1997), where it converts testosterone to estrogen. Estrogen is essential to development of the chicken ovary, where it is synthesized at high levels (Imataka et al., 1988; Woods and Erton, 1978). In eutherian mammals, on the other hand, estrogen is apparently not essential to ovarian development (Lubahn et al., 1993; Greco and Payne, 1994), aromatase is rarely detected in the developing ovary (Greco and Payne, 1994), and SF-1 levels are very low at this time (Hatano et al., 1994; Ikeda et al., 1994).
In alligator, the level of SF-1 mRNA is consistently lower in putative males than females from the middle of the TSP onward, as detected by RT-PCR (P. Western, personal communication). Many lines of research implicate estrogen in female sex determination of TSD reptiles (reviewed in Crews, 1996; Lance, 1997; Wibbels et al., 1998), including alligators. The relative abundance of SF-1 in putative female alligators compared to putative males could regulate increased aromatase expression and subsequent estrogen synthesis. However, aromatase activity has not been detected in developing female alligator adrenal–kidney–gonad complex until after the TSP and gonadal determination (Smith et al., 1995).
Estrogen is considered essential to female sex determination in *T. scripta* as well (Crews and Bergeron, 1994). However, SF-1 expression in developing ovaries of *T. scripta* appears to decline, unlike that of chicken and alligator, as histological sex becomes distinct (stages 18/19). This is a critical point in *T. scripta* female sex determination, in that gonadal sex appears committed in 100% of individuals at this time (Wibbels et al., 1991). It may be that estrogen is required for only a short time as part of a female cascade, in which case SF-1 would not be needed for ongoing expression of aromatase. Indeed, a single dose of estrogen applied exogenously at stage 17 to *T. scripta* eggs incubating at an all-male-producing temperature results in all female hatchlings (Crews et al., 1991). Here, we find levels of SF-1 in putative ovary do not begin to fall until after stage 17.
Interestingly, Majdic et al. (1997) report that subcutaneous injections of estrogen in pregnant rats at E11.5 and E15.5 result in significant reduction of gonadal SF-1 message in genotypic male embryos recovered at E17.5. Were a related mechanism present in *T. scripta*, exogenous application of estrogen or endogenous production in ovary or other tissues could feed back negatively on SF-1 expression in the ovary.
SF-1 is strongly expressed in the adrenal of *T. scripta*, raising the question of estrogen synthesis in that tissue rather than, or in addition to, gonad during female sex determination. We found no apparent sex bias in SF-1 expression in the adrenal and, therefore, no direct support for differential aromatase expression in that tissue.
There is recent evidence suggesting that the brain, rather than, or in addition to, the gonad or adrenal, may be involved in sex determination in TSD turtles. Merchant-Larios (1998) found that estrogen levels in the midbrain of a sea turtle were significantly higher at female- than at male-producing temperatures during the thermosensitive period. Jeyasuria and Place (1998) detected aromatase transcript in the brain of putative male and female diamondback terrapin before it was detectable in the gonad, and found it more abundant in putative females than males early in the TSP. Here, we report SF-1 expression in brain before, during, and after the TSP with no apparent sex bias. If aromatase is differentially expressed in *T. scripta* brain at male- and female-producing temperatures during the TSP, its regulation must involve differential posttranscriptional regulation of SF-1 or other regulatory factor(s) altogether.
SF-1 in developing *T. scripta* brain may be involved in other functions. In Ftz-F1-disrupted mice, the structure of the ventromedial hypothalamus is malformed (Ikeda *et al.*, 1995; Shinoda *et al.*, 1995), implying a role for SF-1 in neural development. We detected SF-1 in this region of *T. scripta* brain. In the anterior pituitary of wild-type adult mice, SF-1 has been implicated in transcriptional regulation of gonadotropins and the GnRH receptor (reviewed in Parker and Schimmer, 1997). It is tempting to suggest fetal SF-1 regulation of estrogen by way of these proteins in *T. scripta*. However, no SF-1 expression was detected in the developing pituitary.
Is SF-1 involved in sex determination and/or differentiation in *T. scripta*? SF-1 is expressed at male- and female-producing temperatures prior to and early in the TSP, but it is differentially expressed only in gonad and only as gonadal sex becomes distinct. This suggests that SF-1 is not, by itself, a sex-determining gene. There is, however, enough evidence to suggest that it may be one critical component in gonadal sex development, perhaps regulating MIS expression and testosterone synthesis at male-producing temperatures and aromatase expression at female-producing temperatures.
**ACKNOWLEDGMENTS**
This work was supported by NSF Grant IBN-9723617. The technical assistance of J. Bergeron, C. Gill, B. Hawkins, J. Morales, M. Ramsey, K. Wennstrom, and E. Willingham is greatly appreciated.
**REFERENCES**
Andrews, J. E., Smith, C. A., and Sinclair, A. H. (1997). Sites of estrogen receptor and aromatase expression in the chicken embryo. *Gen. Comp. Endocrinol.* **108**, 182–190.
Bergeron, J. M., Gahr, M., Horan, K., Wibbels, T., and Crews, D. (1998). Cloning and *in situ* hybridization analysis of estrogen receptor in the developing gonad of the red-eared slider turtle, a species with temperature-dependent sex determination. *Dev. Growth Differ.* **40**, 243–254.
Bull, J. J., Vogt, R. C., and McCoy, C. J. (1982). Sex determining temperatures in turtles: A geographic comparison. *Evolution* **36**, 326–332.
Carlone, D. L., and Richards, J. S. (1997). Functional interactions, phosphorylation, and levels of 3',5'-cyclic adenosine monophosphate-regulatory element binding protein and Steroidogenic Factor-1 mediate hormone-regulated and constitutive expression of aromatase in gonadal cells. *Mol. Endocrinol.* **11**, 292–304.
Crawford, P. A., Sadovsky, Y., and Milbrandt, J. (1997). Nuclear receptor steroidogenic factor 1 directs embryonic stem cells toward the steroidogenic lineage. *Mol. Cell. Biol.* **17**, 3997–4006.
Crews, D. (1996). Temperature-dependent sex determination: The interplay of steroid hormones and temperature. *Zool. Sci.* **13**, 1–13.
Crews, D., and Bergeron, J. M. (1994). Role of reductase and aromatase in sex determination in the red-eared slider (*Trachemys scripta*), a turtle with temperature-dependent sex determination. *J. Endocrinol.* **143**, 279–289.
Crews, D., Bergeron, J. M., Flores, D., Bull, J. J., Skipper, J. K., Tousignant, A., and Wibbels, T. (1994). Temperature-dependent sex determination in reptiles: Proximate mechanisms, ultimate outcomes, and practical applications. *Dev. Gen.* **15**, 297–312.
Crews, D., Bull, J. J., and Wibbels, T. (1991). Estrogen and sex reversal in turtles: A dose-dependent phenomenon. *Gen. Comp. Endocrinol.* **81**, 357–364.
Desvages, G., and Pieau, C. (1992). Aromatase activity in gonads of turtle embryos as a function of the incubation temperature of eggs. *J. Steroid. Biochem. Mol. Biol.* **41**, 851–853.
Ellinger-Ziegelbauer, H., Hihi, A. K., Laudet, V., Keller, H., Wahl, W., and Dreyer, C. (1994). FTZ-F1-related orphan receptors in *Xenopus laevis*: Transcriptional regulators differentially expressed during early embryogenesis. *Mol. Cell. Biol.* **14**, 2786–2797.
Giuili, G., Shen, W.-H., and Ingraham, H. A. (1997). The nuclear receptor SF-1 mediates sexually dimorphic expression of Müllerian inhibiting substance, in vivo. *Development* **124**, 1799–1807.
Greco, T. L., and Payne, A. H. (1994). Ontogeny of expression of the genes for steroidogenic enzymes P450 side-chain cleavage, 3β-hydroxysteroid dehydrogenase, P450 17α-hydroxylase/C[17,20] lyase, and P450 aromatase in fetal mouse gonads. *Endocrinology* **135**, 262–268.
Harless, M., and Morlock, H., Eds. (1979). “Turtles: Perspectives and Research.” Wiley, New York.
Hatano, O., Takayama, K., Imai, T., Waterman, M. R., Takakusi, A., Omura, T., and Morohashi, K. (1994). Sex-dependent expression of a transcription factor, Ad4BP, regulating steroidogenic P-450 genes in the gonads during prenatal and postnatal rat development. *Development* **120**, 2787–2797.
Honda, S., Morohashi, K., Nomura, M., Takeya, H., Kitajima, M., and Omura, T. (1993). Ad4BP regulating steroidogenic P-450 gene is a member of steroid hormone receptor superfamily. *J. Biol. Chem.* **268**, 7494–7502.
Imataka, H., Suzuki, K., Inano, H., Kohmoto, K., and Tamaoki, B. (1988). Developmental changes of steroidogenic enzyme activities in the embryonic gonads of the chicken: The sexual difference. *Gen. Comp. Endocrinol.* **71**, 413–418.
Ikeda, Y., Lala, D. S., Kim, E., Moisan, M., and Parker, K. L. (1993). Characterization of the mouse FTZ-F1 gene, which encodes a key regulator of steroid hydroxylase gene expression. *Mol. Endocrinol.* **7**, 852–860.
Ikeda, Y., Shen, W.-H., Ingraham, H. A., and Parker, K. L. (1994). Developmental expression of mouse steroidogenic factor-1, an essential regulator of steroid hydroxylases. *Mol. Endocrinol.* **8**, 654–662.
Ikeda, Y., Luo, X., Abbud, R., Nilson, J. H., and Parker, K. L. (1995). The nuclear receptor steroidogenic factor-1 is essential for the formation of the ventromedial hypothalamic nucleus. *Mol. Endocrinol.* **9**, 478–486.
Ingraham, H. A., Lala, D. S., Ikeda, Y., Luo, X., Shen, W., Nachtigal, M. W., Abbud, R., Nilson, J. H., and Parker, K. L. (1994). The nuclear receptor steroidogenic factor-1 acts at multiple levels of the reproductive axis. *Genes Dev.* **8**, 2302–2312.
Jeyasuria, P., and Place, A. (1997). Temperature-dependent aromatase expression in developing diamondback terrapin (*Malaclemys terrapin*) embryos. *J. Steroid Biochem. Mol. Biol.* **61**, 415–425.
Jeyasuria, P., and Place, A. (1998). Embryonic brain–gonadal axis in temperature-dependent sex determination of reptiles: A role for P450 aromatase (CYP19). *J. Exp. Zool.* **281**, 428–449.
Kandel, E. R., Schwartz, J. H., and Jessell, T. M., Eds. (1991). “Principles of Neural Science,” 3rd ed. Elsevier, New York.
Kudo, T., and Sutou, S. (1997). Molecular cloning of chicken FTZ-F1-related orphan receptors. *Gene* **197**, 261–268.
Lala, D. S., Rice, D. A., and Parker, K. L. (1992). Steroidogenic factor 1, a key regulator of steroidogenic enzyme expression, is the mouse homolog of *fushi tarazu*-factor 1. *Mol. Endocrinol.* **6**, 1249–1258.
Lance, V. (1997). Sex determination in reptiles: An update. *Am. Zool.* **37**, 504–513.
Lubahn, D. B., Moyer, J. S., Golding, T. S., Couse, J. F., Korach, K. S., and Smithies, O. (1993). Alteration of reproductive function but not prenatal sexual development after insertional disruption of the mouse estrogen receptor gene. *Proc. Natl. Acad. Sci. USA* **90**, 11162–11166.
Luo, X., Ikeda, Y., and Parker, K. L. (1994). A cell-specific nuclear receptor is essential for adrenal and gonadal development and sexual differentiation. *Cell* **77**, 481–490.
Lynch, J. P., Lala, D. S., Peluso, J. J., Luo, W., Parker, K. L., and White, B. A. (1993). Steroidogenic factor-1, an orphan nuclear receptor, regulates the expression of the rat aromatase gene in gonadal tissues. *Mol. Endocrinol.* **7**, 776–786.
Majdic, G., Sharpe, R. M., and Saunders, P. T. K. (1997). Maternal oestrogen/xenoestrogen exposure alters expression of steroidogenic factor-1 (SF-1/Ad4BP) in the fetal rat testis. *Mol. Cell. Endocrinol.* **127**, 91–98.
Merchant-Larios, H. (1998). Brain as a sensor of temperature during sex determination in the sea turtle *Lepidochelys olivacea*. *J. Exp. Zool.* **281**, 510.
Morohashi, K.-i. (1999). Gonadal and extragonadal functions of Ad4BP/SF-1: Developmental aspects. *Trends Endocrinol. Metab.* **10**, 169–173.
Morohashi, K., Honda, S., Inomata, Y., Handa, H., and Omura, T. (1992). A common *trans*-acting factor, Ad4-binding protein, to the promoters of steroidogenic P-450s. *J. Biol. Chem.* **267**, 17913–17919.
Morohashi, K., and Omura, T. (1996). Ad4BP/SF-1, a transcription factor essential for the transcription of steroidogenic cytochrome P450 genes and for the establishment of the reproductive function. *FASEB J.* **10**, 1569–1577.
Ninomiya, Y., Okada, M., Kotomura, N., Suzuki, K., Tsukiyama, T., and Niwa, O. (1995). Genomic organization and isoforms of the mouse ELP gene. *J. Biochem.* **118**, 380–389.
Parker, K. L., Schedel, A., and Schimmer, B. P. (1999). Gene interactions in gonadal development. *Annu. Rev. Physiol.* **61**, 417–433.
Parker, K. L., and Schimmer, B. P. (1997). Steroidogenic factor 1: A key determinant of endocrine development and function. *Endocr. Rev.* **18**, 361–377.
Pilon, N., Behdjani, R., Daneau, I., Lussier, J. G., and Silversides, D. W. (1998). Porcine steroidogenic factor-1 gene (pSF-1) expression and analysis of embryonic pig gonads during sexual differentiation. *Endocrinology* **139**, 3803–3812.
Powers, A. S., and Reiner, A. (1980). A stereotaxic atlas of the forebrain and midbrain of the Eastern Painted Turtle (*Chrysemys picta picta*). *J. Hirnforsch.* **21**, 125–159.
Sambrook, J., Fritsch, E. F., and Maniatis, T. (1989). “Molecular Cloning: A Laboratory Manual,” 2nd ed., Cold Spring Harbor Laboratory Press, Plainview, NY.
Shen, W., Moore, C. C. D., Ikeda, Y., Parker, K. L., and Ingraham, H. A. (1994). Nuclear receptor steroidogenic factor 1 regulates the Müllerian Inhibiting Substance gene: A link in the sex determination cascade. *Cell* **77**, 651–661.
Shinoda, K., Lei, H., Yoshii, H., Nomura, M., Nagano, M., Shiba, H., Sasaki, H., Osawa, Y., Ninomiya, Y., Niwa, O., Morohashi, K., and Li, E. (1995). Developmental defects of the ventromedial hypothalamic nucleus and pituitary gonadotroph in the *Ftz-F1* disrupted mice. *Dev. Dynamics* **204**, 22–29.
Simpson, E. R., Mahendroo, M. S., Means, G. D., Kilgore, M. W., Hinshelwood, M. M., Graham-Lorence, S., Amanreh, B., Ito, Y., Fisher, C. R., Michael, M. D., Mendelson, C. R., and Bulun, S. E. (1994). Aromatase cytochrome P450, the enzyme responsible for estrogen biosynthesis. *Endocr. Rev.* **15**, 342–355.
Smith, C. A., Andrews, J. E., and Sinclair, A. H. (1997). Gonadal sex differentiation in chicken embryos: Expression of estrogen receptor and aromatase genes. *J. Steroid Biochem. Mol. Biol.* **60**, 295–302.
Smith, C. A., Elf, P. K., Lang, J. W., and Joss, J. M. P. (1995). Aromatase enzyme activity during gonadal sex differentiation in alligator embryos. *Differentiation* **58**, 281–290.
Smith, C. A., Smith, M. J., and Sinclair, A. H. (1999). Expression of chicken steroidogenic factor-1 during gonadal sex differentiation. *Gen. Comp. Endocrinol.* **113**, 187–196.
Wibbels, T., Bull, J. J., and Crews, D. (1991). Chronology and morphology of temperature-dependent sex determination. *J. Exp. Zool.* **260**, 371–381.
Wibbels, T., Bull, J. J., and Crews, D. (1992). Hormone-induced sex determination in an amniotic vertebrate. *J. Exp. Zool.* **262**, 454–457.
Wibbels, T., Cowan, J., and LeBoeuf, R. (1998). Temperature-dependent sex determination in the red-eared slider turtle, *Trachemys scripta*. *J. Exp. Zool.* **281**, 409–416.
Wibbels, T., and Crews, D. (1992). Specificity of steroid hormone-induced sex determination in a turtle. *J. Endocrinol.* **133**, 121–129.
Wibbels, T., and Crews, D. (1994). Putative aromatase inhibitor induces male sex determination in a female unisexual lizard and in a turtle with temperature-dependent sex determination. *J. Endocrinol.* **141**, 295–299.
Wibbels, T., and Crews, D. (1995). Steroid-induced sex determination at incubation temperatures producing mixed sex ratios in a turtle with TSD. *Gen. Comp. Endocrinol.* **100**, 53–60.
Wong, M., Ramayya, M. S., Chrousos, G. P., Driggers, P. H., and Parker, K. L. (1996). Cloning and sequence analysis of the human gene encoding steroidogenic factor 1. *J. Mol. Endocrinol.* **17**, 139–147.
Woods, J. E., and Erton, L. H. (1978). The synthesis of estrogens in the gonads of the chick embryo. *Gen. Comp. Endocrinol.* **36**, 360–370.
Yntema, C. L. (1968). A series of stages in the embryonic development of *Chelydra serpentina*. *J. Morphol.* **125**, 219–252.
Young, L. J., Lopreato, G. F., Horan, K., and Crews, D. (1994). Cloning and *in situ* hybridization analysis of estrogen receptor, progesterone receptor, and androgen receptor expression in the brain of whiptail lizards (*Cnemidophorus uniparens* and *C. inornatus*). *J. Comp. Neurol.* **347**, 288–300.
The Council of the City of Roanoke met in regular session on Monday, October 19, 2020 at 2:00 p.m., in the Council Chamber, Room 450, fourth floor, Noel C. Taylor Municipal Building, 215 Church Avenue, S. W., City of Roanoke, with Mayor Sherman P. Lea, Sr., presiding, pursuant to Chapter 2, Administration, Article II, City Council, Section 2-15, Rules of Procedure, Rule 1, Regular Meetings, Code of the City of Roanoke (1979), as amended, and pursuant to Resolution No. 41772-070620 adopted by the Council on Monday, July 6, 2020.
PRESENT: Council Members William D. Bestpitch, Joseph L. Cobb (participated by electronic means), Michelle L. Davis, Anita J. Price, Patricia White-Boyd, and Mayor Sherman P. Lea, Sr.-6.
ABSENT: None-0.
The Mayor declared the existence of a quorum.
OFFICERS PRESENT: Robert S. Cowell, Jr., City Manager; Timothy R. Spencer, City Attorney; and Cecelia F. McCoy, City Clerk.
The Invocation was delivered by Council Member Anita J. Price.
The Pledge of Allegiance to the Flag of the United States of America was led by Mayor Sherman P. Lea, Sr.
PRESENTATIONS AND ACKNOWLEDGEMENTS: NONE.
HEARING OF CITIZENS UPON PUBLIC MATTERS: City Council sets this time as a priority for citizens to be heard. If deemed appropriate, matters will be referred to the City Manager for response, recommendation or report to the Council.
BICYCLE ACCESSIBILITY: Barbara N. Duerk, 2607 Rosalind Avenue, S. W., appeared before the Council regarding bicycle accessibility on Avenham Avenue, S. W.; and provided recommendations regarding equity and equality within City boards and commissions.
CONSENT AGENDA
The Mayor advised that all matters listed under the Consent Agenda were considered to be routine by the Members of Council and would be enacted by one motion in the form, or forms, listed on the Consent Agenda, and if discussion were desired, the item would be removed from the Consent Agenda and considered separately.
MINUTES OF THE REGULAR MEETING OF CITY COUNCIL: Minutes of the regular meeting of City Council held on Monday, October 5, 2020, was before the body.
(See Minutes on file in the City Clerk’s Office.)
Council Member Davis moved that the reading of the minutes be dispensed with and approved as recorded. The motion seconded by Council Member Bestpitch and adopted by the following vote:
AYES: Council Members Bestpitch, Cobb, Davis, Price, White-Boyd, and Mayor Lea-6.
NAYS: None-0.
2020 CITIZEN OF THE YEAR AWARD: A communication from Mayor Sherman P. Lea, Sr., requesting that Council convene in a Closed Meeting to discuss the 2020 Citizen of the Year Award, pursuant to Section 2.2-3711 (A)(10), Code of Virginia (1950), as amended, was before the body.
(See communication on file in the City Clerk’s Office.)
Council Member Davis moved that Council concur in the request of the Mayor as abovementioned. The motion seconded by Council Member Bestpitch and adopted by the following vote:
COMMITTEE VACANCY-SELECTION OF CANDIDATE: A communication from Mayor Sherman P. Lea, Sr., requesting that Council convene in a Closed Meeting to discuss a personnel matter, being the selection of a candidate to fill the unexpired term of office of Djuna Osborne as a Member of Roanoke City Council, pursuant to Section 2.2-3711 (A)(1), Code of Virginia (1950), as amended, was before the body.
(See communication on file in the City Clerk's Office.)
Council Member Davis moved that Council concur in the request of the Mayor as abovementioned. The motion seconded by Council Member Bestpitch and adopted by the following vote:
AYES: Council Members Bestpitch, Cobb, Davis, Price, White-Boyd, and Mayor Lea-6.
NAYS: None-0.
CONSULTATION WITH LEGAL COUNSEL AND BRIEFINGS BY STAFF MEMBERS OR CONSULTANTS PERTAINING TO ACTUAL LITIGATION: A communication from the City Attorney requesting that Council convene in a Closed Meeting to consult with legal counsel and hear briefings by staff members or consultants pertaining to actual litigation, where such consultation or briefing in open meeting would adversely affect the negotiating or litigation posture of the City, pursuant to Section 2.2-3711 (A)(7), Code of Virginia (1950), as amended, was before the body.
(See communication on file in the City Clerk's Office.)
Council Member Davis moved that Council concur in the request of the City Attorney as abovementioned. The motion seconded by Council Member Bestpitch and adopted by the following vote:
ACQUISITION OF REAL PROPERTY FOR PUBLIC PURPOSES: A communication from the City Manager requesting that Council convene in a Closed Meeting for discussion and consideration of the acquisition of real property for public purposes, pursuant to Section 2.2-3711 (A)(3), Code of Virginia (1950), as amended, was before the body.
(See communication on file in the City Clerk’s Office.)
Council Member Davis moved that Council concur in the request of the City Manager as abovementioned. The motion seconded by Council Member Bestpitch and adopted by the following vote:
AYES: Council Members Bestpitch, Cobb, Davis, Price, White-Boyd, and Mayor Lea-6.
NAYS: None-0.
SCHEDULE PUBLIC HEARING: A communication from the City Manager requesting that Council schedule a public hearing to be held on Monday, November 16, 2020, at 7:00 p.m., or as soon thereafter as the matter may be heard, or such later date and time as the City Manager shall determine, in his discretion, to enter into a Parking Agreement for unreserved parking permits for the Center in the Square Garage located at 11 Campbell Avenue, S. E., was before the body.
(See communication on file in the City Clerk’s Office.)
Council Member Davis moved that Council concur in the request of the City Manager as abovementioned. The motion seconded by Council Member Bestpitch and adopted by the following vote:
PUBLIC HEARINGS:
CANDIDATES FOR CITY COUNCIL VACANCY: Pursuant to instructions by the Council, the City Clerk having advertised a public hearing for Monday, October 19, 2020 at 7:00 p.m., or as soon thereafter as the matter may be heard, to receive views of citizens regarding the applicants interviewed on Monday, October 5, to fill the unexpired term of office of Djuna L. Osborne as a Member of Roanoke City Council, the matter was before the body. The candidates were Dr. John R. Clements, Elizabeth Doughty, Alvin L. Nash, Luke W. Priddy, and Vivian Sanchez-Jones.
Legal advertisement of the public hearing was published in *The Roanoke Times* on Friday, October 9, 2020, and *The Roanoke Tribune*, Thursday, October 15, 2020.
(See publisher’s affidavit on file in the City Clerk’s Office.)
The Mayor inquired if there were persons present who wished to speak in support of the abovementioned five candidates; whereupon, the following citizens appeared before Council:
In support of Dr. John Randolph Clements:
Morris Lusk, 2328 Colonial Avenue, S. W.
Bryan Barbato, 3364 Collingwood Street, N. E.
Peter Jessee, 3250 Allendale Street, S. W.
Clarissa Dailey, 918 Staunton Avenue, N. W.
Luke Bates, 2750 Avenel Avenue, S. W.
Bernarda “Benie” Thompson, 3408 Wellington Drive, S. E.
In support of Elizabeth Doughty:
Gwen Mason, 1015 First Street, S. W.
In support of Alvin L. Nash:
Vickie Royer, 3772 Norway Avenue, N. W.
Marcia Gunn, 1913 Rorer Avenue, S. W.
In support of Luke W. Priddy:
Leo Priddy, 435 Chin Quapin Trail, Christiansburg, Virginia
In support of Vivian Sanchez-Jones:
Matthew Jones, 6354 Township Drive, Roanoke County
Jordan Bell, 404 Gilmer Avenue, N. W.
Alhan Mahmoud, 1602 Salem Avenue, S. W.
Nada Melki, 417 Lakeview Drive, N. W.
Levy Romero, 1240 Summit Lane, N. W.
(See emails and letters of support on file in the City Clerk's Office.)
There being no further speakers, the Mayor declared the public hearing closed, and remarked that all comments would be received and filed. He also announced that the appointment would be made at the 7:00 p.m. session.
PETITIONS AND COMMUNICATIONS: NONE.
REPORTS OF CITY OFFICERS AND COMMENTS OF CITY MANAGER:
CITY MANAGER:
BRIEFINGS: NONE.
ITEMS RECOMMENDED FOR ACTION:
LOCAL MATCH AND PERFORMANCE AGREEMENTS: The City Manager submitted a written communication recommending execution of a Local Match Agreement and Performance Agreement among the City of Roanoke, Economic Development Authority and ASGN Incorporated in connection with expansion of its workforce.
(For full text, see communication on file in the City Clerk's Office.)
Council Member Davis offered the following ordinance:
(#41895-101920) AN ORDINANCE authorizing the proper City officials to execute a Local Match Agreement supporting a Commonwealth's Development Opportunity Fund Performance Agreement among the City of Roanoke, Virginia ("City"), the Economic Development Authority of the City of Roanoke, Virginia ("EDA"), Virginia Economic Development Partnership Authority ("VEDP"), and ASGN Incorporated ("ASGN"), that provides for a grant in the amount not to exceed $150,000 subject to certain undertakings and obligations by ASGN in connection with the creation of new jobs at ASGN's office located at 501 South Jefferson Street, Roanoke, Virginia; authorizing the City Manager to commit the City's portion of the Local Grant, defined below, with the requirement that ASGN achieve certain performance targets as described in the Local Match Agreement and to take such actions and execute such documents as may be necessary to provide for the implementation, administration, and enforcement of the Local Match Agreement; and dispensing with the second reading of this Ordinance by title.
(For full text of ordinance, see Ordinance Book No. 81, page 395.)
Council Member Davis moved the adoption of Ordinance No. 41895-101920. The motion seconded by Council Member Bestpitch and adopted by the following vote:
AYES: Council Members Bestpitch, Cobb, Davis, Price, White-Boyd, and Mayor Lea-6.
NAYS: None-0.
Council Member White-Boyd offered the following ordinance:
(#41896-101920) AN ORDINANCE authorizing the proper City officials to execute an Economic Development Job Grant Performance Agreement ("Performance Agreement") among the City of Roanoke, Virginia (the "City"), the Economic Development Authority of the City of Roanoke, Virginia (the "EDA"), and ASGN Incorporated ("ASGN"), that provides for a grant in the amount of $150,000 subject to certain undertakings and obligations by the parties in connection with the creation of new jobs at ASGN's office located at 501 South Jefferson Street, Roanoke, Virginia; authorizing the City Manager to take such actions and execute such documents as may be necessary to provide for the implementation, administration, and enforcement of such Performance Agreement; and dispensing with the second reading of this Ordinance by title.
(For full text of ordinance, see Ordinance Book No. 81, page 395.)
Council Member White-Boyd moved the adoption of Ordinance No. 41896-101920. The motion seconded by Council Member Price and adopted by the following vote:
AYES: Council Members Bestpitch, Cobb, Davis, Price, White-Boyd, and Mayor Lea-6.
NAYS: None-0.
CARES FUNDING-BOARD OF ELECTIONS: The City Manager submitted a written communication to repeal Budget Ordinance No. 41853-090820 appropriating funds in connection with the FY 20 Coronavirus Aid, Relief, and Economic Security Act – Coronavirus Relief Fund for the Board of Elections.
(For full text, see communication on file in the City Clerk’s Office.)
Council Member Bestpitch offered the following budget ordinance:
(#41897-101920) AN ORDINANCE to appropriate funding from the Virginia Department of Elections for the Coronavirus Aid, Relief, and Economic Security Act (CARES Act) – Coronavirus Relief Fund, amending and reordaining certain sections of the 2019 - 2020 Grant Fund Appropriations, and dispensing with the second reading by title of this ordinance.
(For full text of ordinance, see Ordinance Book No. 81, page 398.)
Council Member Bestpitch moved the adoption of Budget Ordinance No. 41897-101920. The motion seconded by Council Member Davis and adopted by the following vote:
AYES: Council Members Bestpitch, Cobb, Davis, Price, White-Boyd, and Mayor Lea-6.
NAYS: None-0.
COMMENTS OF CITY MANAGER: NONE.
REPORTS OF COMMITTEES:
VARIOUS EDUCATIONAL PROGRAMS: The Roanoke City School Board submitted a written report requesting appropriation of funds for various educational programs; and the City Manager submitted a written report recommending that Council concur in the request.
(For full text, see reports on file in the City Clerk’s Office.)
Council Member White-Boyd offered the following budget ordinance:
(#41898-101920) AN ORDINANCE to appropriate funding from the Commonwealth, federal and private grant for various educational programs, amending and reordaining certain sections of the 2020 - 2021 School Grant Fund Appropriations, and dispensing with the second reading by title of this ordinance.
(For full text of ordinance, see Ordinance Book No. 81, page 399.)
Council Member White-Boyd moved the adoption of Budget Ordinance No. 41898-101920. The motion seconded by Council Member Price and adopted by the following vote:
AYES: Council Members Bestpitch, Cobb, Davis, Price, White-Boyd, and Mayor Lea-6.
NAYS: None-0.
UNFINISHED BUSINESS: NONE.
INTRODUCTION AND CONSIDERATION OF ORDINANCES AND RESOLUTIONS: NONE.
MOTION AND MISCELLANEOUS BUSINESS:
INQUIRIES AND/OR COMMENTS BY THE MAYOR AND MEMBERS OF COUNCIL:
FREE RIDES ON ELECTION DAY: Council Member Price announced that High Street Baptist Church is offering free rides to the General Election voting precincts now through Tuesday, November 3, 2020; contact the church office at 540-563-0123 for additional information.
MOVEMBER ANNUAL EVENT: Council Member Bestpitch announced that the month of November was "Movember," an annual event to raise awareness of men's health issues, and encouraged all men to get annual check-ups and to grow moustaches or beards during November in support of "Movember." He also brought attention to the wording on a plaque at the north entrance to the Municipal Building from Campbell Avenue, which refers to a fort erected in 1756 "to guard the frontier from hostile Indians" and to "the massacre of pioneer settlers in the Roanoke Valley by Delaware and Mingo Indians" in 1764, and offered to work with a Roanoke College professor of Colonial and Early American history to develop an interpretive sign that could be displayed next to the plaque.
FREE RIDES ON ELECTION DAY: Council Member White-Boyd reminded the public that Valley Metro would be providing free rides to the election polls on Election Day, Tuesday, November 3, 2020.
EARLY VOTING: Council Member Davis advised that 24 percent of City of Roanoke citizens have elected to vote early in person or dropped off completed absentee ballots and thanked residents for supporting the very important election.
VACANCIES ON CERTAIN AUTHORITIES, BOARDS, COMMISSIONS AND COMMITTEES APPOINTED BY COUNCIL:
TASK FORCE TO REDUCE GUN VIOLENCE: The Vice-Mayor called attention to a vacancy on the Task Force to Reduce Gun Violence.
Vice-Mayor Cobb placed in nomination the name of Hannah Oakes.
There being no further nominations, Ms. Oakes was appointed to replace Troy Gusler as a member of the Task Force to Reduce Gun Violence, by the following vote:
AYES: Council Members Bestpitch, Cobb, Davis, Price, White-Boyd, and Mayor Lea-6.
NAYS: None-0.
At 3:28 p.m., Mayor Lea declared the Council Meeting in recess for a Closed Meeting; and thereafter to reconvene at 7:00 p.m., in the Council Chamber, Room 450, Noel C. Taylor Municipal Building.
At 7:00 p.m., the Council meeting reconvened in the Council Chamber, Room 450, fourth floor, Noel C. Taylor Municipal Building, with Mayor Sherman P. Lea, Sr., presiding.
PRESENT: Council Members William D. Bestpitch, Joseph L. Cobb (participated by electronic means), Michelle L. Davis, Anita J. Price, Patricia White-Boyd, and Mayor Sherman P. Lea, Sr.-6.
ABSENT: None-0.
The Mayor declared the existence of a quorum.
OFFICERS PRESENT: Robert S. Cowell, Jr., City Manager; Timothy R. Spencer, City Attorney; and Cecelia F. McCoy, City Clerk.
The Invocation was delivered by The Reverend Kevin McNeil, Pastor, Bethany Christian Church.
The Pledge of Allegiance to the Flag of the United States of America was led by Mayor Sherman P. Lea, Sr.
CERTIFICATION OF CLOSED MEETING: With respect to the Closed Meeting just concluded, Council Member Price moved that each Member of City Council certify to the best of his or her knowledge that: (1) only public business matters lawfully exempted from open meeting requirements under the Virginia Freedom of Information Act were discussed; and (2) only such public business matters as were identified in any motion by which any Closed Meeting was convened were heard, discussed or considered by City Council. The motion seconded by Council Member White-Boyd and adopted by the following vote:
PUBLIC HEARINGS:
VACATE, DISCONTINUE AND CLOSE AN ALLEY: Pursuant to Resolution No. 25523 adopted by the Council on Monday, April 6, 1981, the City Clerk having advertised a public hearing for Monday, October 19, 2020, at 7:00 p.m., or as soon thereafter as the matter may be heard, on a request of Elizabeth C. Barbour, Esquire, to vacate, discontinue and close an alleyway between two residential properties located at 708 and 712 Arbutus Avenue, S. E., the matter was before the body.
Legal advertisement of the public hearing was published in *The Roanoke Times* on Tuesday, September 29, 2020 and Tuesday, October 6, 2020.
(See publisher's affidavit on file in the City Clerk's Office.)
The City Planning Commission submitted a written report recommending approval of the vacation of right-of-way as requested, contingent upon certain conditions.
(For full text, see report on file in the City Clerk's Office.)
Council Member White-Boyd offered the following ordinance:
(#41899-101920) AN ORDINANCE permanently vacating, discontinuing and closing a public right-of-way in the City of Roanoke located on 708 and 712 Arbutus Avenue, S. E., as more particularly described hereinafter; and dispensing with the second reading of this ordinance by title.
(For full text of ordinance, see Ordinance Book No. 81, page 400.)
Council Member White-Boyd moved the adoption of Ordinance No. 41899-101920. The motion seconded by Council Member Price.
The Mayor inquired if there were persons present who wished to speak on the matter. There being none, he declared the public hearing closed.
There being no comments and/or questions by the Council Members, Ordinance No. 41899-101920 was adopted by the following vote:
ZONING: Pursuant to Resolution No. 25523 adopted by the Council on Monday, April 6, 1981, the City Clerk having advertised a public hearing for Monday, October 19, 2020, at 7:00 p.m., or as soon thereafter as the matter may be heard, on a request of ABRE Holdings, Inc., to rezone property located at 3402 and 3410 Avenham Avenue, S. W., and 562 Dillard Road, S. W., from R-12, Residential Single-Family District, to MXPUD, Mixed Use Planned Unit Development District, the matter was before the body.
Legal advertisement of the public hearing was published in *The Roanoke Times* on Monday, October 5, 2020 and Monday, October 12, 2020.
(See publisher’s affidavit on file in the City Clerk’s Office.)
The City Planning Commission submitted a written report recommending the approval of the rezoning request, finding that the Amended Application No. 1, as amended at the hearing and subsequently submitted as Amended Application No. 2, is consistent with the general principles within the City’s Comprehensive Plan, South Roanoke Neighborhood Plan, and Zoning Ordinance as the subject property will be developed and used in a manner appropriate to the surrounding area.
(For full text, see report on file in the City Clerk’s Office.)
Council Member Bestpitch offered the following ordinance:
(#41900-101920) AN ORDINANCE to rezone certain properties located at 3402 and 3410 Avenham Avenue, S. W., and 562 Dillard Road, S. W., from R-12, Residential Single-Family District, to MXPUD, Mixed Use Planned Unit Development District, subject to a certain condition proffered by the applicant; and dispensing with the second reading of this ordinance by title.
(For full text of ordinance, see Ordinance Book No. 81, page 403.)
Council Member Bestpitch moved the adoption of Ordinance No. 41900-101920. The motion seconded by Council Member Davis.
The Mayor inquired if there were persons present who wished to speak on the matter; there being none, the Mayor declared the public hearing closed.
There being no comments and/or questions by the Council Members, Ordinance No. 41900-101920 was adopted by the following vote:
AYES: Council Members Bestpitch, Cobb, Davis, Price, White-Boyd, and Mayor Lea-6.
NAYS: None-0.
COMMERCIAL PROPERTY ASSESSED CLEAN ENERGY (C-PACE) PROGRAM: Pursuant to instructions by the Council, the City Clerk having advertised a public hearing for Monday, October 19, 2020 at 7:00 p.m., or as soon thereafter as the matter may be heard, on a proposal of the City of Roanoke to consider the adoption of a Commercial Property Assessed Clean Energy (C-PACE) Program to provide C-PACE loans for the initial acquisition and installation of eligible clean energy, resiliency, and/or stormwater management improvements with willing owners of qualifying properties, which may include renovations to existing properties or new construction, the matter was before the body.
Legal advertisement of the public hearing was published in *The Roanoke Times* on Monday, October 5, 2020 and Monday, October 12, 2020.
(See publisher's affidavit on file in the City Clerk's Office.)
The City Manager submitted a written report recommending adoption of the C-PACE Ordinance and Financing Agreement, execution of any and all activities to procure a program administrator and all other necessary actions to implement the program in accordance with the Ordinance and the C-PACE Statute (Virginia Code Section 15.2-958.3).
(For full text, see report on file in the City Clerk's Office.)
Council Member Bestpitch offered the following ordinance:
(#41901-101920) AN ORDINANCE amending the Code of the City of Roanoke (1979), as amended, by adding Chapter 32.3, Commercial Property Assessed Clean Energy (C-PACE) Financing Program; establishing an effective date; and dispensing with the second reading of this ordinance.
(For full text of ordinance, see Ordinance Book No. 81, page 404.)
Council Member Bestpitch moved the adoption of Ordinance No. 41901-101920. The motion seconded by Council Member Price.
The Mayor inquired if there were persons present who wished to speak on the matter; there being none, the Mayor declared the public hearing closed.
There being no comments and/or questions by the Council Members, Ordinance No. 41901-101920 was adopted by the following vote:
AYES: Council Members Bestpitch, Cobb, Davis, Price, White-Boyd, and Mayor Lea-6.
NAYS: None-0.
OTHER BUSINESS:
INTERIM CITY COUNCIL MEMBER: Council Member White-Boyd offered the following resolution appointing an interim City Council Member to fill the Council vacancy created by the resignation of Djuna L. Osborne for a term commencing upon qualification and expiring December 31, 2022:
(#41902-101920) A RESOLUTION appointing Vivian Sanchez-Jones as a member of City Council for the City of Roanoke in accordance with Section 4 of the City Charter and Virginia Code Section 24.2-228 for a term commencing upon qualification and expiring on December 31, 2022.
(For full text of resolution, see Resolution Book No. 81, page 416.)
Council Member Bestpitch stated that Council was extremely fortunate to have five well-qualified applicants interviewed for the appointment and Vivian Sanchez-Jones, was chosen to bring something to the Council that they have not had.
Council Member Price commented that the decision was not easy; and she thanked all the candidates and encouraged them to continue to look for opportunities in which to share their gifts and talents.
Council Member White-Boyd moved the adoption of Resolution No. 41902-101920. The motion seconded by Council Member Price and adopted by the following vote:
HEARING OF CITIZENS UPON PUBLIC MATTERS: City Council sets this time as a priority for citizens to be heard. If deemed appropriate, matters will be referred to the City Manager for response, recommendation or report to the Council.
TRANSPARENCY OF THE ROANOKE CITY POLICE DEPARTMENT: Aaron Stevenson, 1022 Alison Lane, Vinton, Virginia, appeared before the Council regarding the lack of transparency in the Roanoke City Police Department.
MISMANAGED MOUNTAIN BIKING TRAILS: Ian Bongard, 7881 Bradshaw Road, Salem, Virginia, appeared before the Council regarding how the City mismanaged mountain biking trails and how to work together to benefit all.
There being no further business to come before the Council, Mayor Lea declared the regular meeting adjourned at 7:36 p.m.
APPROVED
ATTEST:
Cecelia F. McCoy
Cecelia F. McCoy, CMC
City Clerk
Sherman P. Lea, Sr.
Mayor
Impact of eBusiness technologies on operational performance: The role of production information integration in the supply chain
Sarv Devaraj, Lee Krajewski*, Jerry C. Wei
Management Department, Mendoza College of Business, University of Notre Dame, Notre Dame, IN 46556, United States
Available online 17 January 2007
Abstract
While the information technology (IT) literature is mixed regarding the direct benefits of eBusiness technologies on performance, the impact of such technologies on supply chain practices remains largely an unexplored area of research. We hypothesize that while there may be no direct benefit of eBusiness technologies on performance, these technologies might support customer integration and supplier integration in the supply chain, which in turn might impact operating performance.
To examine our hypotheses, we collected data from respondents who focused their responses on a single major product, the process that manufactures it, a significant customer, and an important supplier. Our analyses showed that there was no direct benefit of eBusiness technologies on performance; however, these technologies supported customer integration and supplier integration. Further, supplier integration was found to positively impact cost, quality, flexibility, and delivery performance, whereas there was no relationship between customer integration and performance. Consequently, there is a relationship between eBusiness technologies and supplier integration that leads to better performance. Further, there is an interactive effect between customer integration and supplier integration, supporting the notion that firms with both forms of integration, backed by eBusiness technologies, significantly outperform the others.
© 2007 Elsevier B.V. All rights reserved.
Keywords: eBusiness technologies; Customer integration; Supplier integration; Operational performance; Supply chain management
1. Introduction
With the advent of new eBusiness technologies, firms have engaged in initiatives that link supply chain processes across enterprises to create efficiencies and gain a competitive edge. The thrust of investment in eBusiness technologies is to create a seamless integration of entities in a supply chain, which calls for the sharing of accurate and timely information and the coordination of activities between business entities. Distorted information from one end of a supply chain to the other can lead to exaggerated order swings causing tremendous inefficiencies (Lee et al., 1997).
Despite the widespread adoption of eBusiness technologies, it is not clear whether eBusiness technologies have a direct effect on supply chain performance. Certainly, firms invest in eBusiness technologies with the presumption that they will facilitate supply chain integration and that performance will improve. Executives consider “supply chain planning,” “linkages with customers,” and “linkages with suppliers” to offer the greatest operational improvement opportunities, all of which are the capabilities most often transformed by eBusiness technologies in supply chain initiatives (D’Avanzo et al., 2003). Unfortunately, managerial expectations for these technologies have exceeded actual performance (Poirier and Quinn, 2003). Further, while the payoff from investing in information technologies has been a subject of long-standing academic research and intense discussion, no clear conclusion has resulted. Payoffs from IT have been and continue to be open to debate in the literature, where the issue is called the “IT paradox” (Brynjolfsson and Yang, 1996). Largely due to the nature of the research designs employed, this stream of research has not definitively attributed the impact of individual technologies on organizational performance (Lee and Barua, 1999; Devaraj and Kohli, 2003). A direct linkage between eBusiness technologies and supply chain performance still remains elusive. Thus, the first critical research question that this study seeks to address is “Do eBusiness technologies have a significant impact on supply chain performance?”
One possible reason for the dissatisfaction with the performance of eBusiness technologies on the part of supply chain executives is that technology solutions were selected before certain process improvements were made, thereby diluting the paybacks for these investments (Poirier and Quinn, 2003; Zhu and Kraemer, 2002). A potential remedy may be the development of processes to improve integration, which can enhance relationships with distribution channel partners (Johnson, 1999) or cultivate supplier capabilities (Krause et al., 1998). To realize these benefits, firms use eBusiness technologies to engage in information sharing and other forms of collaboration between customers and suppliers that address the issues of production planning and scheduling of their products. We will refer to this specific form of integration as *production information integration*. New eBusiness technologies facilitate quick information sharing between downstream and upstream partners and enable companies like Dell Computer to “trade inventory for information” (Milgrom and Roberts, 1988; Dell, 1999). Capturing and sharing real-time information has become essential to improving supply chain performance. Timely information sharing helps speed up decision making and often results in shorter lead times and smaller batch sizes (Cachon and Fisher, 2000). In addition to information sharing, eBusiness technologies also facilitate the collaboration of supply chain entities. Examples include jointly developing demand forecasting (Koloczyk, 1998; Aviv, 2001) and vendor-managed inventory (VMI), also referred to as direct shipment or automatic replenishment (Buzzell and Ortmeyer, 1995; Cetinkaya and Lee, 2000; Kulp et al., 2004). 
Conceivably, if eBusiness technologies for production planning and scheduling do not have a direct effect on firm performance, they may have an *indirect* effect on performance via their impact on the processes developed for supplier and customer production information integration. This possibility has not been addressed in the literature. Thus, our second research question is “Does production information integration constitute an important link in the pathway from eBusiness technology to supply chain performance?”
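The distinction between a direct effect and an indirect (mediated) effect can be illustrated with a small simulation. The sketch below is a hedged illustration using invented data and simple least-squares regressions, not the paper's actual estimation procedure: eBusiness capability (E) is assumed to have no direct path to performance (P) but to work entirely through production information integration (I).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data-generating model (illustrative assumption):
# E -> I with path a = 0.6; I -> P with path b = 0.5; direct path c' = 0.
E = rng.normal(size=n)                        # eBusiness capability
I = 0.6 * E + rng.normal(size=n)              # production information integration
P = 0.0 * E + 0.5 * I + rng.normal(size=n)    # operational performance

def ols(y, *xs):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(I, E)[1]               # effect of E on the mediator I
cprime, b = ols(P, E, I)[1:]   # direct effect of E, and effect of I, on P
indirect = a * b               # indirect effect of E on P via I

print(f"direct effect c'  ~ {cprime:.2f}")    # close to 0
print(f"indirect effect ab ~ {indirect:.2f}")  # close to 0.6 * 0.5 = 0.30
```

Under this setup a naive regression of performance on technology alone would show an association, yet the fitted direct path is essentially zero once the mediator is included: the entire effect travels through integration, which is precisely the pattern the second research question probes.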
As indicated by Kauffman and Walden (2001), much of the existing eBusiness literature still relies heavily on case studies and anecdotes, with few empirical studies to measure Internet-based initiatives or gauge the scale of their impact on firm performance. Thus, there is a paucity of scientific analysis that clearly establishes the impact of eBusiness technology on strategic measures (Mukhopadhyay and Kekre, 2002). Since the evidence of both the success and failure of eBusiness initiatives has been generally anecdotal, we use a rigorous survey methodology in this paper to answer the two research questions. Specifically, we developed a set of hypotheses based on the literature to empirically test the pathway from eBusiness technology to performance, mediated by information integration. A unique aspect of our data set is that the respondents were asked to focus on their major product, the specific process that manufactures that product, the most important customer for the product, and the most important supplier for parts or components for the product. All data for the production information integration and performance variables were gathered in that context, which allowed the respondents to be specific about the value chain.
### 2. Literature and hypotheses
Our overarching premise is that eBusiness technologies add value to supply chain operations by enhancing production information integration. The focus of this paper is to provide insight into how firms can realize the benefits of those technologies. In this section we develop three constructs and a set of theory-based hypotheses on the role of eBusiness technologies in supply chain performance. These hypotheses are supported by three theories: the resource-based view, the relational view, and the theory of swift and even flow, which will be discussed later. We performed an extensive survey of the literature that spanned the three areas pertinent to this study: eBusiness capabilities, production information integration, and operational performance. Table 1 is a concise summary of the representative references categorized by the three constructs mentioned above.
Table 1
Constructs and supporting literature
| Constructs | Key references |
|-----------------------------------|---------------------------------------------------------------------------------|
| eBusiness capabilities | |
| EBUSCUST | |
| (1) Customer buying online | Frohlich and Westbrook (2002), Lancioni et al. (2000), Mukhopadhyay and Kekre (2002) and Zhu et al. (2004) |
| (2) Customer configuration/customization | Barua et al. (2004), Pflughoef et al. (2003), Ranganathan et al. (2004) and Zhu and Kraemer (2002) |
| (3) Customer online order tracking | Barua et al. (2004), Chen and Paulraj (2004), Frohlich and Westbrook (2002) and Zhu and Kraemer (2002) |
| EBUSPUR | |
| (4) Find/select suppliers | Barua et al. (2004), Pflughoef et al. (2003), Zhu and Kraemer (2002) and Zhu et al. (2004) |
| (5) Online purchasing/auctions | Barua et al. (2004), Poirier and Quinn (2003), Ranganathan et al. (2004) and Zhu and Kraemer (2002) |
| EBUSCOLL | |
| (6) Web-based EDI | Frohlich and Westbrook (2001), Hill and Scudder (2002), Mukhopadhyay and Kekre (2002) and Zhu and Kraemer (2002) |
| (7) Collaboration on forecast, schedule, and replenishment | Barua et al. (2004), Frohlich and Westbrook (2002) and Hill and Scudder (2002) |
| (8) Advanced planning and scheduling (APS) | Barua et al. (2004), Frohlich and Westbrook (2002) and Poirier and Quinn (2003) |
| Production information integration| |
| (1) Sales forecast | Barua et al. (2004), Cachon and Lariviere (2001), Frohlich and Westbrook (2001), Krajewski and Wei (2001) and Lee et al. (1997) |
| (2) MPS | Barua et al. (2004), Frohlich and Westbrook (2001), Krajewski and Wei (2001) and Lancioni et al. (2000) |
| (3) Inventory | Barua et al. (2004), Frohlich and Westbrook (2001), Krajewski and Wei (2001), Lee et al. (1997) and Zhu and Kraemer (2002) |
| (4) Collaboration on net requirements | Barua et al. (2004), Cachon and Lariviere (2001), Krajewski and Wei (2001), Lee et al. (1997) and Zhu and Kraemer (2002) |
| (5) Supplier automatically replenishes inventory (VMI) | Buzzell and Ortmeyer (1995) and Lee et al. (1997) |
| Operational performance | |
| (1) Percent returns | Frohlich and Westbrook (2001), Poirier and Quinn (2003) and Rosenzweig et al. (2003) |
| (2) Percent defects | Frohlich and Westbrook (2001) and Rosenzweig et al. (2003) |
| (3) Delivery speed | Buzzell and Ortmeyer (1995), Chen and Paulraj (2004), Frohlich and Westbrook (2001) and Frohlich and Westbrook (2002) |
| (4) Delivery reliability | Buzzell and Ortmeyer (1995), Chen and Paulraj (2004), Poirier and Quinn (2003) and Rosenzweig et al. (2003) |
| (5) Production costs | Chen and Paulraj (2004), Frohlich and Westbrook (2001), Frohlich and Westbrook (2002), Poirier and Quinn (2003), Rosenzweig et al. (2003) and Zhu and Kraemer (2002) |
| (6) Production lead time | Buzzell and Ortmeyer (1995), Frohlich and Westbrook (2001), Ranganathan et al. (2004) and Rosenzweig et al. (2003) |
| (7) Inventory turns | Frohlich and Westbrook (2001), Ranganathan et al. (2004) and Zhu and Kraemer (2002) |
| (8) Flexibility | Chen and Paulraj (2004) and Rosenzweig et al. (2003) |
### 2.1. eBusiness capabilities
eBusiness capability is the ability of a firm to use Internet technologies to share information, process transactions, coordinate activities, and facilitate collaboration with suppliers and customers. Clearly, traditional modes of communication such as phone and fax are still used by many firms to do business with customers and suppliers. Nonetheless, we focus on eBusiness technologies because so little is known about their impact on performance (Devaraj and Kohli, 2003; Mukhopadhyay and Kekre, 2002). In addition, many firms are using the Internet to do business in their supply chains. Ninety percent of the respondents in a survey reported in Lancioni et al. (2000) used the Internet in some part of their supply chain program. Although some firms have successfully integrated eBusiness technologies into their traditional bricks-and-mortar business models, many others still struggle with implementing and justifying eBusiness initiatives (Barua et al., 2000). The potential benefits for supply chains, however, are significant. Organizations use the web to improve customer relations by providing easier access to information, developing more flexibility to respond to customer information requests, and speeding up the transaction time to shorten product cycles (Lederer et al., 2001). Information technology has vast potential to facilitate collaborative planning among supply chain partners by sharing information on demand forecasts and production schedules (Chen and Paulraj, 2004). Through the Internet, information technology also enhances supply chain efficiency by providing real-time information regarding product availability, inventory levels, shipment status, and production requirements (Radstaak and Ketelaar, 1998; Lancioni et al., 2000; Chen and Paulraj, 2004).
Boone and Ganeshan (2001) indicate that the relationship between information technology implementation and productivity is determined in part by the use of technology. Information technology that becomes a part of the production process is associated with productivity improvements, unlike information technology that only documents or collects information. The mere institution of traditional EDI no longer results in strategic benefits to the supply chain; advancements such as fully integrated order-processing systems or electronic invoicing systems are also required (Mukhopadhyay and Kekre, 2002). The Internet, however, has enhanced traditional EDI systems by making them more flexible and affordable to smaller businesses (Lancioni et al., 2000; Zhu and Kraemer, 2002; Zhu et al., 2004). Nonetheless, many firms have gone beyond the confines of EDI and incorporated a multitude of Internet-based technologies to facilitate the connections between customers and suppliers.
Our eBusiness capability construct includes a broad set of technologies that are being used by firms to manage their supply chains. Table 1 shows the eight technologies we address in this study. While many more technologies exist, this set is based on the literature of the research and practicing communities and relates nicely to the types of technology most firms are acquiring to advance their supply chain competency: inventory planning and optimization, web-based applications, advanced planning and scheduling, and e-procurement systems (Poirier and Quinn, 2003). We classify the eight eBusiness technologies into three categories depending on their focus. The first category of technologies focuses on the demand side, which we call EBUSCUST, and relates to allowing customers to order online, configure or customize products online, and check the status of orders online. The second set of technologies focuses on the supply side, which we call EBUSPUR, and addresses the capability of the company to find and select suppliers online and purchase material through online auctions. Finally, the third set of technologies focuses on collaboration with customers or suppliers, which we call EBUSCOLL, and relates to web-based EDI, forecasting, inventory replenishment, and scheduling capabilities. Our eBusiness capability construct focuses on those technologies that conceivably relate to production information integration, which is the focus of this study.
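As a concrete illustration of how the three capability indices could be operationalized from the eight Table 1 items, the sketch below groups the items into EBUSCUST (items 1-3), EBUSPUR (items 4-5), and EBUSCOLL (items 6-8). The Likert-style responses and the simple item-mean scoring are illustrative assumptions, not the paper's actual measurement model.

```python
import numpy as np

# Hypothetical survey responses for one firm (1-7 scale, invented values).
items = {
    "customer_buying_online": 6,          # item 1
    "configuration_customization": 5,     # item 2
    "online_order_tracking": 6,           # item 3
    "find_select_suppliers": 3,           # item 4
    "online_purchasing_auctions": 2,      # item 5
    "web_based_edi": 4,                   # item 6
    "collaborative_planning": 5,          # item 7
    "advanced_planning_scheduling": 3,    # item 8
}

# Grouping follows the text; scoring by item mean is an assumption.
groups = {
    "EBUSCUST": ["customer_buying_online", "configuration_customization",
                 "online_order_tracking"],
    "EBUSPUR":  ["find_select_suppliers", "online_purchasing_auctions"],
    "EBUSCOLL": ["web_based_edi", "collaborative_planning",
                 "advanced_planning_scheduling"],
}

scores = {name: float(np.mean([items[k] for k in keys]))
          for name, keys in groups.items()}
print(scores)
```

A firm with the responses above would score high on the demand-side index, low on online purchasing, and moderately on collaboration, matching the intuition that the three capability sets can be adopted unevenly.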
### 2.2. Production information integration
Our production information integration construct embodies the nature of the information that is shared between entities in a supply chain and supported by the collaborative efforts that result in improved production information accuracy. Information sharing can be divided into demand-oriented and supply-oriented information sharing. Demand-oriented information sharing includes the sharing of real-time point-of-sales data, sales forecasts (Cachon and Lariviere, 2001; Aviv, 2001), customer profiling, and customer relationship management (Frohlich and Westbrook, 2002). Supply-oriented information sharing includes inventory ordering policies, inventory levels (Gavirneni et al., 1999) and master production schedules (Narasimhan and Das, 2001; Lancioni et al., 2000; Frohlich and Westbrook, 2001). Firms vary in the intensity of their production information integration depending upon the degree of Internet-based demand integration and the degree of Internet-based supply integration in their strategy. These strategies can range from little or no Internet-based integration to total integration from customers to suppliers (Frohlich and Westbrook, 2002; Straub et al., 2004).
Collaborative planning efforts must be considered along with information sharing to get a complete picture of production information integration. One of the most often cited initiatives firms put in place to enhance supply chain performance is collaborative planning with key customers/suppliers (Poirier and Quinn, 2003). One such form of collaborative planning is VMI, which has been shown to be positively related to manufacturer’s margins (Kulp et al., 2004). Conceivably, information sharing gives firms a first step in supply chain integration; however, it is the capability for collaboration that separates the most successful firms from the rest of the pack.
Integration with supply chain partners is reflected by the five measures that are included in our production information integration construct, shown in Table 1. Demand forecasts from the customer provide suppliers more visibility to plan for capacity and material requirements (Lee et al., 1997; Frohlich and Westbrook, 2001). Further, sharing master production schedules with suppliers reduces forecast uncertainty and enables more detailed production quantity and timing (Lancioni et al., 2000; Wei and Krajewski, 2000; Krajewski and Wei, 2001). In addition, when a supplier has access to the customer’s inventory status, more precise replenishment production and shipment can be scheduled.
A company invests in eBusiness technologies to facilitate more information sharing and collaboration with either its suppliers or its customers in the value chain. Web-based technologies that process purchase orders and track or expedite shipments are often cited as critical integration tools (Frohlich and Westbrook, 2002; Chen and Paulraj, 2004; Barua et al., 2004). Thus, we expect that eBusiness capability will have a positive influence on production information integration. Further, as Frohlich and Westbrook (2001, 2002) point out, integration efforts can take place with customers and/or suppliers. Consequently, our production information integration construct applies to customers separately from suppliers, which we will refer to as customer integration and supplier integration, respectively. This discussion leads to the following hypotheses, all else being equal.
**Hypothesis 1.** eBusiness capability influences the degree of customer integration.
**Hypothesis 1a.** EBUSCUST has a positive influence on the degree of customer integration.
**Hypothesis 1b.** EBUSPUR has a positive influence on the degree of customer integration.
**Hypothesis 1c.** EBUSCOLL has a positive influence on the degree of customer integration.
**Hypothesis 2.** eBusiness capability influences the degree of supplier integration.
**Hypothesis 2a.** EBUSCUST has a positive influence on the degree of supplier integration.
**Hypothesis 2b.** EBUSPUR has a positive influence on the degree of supplier integration.
**Hypothesis 2c.** EBUSCOLL has a positive influence on the degree of supplier integration.
### 2.3. Operational performance
There is growing empirical evidence suggesting that higher levels of integration along the supply chain are associated with greater potential benefits. Armistead and Mapes (1993) use five items to measure strength of integration, including the extent of shared ownership of master production schedules and the level of information system integration. The results of their study indicated that increasing the level of integration does increase operating performance in quality, cost, delivery time, and flexibility. Using data from 215 North American manufacturing firms, Narasimhan and Jayaram (1998) propose that supply chain integration impacts external customer responsiveness and internal manufacturing performance via the key linkage between sourcing and degree of manufacturing goal achievement. Johnson (1999) shows that strategic integration results in enhanced economic rewards for the customer (distributor) firm. Based on the data from 57 tier-1 automotive suppliers, Vickery et al. (2003) found that customer service fully mediates the relationship between supply chain integration and firm performance. There was no significant direct relationship to firm performance from supply chain integration, or from integrated information technology.
While most studies in the literature focus on either the supply side or the demand side of integration, Frohlich and Westbrook (2001) address the question “Is it more important to link with suppliers, customers, or both?” Among the industrial goods manufacturers they surveyed, the firms with the greatest degree of integration, with both suppliers and customers, had the strongest performance improvement. In addition, firms that participated in the survey had a stronger degree of integration with their suppliers than with their customers in the following integrative activities: access to planning systems, sharing production plans, and knowledge of inventory mix/levels. In a separate but related study, Frohlich and Westbrook (2002) further distinguish web-based demand chain integration from supply chain integration. They report that manufacturing and services firms adopting both the demand integration and supply integration had the highest operational performance in delivery time, transaction costs, profitability, and inventory turnover. Ranganathan et al. (2004) found significant positive associations between the benefits realized from web technologies and both internal assimilation and external diffusion of such technologies to suppliers.
Rosenzweig et al. (2003) introduce supply chain integration intensity as a proxy variable for Frohlich and Westbrook’s (2001) “arc of integration.” Their study involves a sample of 238 consumer products companies dominated by market leaders. They found no empirical evidence to support a direct effect between integration intensity and sales growth or customer satisfaction. Rather, it appears that the benefits of integration must first be translated into operational capabilities, such as product quality, delivery reliability, process flexibility, and cost. In other words, these operational capabilities mediate the relationship between supply chain integration and multiple measures of business performance. Our operational performance construct, shown in Table 1, also includes measures for cost, quality, delivery, and flexibility. Therefore, based on the above arguments, we propose the following hypotheses.
**Hypothesis 3.** A higher degree of customer integration leads to improved operational performance.
**Hypothesis 4.** A higher degree of supplier integration leads to improved operational performance.
### 2.4. Underlying theoretical support
While the above discussion provides support for our research model, our study of the relationships between eBusiness capabilities, production information integration, and operational performance is grounded in three theories: the resource-based view (RBV), the relational view, and the theory of swift and even flow. While the first two theories have a basis in strategic management, the third is rooted in the operations management literature. The resource-based view holds that a firm’s performance is founded on unique resources and capabilities that are hard to imitate, conditions made possible by resource heterogeneity and immobility (Wernerfelt, 1984; Barney, 1991; Peteraf, 1993). In the context of RBV, studies have shown that the ability of a firm to develop and exploit new technologies and organizational processes, including eBusiness capabilities, will lead to sustainable competitive advantages (Mata et al., 1995; Teece et al., 1997; Straub and Klein, 2001; Zhu and Kraemer, 2002; Zhu, 2004). RBV provides the underlying foundation for the development of our eBusiness construct because this construct describes a firm’s capability in eBusiness, which may be applied in a host of ways other than production information integration.
From the perspective of the relational view, a firm’s critical resources may span its boundaries and be embedded in inter-firm resources and relationships (Dyer and Singh, 1998). This view takes the position that RBV tends to focus on the resources housed within a firm and, hence, overlooks the network of relationships in which the firm is embedded (Powell et al., 1996; Dyer, 1996, 1997). While RBV seeks to understand a firm’s competitive advantage from a resource perspective, the relational view focuses on dyad or network relationships and processes (Straub et al., 2004; Chen and Paulraj, 2004). In this research, our production information integration construct is grounded in the relational view because we focus on a major product and its manufacturing process. We then investigate the intensity of integration specific to the product and process between the respondent firm and its customer, and between the respondent firm and its supplier, respectively.
Finally, our study is also grounded in the operations management theory of swift and even flow (Schmenner and Swink, 1998). The theory holds that the swifter and more even the flow of materials through a process, the more productive that process is. Thus, productivity for any process rises with the speed at which material flows through the process, and falls with increases in the variability associated with the flow. We postulate that strong eBusiness capabilities will lead to better production information integration with both the supplier and the customer. As a result, materials will flow through the supply chain more swiftly and evenly and bring forth improved operational performance.
Based on this discussion, Fig. 1 shows the hypothesized model.
## 3. Research design
### 3.1. Survey development
The survey was created in several stages. Initially, we developed questions involving eBusiness capabilities, production information integration, and operating performance based on the literature (see Appendix A). Next, we asked eight managers to take the survey and give us their suggestions for improvement and clarity. Finally, as a result of that pre-test, we made a number of changes to the instrument.
### 3.2. Sample
The target audience for the survey was selected from the following industries and SIC codes: computer components and peripherals (SIC 357), printed circuit boards and semiconductors (SIC 367), electronic equipment and supplies (SIC 369), and automotive bodies and parts (SIC 371). These SIC codes are within the ranges of recent supply chain studies (Dyer, 1996, 1997; Krause et al., 1998; Narasimhan and Das, 2001; Zhu and Kraemer, 2002; Chen and Paulraj, 2004). We used a stratified random sample of companies to ensure coverage of the target industries. The contact information was obtained from a professional database. To ensure that respondents had the expertise to respond accurately, we focused the survey on senior managers as key informants with titles such as ‘Vice President,’ ‘Manager,’ or ‘Director’, and with functional areas of expertise such as ‘Operations’, ‘Production’, or ‘Manufacturing’. Key informants were used to obtain all of the data in this study for several reasons. First, operational performance data at the plant level are generally not available from any source other than company records, which are accessible only to key persons in the organization. Further, many of the survey questions required adequate knowledge of the firm’s eBusiness capability, information sharing in the supply chain, and performance. A limited number of individuals in the plant, rather than someone selected at random, would generally have adequate knowledge of, or access to, this information. Under these conditions, key informants are commonly used to obtain the necessary data and enhance the likelihood of valid and reliable data (Huber and Power, 1985). Although key informants can provide reliable data, inter-rater reliability cannot be assessed with a single informant per plant, consistent with prior research in operations management using key informants.
The data collection process resulted in 120 usable responses. After accounting for undelivered surveys and incomplete responses, we obtained a usable, or effective, response rate of 8.4%. A priori, a low response rate was anticipated due to the length of the survey and the fact that we focused on top-level managers (Pflughoeft et al., 2003). Nevertheless, the proportions among the SIC codes in the 120 usable responses were close to the proportions in the surveys that were mailed out. This sample size is in a range comparable to other respected empirical studies (Krause et al., 1998; Kulp et al., 2004). In general, the absolute size of the sample is more important than the proportion of the population sampled (Black, 1999; Fowler, 1993). Consequently, to ensure that our sample size was adequate, we used Verma and Goodale’s (1995) guidelines for the computation of statistical power; for a medium effect size with this sample, power exceeds 0.90 at a significance level of 0.05. The computation of statistical power yielded a value of 0.91.
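A power figure of this magnitude can be reproduced, approximately, with a standard normal-approximation calculation. The sketch below is our own illustration, not the authors' computation: it tests a population correlation of r = 0.3 (Cohen's conventional medium effect) against zero at alpha = 0.05 with n = 120, using Fisher's z transform.

```python
import math
from statistics import NormalDist

def power_correlation(r: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided test that a population correlation
    is zero, using Fisher's z transform: atanh(r) with SE = 1/sqrt(n - 3)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    effect = math.atanh(r) * math.sqrt(n - 3)
    return nd.cdf(effect - z_crit)

# Medium effect size (r = 0.3 in Cohen's conventions), n = 120 responses.
print(round(power_correlation(0.3, 120), 2))
```

The approximation lands near 0.92, in line with the 0.91 reported in the text; the small gap reflects the simplified normal approximation rather than the authors' exact procedure.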
### 3.3. Respondent profile
About 55% of the responses came from companies with fewer than 250 employees (small), 18% from companies with between 250 and 500 employees (medium), and the remaining 27% from large companies with more than 500 employees. In terms of the distribution of responses from the various industries, about 49.2% of the responses were from the automotive industry and the remaining 50.8% from the electronics and computer industry. The percentages of the industries within the sample distribution are very close to the percentages of the surveys that were originally mailed out. There was no statistical difference between the distribution of responses and the sample distribution at a 5% level of significance.
As a test of non-response bias, we compared the responses of early respondents with those collected by telephone solicitation from firms that did not respond in the first round. A statistical test of the comparison between the two groups for the constructs employed in this study yielded no significant difference.
### 3.4. Measures
#### 3.4.1. Construct validation
Appendix A provides the scale items and the exploratory factor analysis (EFA) factor loadings of the items utilized in this study.
3.4.1.1. eBusiness capabilities. Our construct for eBusiness capability has three categories, as shown in Table 1: EBUSCUST, EBUSPUR, and EBUSCOLL. We examined the statistical properties of the scales in several ways. First, we performed a principal component factor analysis to examine whether the underlying factor pattern for the eBusiness capabilities construct was consistent with the three categories mentioned above. The factor loadings using Varimax rotation are shown in Appendix A. All the items relating to eBusiness capabilities separate out into the three categories discussed above; within each category, the items load onto a single factor with factor loadings greater than 0.679, substantially above the minimum requirement of 0.40 (Nunnally and Bernstein, 1994; Gefen et al., 2000). There were also no cross-loadings of any item on more than one component. Table 2 presents the factor loadings of the eight items of eBusiness capabilities. The eigenvalues are 2.093, 1.953, and 1.372. An examination of the factor loadings, eigenvalues, and scree plots provided robust evidence of a three-factor solution to the eBusiness capabilities scale. Taken together, this is preliminary evidence of the convergent and divergent validity of the scales for eBusiness capabilities. The cumulative percentage of variance extracted by this factor structure was 67.8%.
We next examined the reliabilities of these scales as indicated by Cronbach’s alpha. The reliabilities for EBUSCUST, EBUSPUR, and EBUSCOLL were 0.72, 0.70, and 0.76, respectively, all at or above the recommended threshold of 0.70 (Nunnally, 1978).
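Cronbach's alpha can be computed directly from raw item scores as k/(k-1) times one minus the ratio of summed item variances to the variance of respondents' totals. The snippet below is a generic sketch with made-up data, not the study's instrument:

```python
from statistics import variance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    `items` is one list of scores per scale item (columns of the data matrix)."""
    k = len(items)
    n = len(items[0])
    item_var_sum = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical 3-item scale answered by four respondents; the items move in
# lockstep, so alpha reaches its maximum of 1.0 (within floating-point error).
scale = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(round(cronbach_alpha(scale), 6))
```

Less-than-perfectly correlated items pull alpha below 1; values of 0.70 and up are conventionally taken as acceptable reliability.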
3.4.1.2. Supplier integration. The items employed in the construct for supplier production information integration (SUPINTEG) are extracted from the literature as shown in Table 1. A factor analysis of the items measuring supplier integration revealed a single-factor solution with factor loadings in excess of 0.65 for each item. The percent of variance explained was 55.8%. Further, Cronbach’s alpha for the reliability of the scale was 0.80.
3.4.1.3. Customer integration. Similar to the construct for supplier integration, our construct for customer production information integration (CUSINTEG) taps into the information sharing that takes place between the firm and its customers. The items for this construct were derived from the literature as shown in Table 1. The items loaded onto a single factor with factor loadings in excess of 0.67, providing evidence of the unidimensionality of the scale. The percent of variance explained was 53.4%. Reliability of this scale as assessed by Cronbach’s alpha was 0.78.
3.4.1.4. Performance. Operational performance is typically assessed along the dimensions of cost, quality, flexibility, and delivery (Vickery et al., 1993; Miller and Roth, 1994; Devaraj et al., 2004). The cohesiveness of the items measuring operational performance was very high; all items loaded onto a single factor with factor loadings greater than 0.61. The percent of variance extracted was 57.3%. The reliability of the scale for performance was also very high (Cronbach’s alpha of 0.89). Thus, we treat operational performance as a unidimensional construct in our analyses.
3.4.1.5. Control variables. We controlled for two variables that can also conceivably affect performance—type of industry and size of the firm (Zhu et al., 2004; Rosenzweig et al., 2003; Zhu and Kraemer, 2002; Frohlich and Westbrook, 2002). Respondents to our survey were from the automotive and computers/electronics industries. Thus, we used an indicator variable to depict the industry classification in our estimation models. Further, we also had information on the size of the firm (small—less than 250 employees, medium—between 250 and 500 employees, and large—greater than 500 employees). We used two indicator variables to capture the effect of the firm size on performance.
Table 2
eBusiness capabilities—factor loadings
| Item   | Component 1 | Component 2 | Component 3 |
|--------|-------------|-------------|-------------|
| Item 1 | .154        | .816        | .013        |
| Item 2 | -.081       | .740        | .127        |
| Item 3 | .134        | .752        | .081        |
| Item 4 | .115        | .139        | .780        |
| Item 5 | .117        | -.035       | .839        |
| Item 6 | .679        | .136        | .050        |
| Item 7 | .888        | .006        | .103        |
| Item 8 | .847        | .063        | .153        |
#### 3.4.2. Confirmatory factor analysis
Structural equation modeling was used to perform a confirmatory factor analysis (CFA) for the measurement model. CFA allows for tests to be conducted for unidimensionality, convergent validity, and divergent validity of the scales employed in the study. Consistent with the extant literature on eBusiness technologies (Zhu and Kraemer, 2002), we use EFA to develop the constructs and CFA to confirm their properties.
3.4.2.1. Unidimensionality. Unidimensionality is the extent to which empirical measures (indicators) are strongly associated with each other and represent a single concept. Tests for unidimensionality indicated that the factor loadings associated with constructs were statistically significant ($p \leq 0.01$).
3.4.2.2. Convergent validity. Convergent validity is the extent to which varying approaches to construct measurement yield the same results (Campbell and Fiske, 1959). The value for the Bentler–Bonett coefficient $\Delta$ (Bentler and Bonett, 1980) was 0.91, indicating strong convergent validity.
3.4.2.3. Discriminant validity. Discriminant validity assesses the extent to which a concept and its indicators differ from another concept and its indicators (Bagozzi et al., 1991). To test discriminant validity, we compare two CFA models: one in which the correlation of a pair of constructs is constrained to equal 1.0 (model-a), and another in which the correlation is free to vary (model-b). The chi-square difference test checks the statistical significance of the statistic (chi-a minus chi-b) at $p < 0.01$ (Venkatraman, 1989). A statistically significant value of (chi-a minus chi-b), with a threshold of 32.0, demonstrates that the two constructs under consideration are distinct. This procedure is repeated for all pairs of scales in the instrument. For each pair of constructs, the chi-square test was statistically significant ($p \leq 0.01$), providing support for discriminant validity of constructs in all the measurement models.
3.4.2.4. Criterion validity. Criterion validity is the extent to which the items predict a set of criteria of interest. One way to assess this is to estimate the correlation between items used in the study and other measures that attempt to capture the same or similar criteria. As one example, we examined the correlation between the performance measures for delivery speed and delivery reliability and an objective measure of delivery performance (% products delivered on time) that was not used in this study. The correlation coefficients were 0.59 and 0.61 and highly significant ($p < 0.001$). Another check examined the correlation between production lead time performance and an objective measure of manufacturing lead time in days. Again, the correlation was highly significant, with a correlation coefficient of 0.27 and $p < 0.001$.
#### 3.4.3. Bias
One of the potential sources of bias in survey research is common method variance. One of the procedures used to test for evidence suggesting the presence, or absence, of common method bias in a data set is Harman’s one-factor test (Podsakoff and Organ, 1986). An exploratory factor analysis was performed on the variables of interest. If a single factor is obtained, or if one factor accounts for a majority of the covariance in the independent and criterion variables, then the threat of common method bias is high. We performed such a factor analysis by combining the independent and dependent variables and did not observe a single factor structure that explained significant covariance. This suggests that common method bias may not be a cause for concern in our sample.
Another source of potential bias is the use of subjective data. According to Miller et al. (1997), two conditions under which subjective data may be reliable and valid are: (a) the questions do not require recall from the distant past, and (b) informants are motivated to provide accurate information. We promised confidentiality of the data and highlighted the usefulness of the project. Further, we believe respondents would try to respond as accurately as possible since they were promised a benchmarking report after the data were collected. Therefore, we minimized distortions in the subjective data obtained from key informants.
Given that the constructs employed in the study demonstrated good statistical properties, we proceeded to test the hypothesized research model shown in Fig. 1.
## 4. Results
We present descriptive statistics and the correlation matrix in Table 3. The correlations and descriptive statistics refer to the average of the items reflecting each construct. Structural equation modeling (SEM) has been widely applied in the operations management, social sciences, and marketing literature. Researchers have recognized the advantages of SEM as a second-generation technique (Zhu and Kraemer, 2002; Pflughoeft et al., 2003; Carr and Pearson, 1999; Narasimhan
Table 3
Descriptive statistics and correlations
| | Mean | Standard deviation | 1 | 2 | 3 | 4 | 5 | 6 |
|-------|--------|--------------------|-------|-------|-------|-------|-------|-------|
| 1. EBUSCUST | 1.808 | 1.029 | 1.000 | | | | | |
| 2. EBUSPUR | 2.358 | 1.211 | 0.226** | 1.000 | | | | |
| 3. EBUSCOLL | 2.469 | 1.222 | 0.328*** | 0.273*** | 1.000 | | | |
| 4. SUPINTEG | 3.992 | 1.587 | 0.080 | 0.153* | 0.315*** | 1.000 | | |
| 5. CUSINTEG | 3.402 | 1.404 | 0.088 | 0.193** | 0.318*** | 0.247*** | 1.000 | |
| 6. PERFORM | 4.925 | 1.219 | −0.028 | −0.049 | 0.081 | 0.396*** | −0.049 | 1.000 |
* Significant at the 0.10 level.
** Significant at the 0.05 level.
*** Significant at the 0.01 level.
and Jayaram, 1998). The SEM consists of two parts—the measurement model and the structural model. The measurement model (discussed earlier) assesses the adequacy of the measures used for theoretical constructs employed in the study. The structural model, on the other hand, specifies the relationship between constructs. Effectively, the SEM methodology incorporates these two aspects to ascertain the fit between the variance–covariance matrix observed in the sample data and that implied by the theoretical or research model.
This fit is expressed using goodness-of-fit measures, and it is standard practice to report several of them. The measures we report are the goodness-of-fit index (GFI) (Bentler and Bonett, 1980), adjusted goodness-of-fit index (AGFI) (Bagozzi and Yi, 1988), comparative fit index (CFI) and normed fit index (NFI) (Bentler and Bonett, 1980), and the root mean square error of approximation (RMSEA) (Steiger, 1990). Values higher than 0.90 for GFI, CFI, and NFI are indicative of a good fit (Gefen et al., 2000), while AGFI values higher than 0.80 suggest a good fit of the hypothesized model. For RMSEA, a value less than 0.1 is considered a good fit, and a value less than 0.05 a very good fit of the data to the research model (Gefen et al., 2000; Steiger, 1990).
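For concreteness, the RMSEA point estimate can be computed from the model chi-square, its degrees of freedom, and the sample size. The values below are hypothetical, chosen only to illustrate the thresholds just described:

```python
import math

def rmsea(chi_sq: float, df: int, n: int) -> float:
    """Root mean square error of approximation (Steiger, 1990):
    sqrt(max(chi^2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Hypothetical fit: chi-square = 120 on 80 degrees of freedom, n = 120 firms.
value = rmsea(120.0, 80, 120)
print(f"RMSEA = {value:.3f}")  # below 0.1 -> good fit; below 0.05 -> very good
```

A model whose chi-square does not exceed its degrees of freedom yields an RMSEA of zero, the best possible value on this index.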
We implemented the structural equation modeling using LISREL 8.50. The fit indices for each model are listed in Table 4. Another advantage of the SEM methodology is that it enables us to test competing models. We tested the hypothesized model with another competing model. According to the discussion of the resource-based view in Section 2.4, it is plausible that firms with eBusiness capabilities might in general have better performance, regardless of the production information integration achieved by eBusiness. Thus, we tested a model with a direct impact of eBusiness capabilities on performance. We call this model the Direct Effect model.
The results for the hypothesized model indicate a good fit between the variance–covariance matrix of the data and that implied by the model. This is indicated by the measures of fit (GFI, AGFI, CFI, and NFI) all being above the threshold values for a good model. The results in Table 4 suggest that the hypothesized model fits better than the Direct Effect model. A more detailed examination of the additional paths between eBusiness capabilities and performance in the Direct Effect model indicates that none of the three dimensions of eBusiness capability were significantly associated with performance. Thus, we find evidence that there is no direct impact of eBusiness capabilities on performance. A firm with a high degree of eBusiness capability does not necessarily experience better performance. Our results indicate that the way in which firms leverage their eBusiness capabilities determines whether they also experience superior performance. This finding also speaks to the IT literature on the productivity paradox, that is, the lack of a direct impact of IT investment on performance (e.g., Hitt and Brynjolfsson, 1996; Porter, 2001; Vickery et al., 2003; Devaraj and Kohli, 2003).
Next, we examined the relationships between constructs hypothesized in the earlier section. Fig. 2 presents a graphical representation of the SEM models with their regression coefficients for each construct. We observe support for the general idea that eBusiness capabilities affect the extent to which firms integrate production information with their customers. Hypotheses 1b and 1c are supported by the data; EBUSPUR and EBUSCOLL positively influence the degree of customer integration. However, there is no support for Hypothesis 1a. Thus, we observe support for Hypothesis 1 along two of the three dimensions of eBusiness capability: purchasing and collaboration.
Similar to the above set of relationships, there is support for the general idea that eBusiness capabilities affect the extent to which firms integrate production information with their suppliers. Hypotheses 2b and 2c are supported by the data; EBUSPUR and EBUSCOLL positively influence the degree of supplier integration. However, similar to the results for customer integration, Hypothesis 2a was not supported. The underlying dimension of eBusiness capability that relates to customer online ordering capability was linked to neither customer integration nor supplier integration. We discuss these results in greater detail in Section 5.1.
The relationship between supplier production information integration and firm performance was highly significant, lending strong support to Hypothesis 4. However, an interesting and counter-intuitive result was that the relationship between customer integration and performance was not statistically significant (Hypothesis 3). While supplier production information integration positively affected performance, the same was not true for customer production information integration. We investigate this phenomenon in greater detail in Section 5.2.
Finally, we included indicator variables to control for the size of the company as well as the type of industry. The control variables were not statistically significant, but their inclusion lends credibility to our results: even after controlling for size and industry type, we still observe significant relationships between the constructs employed in the study.
## 5. Discussion
This study addresses two important questions about the role of information technology in supply chain management: (1) Do eBusiness technologies have a significant impact on supply chain performance? (2) Does production information integration constitute an important link in the pathway from eBusiness technology to supply chain performance? Based on our results, we can say that eBusiness technologies do impact production information integration in a supply chain. Further, supplier integration affects the operational performance of the firm. However, some results were surprising and need further analysis. In particular, we did not expect the EBUSCUST variable to drop out of the model. We also did not expect the lack of an association between customer integration and operational performance. Finally, because supplier integration had such an important effect on operational performance, we conducted an analysis to determine which performance measures were most affected by it. We discuss these results next.
### 5.1. eBusiness effects
Our analysis indicated that eBusiness capability, assessed along the dimensions of customer ordering capability, purchasing capability, and supply chain collaboration capability, has no direct effect on operational performance. This result emphasizes that having a capability is ineffective unless the firm also has the systems and processes in place to leverage it (Poirier and Quinn, 2003; Zhu and Kraemer, 2002). From a supply chain perspective, customer integration and supplier integration are the keys to achieving improved operational performance with eBusiness technologies. Both types of integration require methods, procedures, and processes that go far beyond the mere capability to interact with customers over the Internet. However, it was unexpected to see that customer ordering capability (EBUSCUST) was not associated with either of the integration constructs. There are two possible explanations. First, this may be a transient phenomenon: a large percentage of the firms represented by our sample may not yet have incorporated customer ordering capability. As firms mature in their use of the Internet, more of them may develop customer ordering capabilities, and, if so, that capability should support the firm’s production information integration efforts. Second, the target population represented by our sample may consist predominantly of firms that do not use (or need) customer ordering capability. Determining which, if either, of these explanations holds would be a useful direction for future research.
### 5.2. Production information integration
The lack of a relationship between customer integration and operational performance seems to defy intuition. Certainly, the practitioner literature would have us believe that receiving production information from a customer and collaborating on future schedules and vendor-managed inventories should have a positive effect on performance. Nonetheless, our results lead us to believe otherwise. Frohlich and Westbrook (2002) showed that firms integrating with both customers and suppliers perform better than firms integrating with only one or the other. That concept could also apply here; however, the twist in our study is that customer integration, as measured by specific information flows and collaborative efforts with customers, does not by itself relate to performance. While the independent impact of supplier integration might exist, what might be even more significant is the combined, or interactive, effect of supplier and customer integration. To examine this conjecture, we tested the statistical significance of the interaction term in the model. To estimate the interaction effects, we first employed the established practice of estimating an ordinary least squares (OLS) regression with the interaction term captured as the product of the two types of integration. Results of the OLS regression with performance as the dependent variable are shown in Table 5. As can be observed, the interactive effect of supplier and customer production integration is highly significant, lending support to the notion that firms exhibiting both forms of integration significantly outperform the others. Further evidence of the significance of the interaction term is the increase in $R$-square after its inclusion: the $R$-square value increased from 18.6% to 26.9%.
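The moderated regression described above reduces to fitting two OLS models, one with main effects only and one adding the product term, and comparing their $R$-squares. The sketch below uses synthetic data (not the study's) in which performance is driven by the supplier-customer product, so the interaction term visibly lifts $R$-square:

```python
def ols(X: list[list[float]], y: list[float]) -> tuple[list[float], float]:
    """Least squares via the normal equations (Gaussian elimination with
    partial pivoting); returns (coefficients, R-squared)."""
    n, p = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    M = [A[i] + [b[i]] for i in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, p):
            f = M[r][col] / M[col][col]
            for k in range(col, p + 1):
                M[r][k] -= f * M[col][k]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (M[i][p] - sum(M[i][j] * beta[j] for j in range(i + 1, p))) / M[i][i]
    y_bar = sum(y) / n
    sse = sum((y[r] - sum(X[r][j] * beta[j] for j in range(p))) ** 2 for r in range(n))
    sst = sum((v - y_bar) ** 2 for v in y)
    return beta, 1.0 - sse / sst

# Hypothetical firms where performance depends on the supplier x customer
# interaction rather than on either form of integration alone.
sup = [1, 2, 3, 4, 5, 6, 7, 8]
cus = [2, 1, 4, 3, 6, 5, 8, 7]
perf = [s * c for s, c in zip(sup, cus)]
X_main = [[1.0, s, c] for s, c in zip(sup, cus)]
X_full = [[1.0, s, c, s * c] for s, c in zip(sup, cus)]
_, r2_main = ols(X_main, perf)
_, r2_full = ols(X_full, perf)
print(f"R^2 main effects only: {r2_main:.3f}; with interaction: {r2_full:.3f}")
```

The jump in $R$-square when the product term enters mirrors, in exaggerated form, the 18.6% to 26.9% increase reported above.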
We assessed multicollinearity issues that could pose a threat to the validity of the analyses. An examination of the variance inflation factors (VIFs) indicated that all VIFs were less than 2, well below the threshold of 10 that is commonly viewed as indicative of multicollinearity. Further, we also examined Belsley–Kuh–Welsch indices (Belsley et al., 1980) to confirm that we did not have multicollinearity problems.
Finally, following Ping (1995), we estimated the interaction term within a structural equation modeling framework and again found statistical support for the significance of the interaction between customer and supplier information integration. In the interest of brevity, we do not report the details of this analysis.
Fig. 3 takes this analysis one step further. We categorized the firms into high or low customer
Table 5
Interactive effect of customer integration and supplier integration
| Independent variables | Dependent variable performance |
|----------------------------------------|-------------------------------|
| Interaction between supplier and customer | 0.29*** |
| Supplier | 0.39*** |
| Customer | −0.11 |
| Size 1 | 0.003 |
| Size 2 | −0.05 |
| Industry | −0.07 |
| $R$-square of model | 26.9% |
| $F$-statistic | 6.92*** |
*** Significant at the 0.01 level.
Fig. 3. Interaction effect on performance.
integration intensity and high or low supplier integration intensity depending on the firm’s scores relative to the median score for each construct. The operational performance score is the average of the firms in each quadrant.
Two groupings have statistically significant differences in their operational performance at the 0.05 level of significance: firms with high supplier integration and high customer integration (5.29) versus firms with low supplier integration and high customer integration (4.00); and firms with high supplier integration and low customer integration (4.96) versus firms with low supplier integration and high customer integration (4.00). Firms with low customer integration and high supplier integration do not experience operational performance significantly better than firms that have low customer integration and low supplier integration. Apparently, if customer integration is left at a low level, most of the gains in performance can be achieved with low levels of supplier integration. The largest benefit comes when both customer integration and supplier integration are at high levels. The synergistic effects of both forms of integration combine to produce the best outcome. However, why does a firm with low supplier integration and high customer integration have such poor performance? A possible answer resides in the anticipated benefits of the Internet capability, which has been shown to be a driver for supply chain integration (Frohlich and Westbrook, 2002). The level of customer expectation rises relative to performance once a firm has established a capability to interact with customers via the Internet. Indeed, this level of expectation could translate to the firm’s management as they evaluate their own operational performance; they expect more of the process because the customer expects more. The actual performance, nonetheless, could be limited because the firm has not established an adequate level of supplier integration. 
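The quadrant analysis behind Fig. 3 is a median split on each integration score followed by averaging performance within each cell. The sketch below uses hypothetical firm data, chosen to echo the pattern reported above rather than the actual sample:

```python
from statistics import median, mean

# Hypothetical firm-level scores: (supplier integration, customer integration,
# operational performance). Firms are split at the median of each integration
# score and performance is averaged within each quadrant.
firms = [
    (6.0, 6.2, 5.3), (5.5, 1.8, 5.0), (1.4, 5.9, 4.1), (1.2, 1.5, 4.6),
    (6.3, 5.8, 5.2), (5.8, 2.0, 4.9), (1.6, 6.1, 3.9), (1.0, 1.9, 4.7),
]
sup_med = median(f[0] for f in firms)
cus_med = median(f[1] for f in firms)

quadrants: dict[tuple[str, str], list[float]] = {}
for sup, cus, perf in firms:
    key = ("high" if sup > sup_med else "low", "high" if cus > cus_med else "low")
    quadrants.setdefault(key, []).append(perf)

for key, perfs in sorted(quadrants.items()):
    print(key, round(mean(perfs), 2))
```

In this toy data, as in Fig. 3, the high/high quadrant scores best and the low-supplier/high-customer quadrant worst.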
While we agree that both forms of production information integration are best, our data indicate that developing high levels of customer integration without first establishing supplier integration can result in lower levels of perceived operational performance. This finding is consistent with a recent study by Barua et al. (2004) who posit that the supplier-side “digitization”, the online transactions and information exchanges, serves as a prerequisite for digitization on the customer-side. Without increasing supplier-side digitization, a firm may over-promise customers and then fail to deliver.
5.3. Impact of supplier integration on performance
The structural equation analyses and the OLS regression analyses outlined above use an aggregate measure of performance as the dependent variable because the statistical properties of the performance scale support a unidimensional construct. From a managerial perspective, however, it would be useful to know which measures have the strongest relationship to supplier integration. To address this issue, we conducted a one-way ANOVA with the items comprising the performance scale as the dependent variables and supplier integration as the independent variable.
The results of the one-way ANOVA are shown in Table 6. As can be seen, all the dimensions considered in the performance scale are statistically significant (at the 0.05 level of significance). In decreasing order of significance, the dimensions that are related to supplier integration are delivery reliability, delivery speed, production costs, percent product returned by the customer, production lead time, flexibility of the process, percent defects during production, and inventory turns. That is, dimensions of performance related to aspects of delivery timing, cost, and quality problems discovered by the customer have a stronger relationship with supplier integration than the other dimensions.
Table 6. Results of the one-way ANOVA.

| Performance measure | $F$-statistic | Significance ($F$-statistic) |
|--------------------------------------------|---------------|------------------------------|
| Percent product returned by the customer | 2.851 | 0.000 |
| Percent defects during production | 2.465 | 0.001 |
| Delivery speed | 3.021 | 0.000 |
| Delivery reliability | 3.247 | 0.000 |
| Production costs | 2.881 | 0.000 |
| Production lead time | 2.713 | 0.000 |
| Inventory turns | 1.733 | 0.026 |
| The flexibility of the process | 2.538 | 0.000 |
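The one-way ANOVA underlying Table 6 can be sketched as follows, with the $F$-statistic computed by hand from between- and within-group sums of squares. The group means, standard deviation, and sample sizes are assumptions for demonstration, not the survey data.

```python
# Sketch of a one-way ANOVA: one performance item as the dependent variable,
# supplier-integration group (low / medium / high) as the factor. The data
# below are simulated for illustration.
import random
import statistics

random.seed(2)
groups = {
    "low":    [random.gauss(4.0, 0.8) for _ in range(40)],
    "medium": [random.gauss(4.6, 0.8) for _ in range(40)],
    "high":   [random.gauss(5.3, 0.8) for _ in range(40)],
}

def one_way_anova_f(samples):
    """Return the F-statistic for a one-way ANOVA over the given groups."""
    all_vals = [x for g in samples.values() for x in g]
    grand = statistics.mean(all_vals)
    k = len(samples)              # number of groups
    n = len(all_vals)             # total observations
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2
                     for g in samples.values())
    ss_within = sum((x - statistics.mean(g)) ** 2
                    for g in samples.values() for x in g)
    # F = mean square between / mean square within
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f(groups)
print(round(f_stat, 2))
```

The computed $F$ would then be compared against the $F(k-1, n-k)$ distribution to obtain the significance levels reported in Table 6.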
6. Contributions and future research
6.1. Key findings and managerial implications
The results of this study provide insights into the design of effective value chains. A key finding is that eBusiness capability supports customer and supplier integration efforts. Our eBusiness capability construct focused on specific supply chain technologies that supported customer ordering, purchasing, and collaboration between customers and suppliers. Firms should invest in eBusiness capabilities if they want to enhance their production information integration intensity. Nonetheless, firms should not try to justify investments in eBusiness technologies on the basis of their immediate impact on operational performance. To achieve operational performance advantages, the firm must also have the processes and procedures in place to use those capabilities.
A second key finding is that operational performance, which affects the customer’s perception of the quality of the business relationship with the firm, can best be improved by intensifying the production information integration with customers and suppliers. This implies that increased exchanges of information such as sales forecasts, master production schedules, and inventory levels should result in better operational performance. In addition, firms should increase their collaboration with customers and suppliers regarding production requirements and move more toward vendor-managed inventory arrangements.
A third key finding relates to the approach that should be used in designing an effective supply chain. Often there are budgetary considerations that prevent firms from doing everything at once. With regard to production information integration, our study indicates that it is best to develop supplier integration before customer integration, with the goal of having both in place for maximum operational performance. By first developing supplier integration, the firm establishes the capability to meet the performance expectations of its customers before engaging in customer integration.
Finally, while supplier integration positively affects all dimensions of operational performance, it appears to have its greatest impact on delivery timing, costs, and quality.
6.2. Research contributions
The significance of these findings resides in three research contributions of this study. First, the model we tested bridges the work of several other studies and verifies some general findings. Zhu and Kraemer (2002) found that aggregate measures of firm performance such as total sales and profit margins are too remote to be significantly associated with e-commerce capability. In our study, we also found that eBusiness capability is not directly associated with operational performance; however, its effect is mediated by production information integration, which is related to operational performance. Rosenzweig et al. (2003) showed that supply chain integration intensity is associated with competitive capabilities (which we call operational performance) and that competitive capabilities are associated with positive business performance. The present study has shown that supplier integration and customer integration have an interaction effect that is more highly associated with operational performance than either taken alone. Finally, Frohlich and Westbrook (2002) proposed four web-based supply chain integration strategies that incorporate different degrees of demand integration (which we call customer integration) and supply integration and showed that they are linked to operational performance. We have also shown that various combinations of production information integration lead to varying levels of performance. However, our study has separated the web capability from specific forms of production information integration to show the role that eBusiness capabilities play in the integration efforts. Consequently, a major contribution of the study is that it is the first study of supply chain integration that addresses the effects of eBusiness capability, production information integration, and operational performance in one theoretical framework.
Second, our findings indicated that eBusiness capabilities, by themselves, do not directly impact operational performance. Firms must develop a capability for customer and supplier integration to realize the benefits of the new eBusiness technologies. In addition, supplier integration should be at a high level before customer integration is developed if both capabilities cannot be developed simultaneously. Customer integration by itself does not directly affect operational performance and must be implemented with supplier integration to realize its full potential.
Finally, we developed new supply chain constructs for eBusiness capability, customer integration, and supplier integration. Following the suggestions of Rosenzweig et al. (2003), our measures include technology capabilities and specific tactics for supplier and customer integration that allow us to better capture the nature of those constructs and their effect on supply chain performance.
6.3. Limitations and future research
No empirical study is without limitations. To be fair, some of these limitations are true for cross-sectional survey research in general. First, while the hypothesized model represents our best predictions based on the arguments articulated in the section on hypothesis development, no causal claims can be made from our results due to the cross-sectional nature of the data employed in the study. Future studies might examine the relationships proposed here in greater detail and over time to provide support for a causal perspective. The second issue relates to our ability to generalize the results to other industries. Strictly speaking, we can generalize our results only to the automotive, computers, and electronics industries. In addition, our sample had 73% of firms in the small to medium (SME) size category. While our “size” control variable was not significant, it is conceivable that the results may be more reflective of SMEs. This may have had an influence on the results we had for the EBUSCUST construct, which we found to have no effect on the integration variables for the complete sample. Nonetheless, we believe there are interesting insights for industries that have similar demographics and supply chain and technology practices. A third limitation is that some of our results might be a function of the timing of the survey. Specifically, the lack of support for eBusiness technologies relating to customer ordering capabilities might be due to the vast majority of companies in our sample not having implemented that technology at this point in time. However, this might change in the years to come, which implies that the relationship between customer-oriented eBusiness technologies and production information integration, and consequently performance, might be different than what we observed. Again, we believe this offers an opportunity to examine relationships hypothesized in this study as these technologies mature and find greater acceptance and implementation in organizations.
The topic of supply chain integration is still in its infancy. This study included only first tier suppliers to the respondent firm. Future research should address the issue of how deep the integration should go. Information exchanges and collaborative efforts could extend to second and third tier suppliers; however, we do not know whether those efforts would result in significant differences in operational performance. Further, the role of manufacturing process capability remains to be explored. For example, it is conceivable that manufacturing process flexibility is another mediating variable for eBusiness capability. Research could also explore the differences between SMEs and large firms in the relationships between eBusiness capabilities and production information integration. There are many opportunities for future research in this area.
Appendix A. Appendix of survey items (with factor loadings)
A.1. eBusiness capabilities
Responses range from ‘Not Implemented’ to ‘Fully Implemented’.
The following items relate to my company’s eBusiness capabilities in general:
1. Allow customers to order products online (0.816).
2. Allow customers to configure or customize products online (0.740).
3. Allow customers to check the status of their orders online (0.752).
4. Find and select suppliers online for commodity components (0.780).
5. Purchasing materials through online auctions (0.839).
6. Support web-based EDI (0.679).
7. Enable collaboration with suppliers or customers on forecasting, scheduling, or replenishment online (0.888).
8. Support advanced planning and scheduling (APS) for optimizing supply chain performances (0.847).
A.2. Supplier production information integration
Responses ranged on a seven-point Likert scale from ‘Strongly Disagree’ to ‘Strongly Agree’:
1. My company provides the following information items to the supplier:
a. Sales forecast (0.651).
b. Master production schedule (0.791).
c. The inventory status (0.809).
2. My company collaborates with the supplier to jointly develop the net requirements of the component that the supplier will need to deliver (0.723).
3. My company authorizes the supplier to automatically replenish the inventory of the component (0.747).
A.3. Customer production information integration
Responses ranged on a seven-point Likert scale from ‘Strongly Disagree’ to ‘Strongly Agree’:
1. The customer provides the following information about its final product to my company (if the customer is a retailer, then the final product is the same as the product you supply):
a. Sales forecast (0.708).
b. Master production schedule (0.788).
c. The inventory status (0.761).
2. The customer collaborates with my company to jointly develop the net requirements of the product that my company supplies (0.714).
3. The customer authorizes my company to automatically replenish the inventory of the product my company supplies (0.678).
A.4. Operational performance
Please rate the performance of the Process along the following dimensions:
Responses for the above questions ranged on a seven-point Likert Scale from ‘Not Very Good’ – ‘Average’ – ‘Very Good’:
1. Percent product returned by the customer (0.619).
2. Percent defects during production (0.776).
3. Delivery speed (0.865).
4. Delivery reliability (0.839).
5. Production costs (0.718).
6. Production lead time (0.818).
7. Inventory turns (0.644).
8. The flexibility of the process to accommodate changes to shipping schedules within the effective lead time of the product without the use of safety stock (0.742).
References
Armistead, C.G., Mapes, J., 1993. The impact of supply chain integration on operating performance. Logistics Information Management 6 (4), 9–14.
Aviv, Y., 2001. The effect of collaborative forecasting on supply chain performance. Management Science 47 (10), 1326–1343.
Bagozzi, R.P., Yi, Y., 1988. On the evaluation of structural equation models. Journal of Academy of Marketing Science 16 (1), 74–94.
Bagozzi, R.P., Yi, Y., Phillips, L.W., 1991. Assessing construct validity in organizational research. Administrative Science Quarterly 36, 421–458.
Barney, J., 1991. Firm resources and sustained competitive advantages. Journal of Management 17 (1), 99–120.
Barua, A., Whinston, A.B., Yin, F., 2000. Value and productivity in the Internet economy. IEEE Computer 33 (5), 102–105.
Barua, A., Konana, P., Whinston, A.B., Yin, F., 2004. An empirical investigation of Net-enabled business value. MIS Quarterly 28 (4), 585–620.
Belsley, D.A., Kuh, E., Welsch, R.E., 1980. Regression Diagnostics. John Wiley & Sons, USA.
Bentler, P.M., Bonett, D.G., 1980. Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin 88, 588–606.
Black, T.J., 1999. Doing Quantitative Research in the Social Sciences. Sage, London.
Boone, T., Ganeshan, R., 2001. The effect of information technology on learning in professional service organizations. Journal of Operations Management 19 (4), 485–495.
Brynjolfsson, E., Yang, S., 1996. Information technology and productivity: a review of literature. Advances in Computers 43, 179–214.
Buzzell, R.D., Ortmeyer, G., 1995. Channel partnerships streamlining distribution. Sloan Management Review 36 (3), 85–96.
Cachon, G.P., Fisher, M., 2000. Supply chain management and the value of shared information. Management Science 46 (8), 1032–1048.
Cachon, G.P., Lariviere, M.A., 2001. Contracting to assure supply: how to share demand forecasts in a supply chain. Management Science 47 (5), 629–646.
Campbell, D.T., Fiske, D.W., 1959. Convergent and discriminant validity by the multitrait-multimethod matrix. Psychological Bulletin 56, 81–105.
Carr, A.S., Pearson, J.N., 1999. Strategically managed buyer–supplier relationships and performance outcomes. Journal of Operations Management 17 (5), 497–519.
Cetinkaya, S., Lee, C.Y., 2000. Stock replenishment and shipment scheduling for vendor-managed inventory systems. Management Science 46 (2), 217–232.
Chen, I., Paulraj, A., 2004. Towards a theory of supply chain management: the constructs and measurements. Journal of Operations Management 22 (2), 119–150.
D’Avanzo, R., von Lewinski, H., Van Wassenhove, L.N., 2003. The link between supply chain and financial performance. Supply Chain Management Review 40–47.
Devaraj, S., Kohli, R., 2003. Performance impacts of information technology: is actual usage the missing link? Management Science 49 (3), 273–289.
Devaraj, S., Hollingworth, D., Schroeder, R., 2004. Generic manufacturing strategies and plant performance. Journal of Operations Management 22 (3), 313–333.
Dell, M.S., 1999. Direct from Dell: Strategies that Revolutionized an Industry. Harper Collins, New York.
Dyer, J.H., 1996. Specialized supplier networks as a source of competitive advantage: evidence from the auto industry. Strategic Management Journal 17 (4), 271–291.
Dyer, J.H., 1997. Effective interfirm collaboration: how firms minimize transaction costs and maximize transaction value. Strategic Management Journal 18 (7), 535–556.
Dyer, J.H., Singh, H., 1998. The relational view: cooperative strategy and sources of interorganizational competitive advantage. Academy of Management Review 23 (4), 660–679.
Fowler, F.J., 1993. Survey Research Methods, 2nd. ed. Sage, London.
Frohlich, M.T., Westbrook, R., 2001. Arcs of integration: an international study of supply chain strategies. Journal of Operations Management 19 (2), 185–200.
Frohlich, M.T., Westbrook, R., 2002. Demand chain management in manufacturing and services: web-based integration, drivers and performance. Journal of Operations Management 20 (4), 729–745.
Gavirneni, S., Kapuscinski, R., Tayur, S., 1999. Value of information in capacitated supply chains. Management Science 45 (1), 16–24.
Gefen, D., Straub, D., Boudreau, M., 2000. Structural equation modeling and regression: guidelines for research practice. Communications of AIS 4 (7), 1–76.
Hill, C.A., Scudder, G.D., 2002. The use of electronic data interchange for supply chain coordination in the food industry. Journal of Operations Management 20 (4), 375–387.
Hitt, L.M., Brynjolfsson, E., 1996. Productivity, business profitability, and consumer surplus: three different measures of information technology value. MIS Quarterly 20 (2), 121–142.
Huber, G.P., Power, D.J., 1985. Retrospective reports of strategic-level managers: guidelines for increasing their accuracy. Strategic Management Journal 6, 171–180.
Johnson, J., 1999. Strategic integration in industrial distribution channels: managing the interfirm relationship as a strategic asset. Journal of the Academy of Marketing Science 27, 4–18.
Kauffman, R.J., Walden, E., 2001. Economics and electronic commerce: survey and directions for research. International Journal of Electronic Commerce 5 (4), 5–116.
Koloszyc, G., 1998. Retailers, suppliers push joint sales forecasting. Stores (June).
Krajewski, L., Wei, J., 2001. The value of production schedule integration in supply chains. Decision Sciences 32 (4), 601–634.
Krause, D.R., Handfield, R.B., Scannell, T.V., 1998. An empirical investigation of supplier development: reactive and strategic processes. Journal of Operations Management 17 (1), 39–58.
Kulp, S.C., Lee, H.L., Ofek, E., 2004. Manufacturer benefits from information integration with retail customers. Management Science 50 (4), 431–444.
Lancioni, R.A., Smith, M.F., Oliva, T.A., 2000. The role of the Internet in supply chain management. Industrial Marketing Management 29, 45–56.
Lederer, A.L., Mirchandani, D.A., Sims, K., 2001. The search for strategic advantage from the World Wide Web. International Journal of Electronic Commerce 5 (4), 117–133.
Lee, B., Barua, A., 1999. An integrated assessment of productivity and efficiency impacts of information technology investments: old data, new analysis and evidence. Journal of Productivity Analysis 12 (1), 21–43.
Lee, H.L., Padmanabhan, V., Whang, S., 1997. Information distortion in a supply chain: the bullwhip effect. Management Science 43 (4), 546–558.
Mata, F., Fuerst, W., Barney, J., 1995. Information technology and sustained competitive advantage: a resource-based analysis. MIS Quarterly 19 (4), 487–505.
Milgrom, P., Roberts, J., 1988. Communication and inventory as substitutes in organizing production. Scandinavian Journal of Economics 90 (3), 275–289.
Miller, C.C., Cardinal, L.B., Glick, W.H., 1997. Retrospective reports in organizational research: a reexamination of recent evidence. Academy of Management Journal 40, 189–204.
Miller, J.G., Roth, A.V., 1994. A taxonomy of manufacturing strategy. Management Science 40 (3), 285–304.
Mukhopadhyay, T., Kekre, S., 2002. Strategic and operational benefits of electronic integration in B2B procurement processes. Management Science 48 (10), 1301–1313.
Narasimhan, R., Jayaram, J., 1998. Causal linkages in supply chain management: an exploratory study of North American manufacturing firms. Decision Sciences 29 (3), 579–605.
Narasimhan, R., Das, A., 2001. The impact of purchasing integration and practices on manufacturing performance. Journal of Operations Management 19 (4), 593–609.
Nunnally, J.C., 1978. Psychometric Theory. 2nd ed. McGraw-Hill, New York.
Nunnally, J.C., Bernstein, I.H., 1994. Psychometric Theory, 3rd ed. McGraw-Hill, New York, NY.
Peteraf, M.A., 1993. The cornerstones of competitive advantage: a resource-based view. Strategic Management Journal 14 (3), 179–191.
Pflughoeft, K.A., Ramamurthy, K., Soofi, E.S., Yasai-Ardekani, M., Zahedi, F., 2003. Multiple conceptualizations of small business web use and benefit. Decision Sciences 34 (3), 467–512.
Ping, R.A., 1995. A parsimonious estimating technique for interaction and quadratic latent variables. Journal of Marketing Research 32, 336–347.
Podsakoff, P.M., Organ, D.W., 1986. Self reports in organizational research: problems and prospects. Journal of Management 12 (4), 531–544.
Poirier, C.C., Quinn, F.J., 2003. A survey of supply chain progress. Supply Chain Management Review 40–47.
Porter, M., 2001. Strategy and the Internet. Harvard Business Review 79 (3), 63–78.
Powell, W.W., Koput, K.W., Smith-Doerr, L., 1996. Inter-organizational collaboration and the locus of innovation: networks of learning in biotechnology. Administrative Science Quarterly 41, 116–145.
Radstaak, B.G., Ketelaar, M.H., 1998. Worldwide Logistics: The Future of Supply Chain Services. Holland International Distribution Council, The Hague, The Netherlands.
Ranganathan, C., Dhaliwal, J.S., Teo, T.S.H., 2004. Assimilation and diffusion of Web technologies in supply-chain management: an examination of key drivers and performance impacts. International Journal of Electronic Commerce 9 (1), 127–161.
Rosenzweig, E.D., Roth, A.V., Dean, J.W., 2003. The influence of an integration strategy on competitive capabilities and business performance: an exploratory study of consumer products manufacturers. Journal of Operations Management 21 (4), 437–456.
Schmenner, R.W., Swink, M.L., 1998. On theory in operations management. Journal of Operations Management 17 (2), 97–113.
Steiger, J.H., 1990. Structural model evaluation and modification: an interval estimation approach. Multivariate Behavioral Research 25, 173–180.
Straub, D., Klein, R., 2001. E-competitive transformations. Business Horizons 44 (3), 3–12.
Straub, D., Rai, A., Klein, R., 2004. Measuring firm performance at the network level: a nomology of the business impact of digital supply networks. Journal of Management Information Systems 21 (1), 83–114.
Teece, D.J., Pisano, G., Shuen, A., 1997. Dynamic capabilities and strategic management. Strategic Management Journal 18, 509–533.
Venkatraman, N., 1989. The concept of fit in strategy research: toward verbal and statistical correspondence. Academy of Management Review 14 (3), 423–444.
Verma, R., Goodale, J.C., 1995. Statistical power in operations management research. Journal of Operations Management 13 (2), 139–152.
Vickery, S.K., Droge, C., Markland, R.E., 1993. Production competence and business strategy: do they affect business performance? Decision Sciences 24 (2), 435–455.
Vickery, S.K., Jayaram, J., Droge, C., Calantone, R., 2003. The effect of an integrative supply chain strategy on customer service and financial performance: an analysis of direct vs. indirect relationships. Journal of Operations Management 21 (5), 523–539.
Wei, J., Krajewski, L., 2000. A model for comparing supply chain schedule integration approaches. International Journal of Production Research 38 (9), 2099–2123.
Wernerfelt, B., 1984. A resource-based view of the firm. Strategic Management Journal 5, 171–180.
Zhu, K., Kraemer, K.L., 2002. E-commerce metrics for net-enhanced organizations: assessing the value of e-commerce to firm performance in the manufacturing sector. Information Systems Research 13 (3), 275–295.
Zhu, K., Kraemer, K.L., Xu, S., Dedrick, J., 2004. Information technology payoff in e-business environments: an international perspective on value creation of e-business in the financial services industry. Journal of Management Information Systems 21 (1), 17–54.
Zhu, K., 2004. The complementarity of information technology infrastructure and E-commerce capability: a resource-based assessment of their business value. Journal of Management Information Systems 21 (1), 167–202.
RECORDING ENGINEER/PRODUCER
$4.00
October 1985
Volume 16 — Number 5
DIGITAL AUDIO FOR VIDEO — page 74
VIRTUAL CONSOLE DESIGNS — page 162
DIRECT METAL DISK MASTERING — page 58
PRODUCING AUDIO FOR • TAPE • RECORDS • FILM • LIVE PERFORMANCE • VIDEO & BROADCAST
The TS24 is the first in-line console from Soundcraft. And it represents a major breakthrough in inline technology, because it now makes the console far easier to understand and operate.
Believe us, this is no hollow promise. Our argument is built around two rock solid foundations. Firstly, a new concept in console layout so logical, engineers used to split or in-line consoles can start work from day one. And secondly, a set of master conditions so advanced they'll amaze you.
STATUS.
One touch of the status button will configure the whole console for each particular stage of recording, mixing, broadcasting and video post production without sacrificing any flexibility whatsoever. In other words, one touch and you're off and running.
NEW DESIGN.
Conventional in-line consoles suffer from the limitations of one long travel fader and one equaliser being shared by two signal paths, with the engineer fader-reversing and moving the equaliser back and forth throughout the recording, overdubbing and mixing process to optimise the situation.
The TS24 eliminates these shortcomings, thanks to its logical design. The long travel fader is in the section called MIX, which is the signal path for both monitoring and mixing. The equaliser moves between the MIX and CHANNEL signal paths automatically by use of the master status switches. 'Soft' switches may locally move EQ and AUX sends between the two signal paths but are also automatically reset.
When mixing, the Channel sections become available as additional inputs or effects sends without the limitations imposed by more conventional designs.
DROP-IN. BOUNCE.
Drop-ins are made easy by the use of the TAPE and GROUP button (T & G). Tape and Group enables you and the musician to monitor the original track and the overdub simultaneously.
The Bounce button facility enables you to take any combination of channels with their fader and pan settings directly to the routing matrix giving you instant bounce down.
SOUND AND VISION.
To create perfect sound, you also need perfect vision. With the TS24, that's exactly what you get. Separate scribble strips are provided instead of the usual confusing double one, and the Mix and Channel controls are in clearly defined areas for easier use.
AUTOMATION.
Soundcraft have developed a unique interface to the disc based MASTER MIX automation system, which enhances its operational flexibility by totally integrating the full extent of the console muting.
One feature of this system enables you to bypass the Channel VCAs, thereby optimising the original recording quality.
Surprisingly enough, all this practical technology, combined with sleek good looks doesn’t carry a huge price tag. So our doors are open to practically everybody.
Which only leaves us with one thing to say: if you want to keep your finger on the button in the most up-to-date mixing console design available, contact us.
**Soundcraft TS24**
*Soundcraft Electronics Ltd., 5-8 Great Sutton St. London EC1V OBX.*
*Tel: 01-253 6988. Telex 21198 SCRAFT G.*
*Soundcraft Electronics USA, 1517 20th. St. Santa Monica, California 90404. Tel: (213) 453 4591. Telex: 664923.*
*Soundcraft Canada Inc. 1444 Hymus Blvd., Dorval, Quebec, Canada H9P 1J6. Tel: (514) 685 1610. Telex: 05 822582.*
— October 1985 Contents —
— Production Viewpoint —
Building a reputation as one of the most prolific producer/musicians working in the studio today... Nile Rodgers... enjoying success with Madonna, David Bowie, Jeff Beck, Mick Jagger, Duran Duran and, most recently, with the Thompson Twins.
Interviewed by Mel Lambert
page 40
— After The Mix —
DIRECT METAL MASTERING
Improving the Quality of Disk Cutting and Album Manufacture
by Rob Tuffly
page 58
— Recording Techniques —
DIRECT TO TWO TRACK
Capturing the High Energy of a Direct-to-Digital Session
by Jeffrey Weber
page 74
— Live-Performance Sound —
SERVO-DRIVEN LOUDSPEAKERS
Innovative Sound Companies Assimilate a New Bass-Driver Technology
by David Scheirman
page 85
— Musical Creativity —
SYNTHESIZERS IN THE STUDIO
Part Two: Studio and Live-Performance Applications of Digital Synthesizers with Sampling, Timecode and MIDI Control Capabilities
by Quint B. Randle
page 96
— Film Sound —
DIGITAL SOUND FOR MOTION PICTURES
Including Production Recording and Random-Access Editing
by Larry Blake
page 122
— Multimedia Production —
AUDIO/VIDEO RECORDING OF “MISTER DRUMS”:
BUDDY RICH AND HIS BAND LIVE ON KING STREET
Utilizing the SQ/Tate Two-Channel Surround-Sound System
by Paul Broucek
page 138
— Computers In The Studio —
MIDI RECORDERS: MYTH OR REALITY?
An R-e/p Guide to MIDI Data Recorder and Sequencer Software
by Stephen St. Croix
page 149
— Emerging Technology —
THE VIRTUAL CONSOLE
Digital Control of Analog or Digital Signal Electronics to Provide Enhanced Operational Flexibility and Dynamic Recall Capabilities
by Ralph Jones
page 162
— The Directory —
Dynamics Control, Noise Reduction and Effects Processors
page 176
— In-Use Operational Assessments —
LEXICON 224XL DIGITAL REVERB
Reviewed by Bob Hodas
page 186
KURZWEIL 250 DIGITAL SYNTHESIZER
Reviewed by Bobby Nathan
page 194
SANKEN CU-41 CONDENSER MICROPHONE
Reviewed by Lowell Cross
page 200
dbx Model 166 STEREO COMPRESSOR/LIMITER
Reviewed by Roman Olearczuk
page 204
— Departments —
☐ Letters — page 8 ☐ Exposing Audio Mythology, by John H. Roberts — page 12
☐ News and Industry Developments — page 23
☐ A Personal View: Let’s Not Forget the Sound, by Murray R. Allen — page 24
☐ Seminar Meeting Report: “Applications For Better Quality Cassettes,” by Mel Lambert and Rhonda Kohler — page 32
☐ Studio Update — page 108 ☐ Final Stage — page 114
☐ New Products — page 210 ☐ Classified — page 222
☐ People On The Move — page 226 ☐ Advertiser’s Index — page 226
The art of engineering is serious business.
You made the decision to engineer audio because you care for the art of music and sound. The realities are that you also need to operate as a business. The audio console that you choose for your creative fulfillment is the most expensive piece of capital equipment in your facility. Serious business.
We also consider the art of engineering a serious business. We have devoted 12 years to engineering and building truly the finest audio consoles. Year after year, we have reinvested our profit back into our design, manufacturing and distribution facilities. The result is that TAC and AMEK together are among the world's largest professional audio console companies. Serious business.
The SCORPION is one of the highlights of the TAC line. With 9 different modules, 2 mainframe configurations, 5 meter packages and 8 or 16 buses, the SCORPION can be configured to suit any professional application. Of course, each SCORPION comes standard with the EQ sound and chassis design that has made TAC/AMEK world renowned. Affordable value is serious business.
We ask only that you look deeper than just an ad, a brochure, or a sales presentation before you make your next major capital investment. At TAC, we treat the art of engineering as a serious business.
AMEK CONSOLES INC., 10815 Burbank Blvd., North Hollywood, CA 91601 Tel 818-508-9788 Telex 662526 AMEK USA
TOTAL AUDIO CONCEPTS LTD., Islington Mill, James Street, Salford M3 5HW England Tel 061-834-6747 Telex 668127 AMEK G
October 1985 □ R-e p.5
See the first console specifically built for 64 track digital recording at the New York AES
Designed for the world's largest and most sophisticated recording studios, the SUPERSTAR is a 20-bit analog console with the performance, specifications, and functions necessary for digital recording. The SUPERSTAR is totally modular and totally expandable, and features 64 mixing busses for recording to two 32-track tape recorders.
DESIGNED FOR DIGITAL
Through critical analysis of design, testing and re-testing of components, the signal path and sound quality of this console is optimized for digital recording. Quad Eight, as a part of the Mitsubishi Pro Audio Group, developed this console as the perfect companion to digital multitracks such as Mitsubishi's new X-850 32-channel recorder.
64 MIXING BUSSES
The SUPERSTAR has 64 mixing busses controlled from a central assign panel and readout. The 72 by 64 output matrix uses logic-controlled summing bus switching, providing 64 instantly selectable output busses. Using its own memory for five complete presets, it also allows automation control via a serial communication port.
COMPUMIX IV AUTOMATION
A 32-bit master processing computer records data on an 80 megabyte Winchester hard disk in real time for unprecedented accuracy in an automation system. This fourth-generation design stores four instantly accessible real time mixes plus eight compressed mixes on the hard disk simultaneously, and transfers compressed mixes to and from floppy disk. A distributed multi-processing system, Compumix IV has individual computers handling dedicated functions at different levels of the system architecture.
INTELLIGENT DIGITAL FADER
With its own microprocessor, the IDF can operate standing alone or coupled to the automation system. Using a monolithic direct digital 8-bit encoder/fader and a membrane touch panel feeding the 10-bit internal processor, exact dB values are calculated using 14-bit arithmetic, displayed, and converted to DC using a 12-bit D/A. All functions run at 10 times the scanning rate, providing 1/10-frame mute accuracy and a fader-smoothing algorithm. There are 16 nested groups, and any module can be assigned as master without changing its individual function.
The VCA circuitry is on a separate PC card that plugs onto the main module PC board. Different VCAs may be easily substituted.
PLUG-IN EQUALIZER
Finally, there's a choice! The SUPERSTAR equalizer plugs in on each input module. Normally delivered with a four-band parametric equalizer with variable frequency, bandwidth, and peak/dip level; others are available. Each module also has a variable concentric high pass, low pass filter with individual in out buttons.
AUTOMATED EQUALIZER
Each channel module has been designed to accept an automated equalizer, making the SUPERSTAR the most advanced console available.
PLUG-IN PREAMPLIFIER
Each module's microphone preamplifier is also of top panel plug-in design. Transformers—or transformerless differential, the choice is yours. And new technology can be instantly added to your console.
FOR WORLD-CLASS STUDIOS
The SuperStar
by quad eight
MODULE FEATURES
Each module is a dual in-line design with separate channels for recording and monitor/mixdown. Main fader (or VCA), equalizer, filter, auxiliary sends, and line trim can be switched to either channel. Each input module has eight auxiliary sends configured as four monaural and two stereo sends, with panning. They are switchable as pairs to either recording channel or monitor. Monitor/mixdown channel is selectable to two stereo outputs for simultaneously making two different mixes. All output busses are differential balanced with optional transformers. For added overall control, each module has a switch (AGM) which allows it to become an audio sub-master for a group of input modules. A signal presence/peak dual LED circuit on each module indicates peak overload at microphone preamplifier out, or equalizer out, or fader out. Unique circuitry allows all to be connected to the indicator with only the peak signal shown, without addition from the other samples.
BAR GRAPH METER
Above each module is a 60-segment LED vertical bar level meter. The metering system is switchable to VU or peak ballistics with changeable electroluminescent scales for each, VCA level indication, or two sets of spectrum analyzers in 1/3 octave increments.
TOTALLY MODULAR FRAME
The SUPERSTAR console is constructed of individual housing sections of eight modules each. The console is not limited to just a few standard frame sizes, but may be ordered with any number of inputs. Interwiring of console sections and input/output connections is all with shielded plug-in ribbon cable. High quality bantam jacks are on PC boards, arranged module by module, and plug into the mother boards by shielded ribbon cable. This feature, along with the modular frame, makes this the only truly field-expandable console.
OPTIONAL OVERBRIDGE
An overbridge is available for mounting above the primary meter bridge to house additional accessories.
LIMITER / COMPRESSOR / GATE
This is a plug-in option for the meter overbridge. It is wired directly in-line with each channel, or as a peripheral patchable processor. More than just an accessory to the module, it is a full-function studio-quality leveling amplifier.
AFFORDABLE DIGITAL
The SUPERSTAR costs less than other world-class consoles. And a digital package with a Mitsubishi multitrack can save you even more.
NEVE, SSL, SUPERSTAR.
See them all before you decide.
MITSUBISHI PRO AUDIO GROUP
DIGITAL ENTERTAINMENT CORPORATION
Headquarters: 225 Parkside Drive, San Fernando, CA 91340 • Phone (818) 898-2341 • Telex 311786
New York: Suite 1530, 555 W. 57th Street, New York, NY 10019 • Phone (212) 713-1600 • Telex 703547
Nashville: 2200 Hillsboro Road, Nashville, TN 37212 • Phone (615) 298-0613
Canada: 363 Adelaide Street E., Toronto, ONT. M5A 1N3 • Phone (416) 865-1899
United Kingdom: 1 Fairway Drive, Greenford, MIDDX UB6 8PW • Phone (01) 578-0957 • Telex 923003
For additional information circle #103
SONY APR-5002 REVIEW
from: Takeshi Yazawa and Hiro Konno
Professional Audio Division,
Sony Corporation of America
As indicated in the review article by Peter Butt [published in the August issue], there are some discrepancies between our published performance data and his data as measured.
As Mr. Butt's measurement method could not possibly have been the same as ours, we felt it was appropriate to describe the measurement method that we use on all of our tape-recorder products. It breaks down into two categories, described in detail as follows.
A: Signal to Noise Measurements
Using an AC voltmeter w/dB scaling for appropriate sensitivity and range, the machine is measured for the above. We use a reference fluxivity of 510 nWb/m, with a shorted Audio input, and bulk-erased audio tape for Signal-to-Noise measurements of Record Input to Playback Output. This produces a measurement of both the Record and Playback Signal processing systems. The resultant of this measurement can be expressed in weighted, unweighted, and dB(A) scales.
B: Frequency Response Measurements
Assuming that the machine has been properly aligned for flat-frequency versus amplitude response using an approved reproducer alignment tape, and has been calibrated for proper bias, Record level and equalization for the tape being used, the machine can be measured for overall frequency response — Record/Play. A precision sinewave oscillator is connected to the line input or calibration input connector and an AC voltmeter w/dB scaling is connected to the line or calibration output connector. At constant amplitude output, the oscillator is swept from approximately 10 Hz to the upper frequency limit of the device. These upper and lower frequency limits are defined as the 3-dB down points in measured outputs. Throughout this sweeping upward, the operator denotes the amplitude response variations (if any). Upon completion, the results are mapped out on a frequency versus amplitude scale and the appropriate standard deviation is noted ±. This can then be verified against a reference specification. The resultant is called the frequency response.
Peter Butt replies:
Sony/MCI Product Management is correct on points A and B.
A. The (S+N)/N figures given in the tabulation at the head of the article were measured using biased tape that had not been bulk erased, and then passed over the heads in Record mode at the speed of interest, inputs shorted. It was then replayed by the reproduce head at the speed of interest. The reproducer had previously been calibrated and equalized for a reference fluxivity of 260 nW/m at 1 kHz for each of the relevant speeds. Because $20 \times \log(510/260) = +8.81$ dB, this figure was added to the +4 dBv meter calibration to yield a reference correction factor of +12.81 dB. The indicating instrument is a Hewlett-Packard 334A Distortion analyzer, Option H05. The average-responding 334A meter 30-kHz LPF noise reading was corrected to RMS by adding the standard 1.11 dB. The 30-kHz meter bandwidth was corrected by subtracting $20 \times \log(30/20) = 3.52$ dB from the reading.
The reported (Signal + Noise)/Noise data was derived as follows:
\[ S = (8.81 - E_m + 1.11 - 3.52) \text{ dB} \]
20-kHz LPF unweighted.
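The correction arithmetic Butt describes can be sketched in a few lines of Python; the helper names are mine, and the constants are simply the letter's stated values:

```python
import math

def ratio_db(ratio):
    """A voltage ratio expressed in decibels."""
    return 20 * math.log10(ratio)

# Constants as stated in the letter.
flux_correction = 8.81                # fluxivity reference correction, dB
meter_cal = 4.0                       # +4 dBv meter calibration
avg_to_rms = 1.11                     # average-responding meter noise reading -> RMS
bandwidth_corr = ratio_db(30 / 20)    # 30-kHz meter bandwidth referred to 20 kHz

# Combined reference correction factor applied to the meter reading.
reference_correction = flux_correction + meter_cal
print(round(bandwidth_corr, 2), round(reference_correction, 2))
```

Corrections expressed in dB simply add, which is why the whole chain reduces to the single bracketed expression above.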
B. Frequency response tolerance data was determined by first aligning, biasing, and equalizing the machine for the tape used at each speed of interest for the reproducer characteristics required. Flux reference levels were set to 260 nW/m at 1 kHz for each case. All equalizations were set using CW [continuous wave] sinewave signals at: 1 kHz, 10 kHz, and 50 Hz. Deviations were checked with a sine sweep from less than 20 Hz to a frequency greater than the high-frequency 3 dB break at the high-end of each track/speed response. The 7.5 ips response data was equalized similarly at a signal level 10 dB below the flux reference.
These preparations having been completed, the record/reproduce/sync frequency response of each was taken by recording a squarewave having a frequency of 1.953125 Hz for the frequency band below 200 Hz, and 195.3125 Hz for the band above 200 Hz. These signals were then reproduced in the appropriate modes and digitized in the time domain. The time-domain data was then operated upon to produce the Discrete Fourier Transform, and then deconvoluted by the digitized, DFT'd time-domain data of the generator squarewave output waveform. This operation yields the transfer function of the transmission device responsible for the observed changes in the time-domain output response, as compared with the input-signal time-domain response.
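The squarewave/DFT deconvolution Butt describes can be sketched in Python with NumPy. A one-pole low-pass stands in for the actual record/reproduce chain, and all names and parameter values are illustrative, not his:

```python
import numpy as np

fs = 50_000                   # sample rate for the digitized waveforms
n = 4096                      # exactly 16 whole periods of the test squarewave
t = np.arange(n) / fs
f0 = 195.3125                 # squarewave fundamental for the upper band
x = np.sign(np.sin(2 * np.pi * f0 * t))   # generator output, digitized

# Stand-in "transmission device": a one-pole low-pass filter.
a = np.exp(-2 * np.pi * 15_000 / fs)
y = np.empty_like(x)
acc = 0.0
for i, s in enumerate(x):
    acc = (1.0 - a) * s + a * acc
    y[i] = acc

# DFT both records, then "deconvolute" (divide) to recover the transfer
# function -- defined only where the squarewave has energy, i.e. at the
# fundamental and its odd harmonics (the "dense spectrum").
X, Y = np.fft.rfft(x), np.fft.rfft(y)
mask = np.abs(X) > 1e-6 * np.abs(X).max()
H = np.zeros_like(X)
H[mask] = Y[mask] / X[mask]
freqs = np.fft.rfftfreq(n, 1 / fs)    # bin -> Hz mapping for plotting
```

With coherent sampling (the fundamental lands exactly on bin 16), |H| at the fundamental comes out near unity and falls off at the higher odd harmonics, mirroring the magnitude plot Butt describes.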
The resulting data is then plotted in such a way as to show the two bands of... continued on page 12 —
KILLER PERFORMANCE IN A SMALL PACKAGE
Crown, a name long synonymous with advanced technologies, restates their position in the industry with the Micro-Tech 1000™, a unique meld of peak performance and dependability.
The level of precision the Micro-Tech 1000 offers is enhanced by two Crown Patents, the Grounded Bridge Circuit and ODEP (Output Device Emulator Protection). Together, this new technology brings the heritage of the Crown DC300 Series to a smaller, lighter and more powerful package than Crown has ever produced.
Expanding on the tradition of peerless performance Crown established over 33 years ago, the Micro-Tech Series will soon become the standard by which all others are measured. Small, powerful and dependable — a "Killer" combination.
Crown
1718 W. Mishawaka Rd.
Elkhart, IN 46517
(219) 294-8000
For additional information circle #105
Introducing the Tapeless
Multi-track Overdubbing with Synclavier's New Direct-to-Disk™ Recording System
During the AES Convention from October 13-16 in New York, New England Digital will debut the latest Synclavier enhancement, the Direct-to-Disk multi-track recording system.
Used in conjunction with the state-of-the-art Synclavier, the combined system will offer a stand-alone "tapeless" digital recording and post-production environment. In other words, the capability of a $1,000,000 studio at a fraction of the cost, and it's all digital!
The system to be premiered will feature four tracks of recording to disk at 50kHz, with storage capacity up to 500 megabytes. The production model will support up to sixteen tracks and offer the capability of 100kHz sampling or stereo sampling at 50kHz.
The Direct-to-Disk system will be controlled from the Synclavier's 32-track digital recorder. Complete vocal and instrumental tracks, recorded continuously, can now be part of any Synclavier recording. For example, a user could incorporate a recording of an acoustic instrument or vocal track(s) into their existing Synclavier polyphonic sampled, synthesized, or MIDI tracks.
Be sure not to miss this revolutionary advancement!
The Midi System that Works!
Along with all the amazing capabilities of the Synclavier system, you can add the flexibility of MIDI. Now it is possible to incorporate the dynamics and timbres of your favorite keyboard with the Synclavier memory recorder. For example, record your keyboard into the Synclavier's 32-track digital memory recorder or trigger the timbres of the Synclavier from one of your favorite performance instruments.
The standard Synclavier MIDI system features one channel in/four out. The standard system can be expanded to 8 inputs/32 separate MIDI outputs. Plus, the new Synclavier software gives the ability to slide tracks forward or backward in time, eliminating the MIDI related delays which have plagued most MIDI systems.
Other Fantastic Synclavier Features
Besides these great new additions to the Synclavier system, the system also offers:
**Polyphonic Sampling**
(16-Bit/50-100 kHz) Up to 32 voices and 32 megabytes of R.A.M.
**Multi-Channel Outputs**
**SMPTE**
**Velocity/Pressure Keyboard**
**Automated Music Printing**
**Guitar Interface**
Instructional Video Cassettes
If you're interested in relaxing at home and learning the basics of the Synclavier system, you can now purchase three video cassettes which guide you through its basic features and operations. Send your check for $175 per set (not sold separately) plus postage and handling. Complete printed documentation is also available for $200 per set.
For more information or a personal demonstration, please call New England Digital or one of our authorized distributors:
**New England Digital**
White River Jct, VT 802/295-5800
**Los Angeles**
New England Digital 213/651-4016
**New York**
Digital Sound Inc. 212/977-4510
Synclavier is a registered trademark of N.E.D. Corp.
Direct-to-Disk is a trademark of N.E.D. Corp.
Copyright 1985 New England Digital
See us at the A.E.S!
New York Hilton October 13 - 16
Synclavier operator captures continuous live vocal overdub
For additional information circle #106
LETTERS
— continued from page 8 —
data as a continuous transfer function. The frequency response data are then reported from inspection of the plotted magnitude data to the tolerances indicated by the manufacturer (±2 dB).
The variance in magnitude response indicated for the test results is due to the energy of the squarewave signal occurring at odd multiples of the fundamental frequency. For convenience, I choose to call data obtained by the above method the Dense Spectrum response, as it contains many more frequency components than does the single- or swept-frequency sinusoidal magnitude response. High-frequency response is shown more pessimistically this way, as the squarewave harmonics act to contribute to their own recording bias signal in a way proportional to their respective magnitude and frequency. I feel that this approach better represents the system response for the case of linearized analog magnetic record/reproduce systems than does the pure sine-wave method, as pure sinewaves rarely resemble the program signals commonly encountered. I grant that this is a more severe representation of the record/play magnitude frequency response, but I do think it is more representative of common application.
I have begun to suspect that reality rarely has the charm of fantasy. ■■■
EXPOSING AUDIO MYTHOLOGY
Laying to Rest Some of the Pro-Audio Industry’s More Obvious “Old Wives’ Tales”
by John H. Roberts
In this month's column I'd like to address some of the subtleties of balancing signals for transmission across the room, and across the country.
A popular misconception is that transformers are the *only* way to truly balance a line. Not only is this untrue, but many transformer-coupled lines are imbalanced by incorrect termination, or floated from ground intentionally.
Before we get into a strict definition of what is and isn’t balanced, let’s back up a minute and look at *why* one would want to balance a line. The basic goal is to transmit a signal from point A to point B with maximum fidelity.
There are many things that can happen between here and there to degrade your signal, such as: signal losses in the line; crosstalk between the channels; interference from outside noise sources; and ground-potential differences.
Signals within a given piece of equipment are usually routed around in a single-ended configuration, based upon the assumption that ground potential at all points within that box will be virtually identical. But for signals sent over any distance (like the inside of a large recording console), we must consider differences in ground potential. The most popular, and cheapest, way to correct for ground-potential errors is to use a differential summing amplifier (Figure 1).
\[
V_l = V_s + V_g \times \frac{R_2}{R_1 + R_2} \times \left(1 + \frac{R_4}{R_3}\right) + V_g \times \left(-\frac{R_4}{R_3}\right)
\]
For \( R_1 = R_2 = R_3 = R_4 \),
\[
V_l = V_s + V_g (0.5 \times 2) + (-1 \times V_g) = V_s
\]
i.e., \( V_{\text{load}} = V_{\text{source}} \).
The ability of the circuit shown in Figure 1 to reject this common-mode ground potential is directly related to the matching of \( R_1 \) to \( R_3 \), and \( R_2 \) to \( R_4 \). Typically, 1% resistors will be used in such circuits, with critical applications being trimmed. The ratio of \( R_{1,3} \) to \( R_{2,4} \) need not be unity, allowing this topology to deliver boost or cut and still provide substantial attenuation of common-mode signals. The ability to reject common-mode signals is called Common-Mode Rejection Ratio (CMRR) and, as you can probably guess, is measured in decibels.
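One way to see how resistor matching sets the CMRR is to evaluate the Figure 1 transfer function at both a common-mode and a differential input. The helper names and the 1%-mismatch example values below are mine, for illustration only:

```python
import math

def diff_amp_out(v_pos, v_neg, r1, r2, r3, r4):
    """Output of the four-resistor differential stage of Figure 1:
    v_pos drives the non-inverting divider (R1/R2), v_neg the
    inverting leg (R3/R4)."""
    return v_pos * (r2 / (r1 + r2)) * (1 + r4 / r3) - v_neg * (r4 / r3)

def cmrr_db(r1, r2, r3, r4):
    a_cm = abs(diff_amp_out(1.0, 1.0, r1, r2, r3, r4))    # common-mode gain
    a_dm = abs(diff_amp_out(0.5, -0.5, r1, r2, r3, r4))   # differential gain
    return 20 * math.log10(a_dm / a_cm)

# One 10k resistor off by 1% (the worst case for 1% parts) leaves
# roughly 46 dB of common-mode rejection.
print(round(cmrr_db(10_000, 10_000, 10_000, 10_100), 1))
```

Perfectly matched resistors drive the common-mode gain to zero, which is why critical applications get trimmed rather than relying on 1% tolerance alone.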
A second, somewhat more expensive, way of isolating ground-potential differences is to use transformer coupling (Figure 2). Since the signal is transferred from the transformer's primary to secondary winding as a magnetic flux, the secondary voltage is floating, and can be tied to any reference. This characteristic makes the transformers useful in isolating large potential differences, and even allows some simple
---
**Figure 1: Typical System Interface**
Vs - Source Voltage
Vl - Load or Output Voltage
Vg - Differential Voltage between the source and load ground.
---
**WHEN YOU NEED TO RENT...**
Digital Reverbs / Delays / Recorders / Vintage Mics, etc....
**WE DELIVER!**
ANYTIME...ANY PLACE...
---
**CALL (800) 446-FAST**
(OFFICE CALIFORNIA)
(213) 664-FAST (818) 952-FAST (714) 662-FAST
JOHN MOLINO—General Manager MICHAEL MAY—Operations Manager
DIGITAL DISPATCH 3912 Riverside Drive, Suite 101, Burbank, CA 91505
For additional information circle #108
with a view.
We'd like to open your eyes to the incredible REV-1 digital reverb. Because it gives you unheard-of control over virtually all reverb parameters. And something that has never been seen in any type of reverb: the capability to "look" at the sound as well as hear it.
The remote unit that controls the nineteen-inch rack-mountable unit has a lighted high-resolution LCD display that graphically depicts the results of the adjustments you make.
So getting just the right reverb sound is no longer a question of trial and error.
The logical grouping of the parameter controls on the remote also makes it easy to create any effect you like. Then store it in any of 60 memories for instant recall.
The remote also contains 9 additional RAMs so you can store programs and carry them with you to use anywhere there's an REV-1.
And there are 30 additional ROMs with factory preset sounds. Many of which can be completely edited (as can the user-programmed sounds) by using the LEDs to tell you the set value or indicate in which direction to move the control so you can easily and precisely match the value of the originally programmed sound.
And the sound itself is far superior to any other digital reverb. The REV-1 uses specially developed Yamaha LSIs to create up to 40 early reflections and up to 99.9 seconds of subsequent reverberation. So the effect can be as natural (or unnatural) as you want it to be.
We could go on about the REV-1. Tell you about its 44.1 kHz sampling rate that provides a full 18 kHz bandwidth to prevent the natural frequency content of the input signal from being degraded.
How it has a dynamic range of more than 90 dB for the delay circuitry and more than 85 dB for the reverb circuitry.
But why not take a closer look at the REV-1 at your authorized Yamaha Professional Audio Products dealer. Or for a complete brochure, write: Yamaha International Corporation, Professional Products Division, P.O. Box 6600, Buena Park, CA 90622. In Canada, Yamaha Canada Music Ltd., 135 Milner Ave., Scarborough, Ont. M1S 3R1.
addition or subtraction of signals. For example, connecting two secondaries in series with the same polarity (dot indicates positive polarity) will sum the two signals, while connecting the windings in series with opposing polarity will difference the two signals.
While both of these circuits are adequate at suppressing ground-potential errors, they are limited in their ability to eliminate noise signals induced into the interconnecting wires. Unless the signal source has an infinite impedance to earth ground (not likely), the two lines (positive and negative) will have different impedances to ground. These different impedances will cause slightly different noise voltages to be induced in each line, and thus not provide complete noise cancellation. An impedance can be added in series with the negative lead at the sending unit, equal to the single-ended source impedance, balancing (there's that word) the impedance to ground (Figure 3). This addition will improve rejection of common-mode noise and crosstalk, at the expense of doubling the effective source impedance.
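A toy resistive model shows why matching each line's impedance to ground matters; the coupling impedance and line impedances below are arbitrary values of mine, not measured figures:

```python
# Noise v_noise couples onto each line through a coupling impedance;
# how much of it appears on the line depends on that line's impedance
# to ground (a simple voltage divider).
def coupled_noise(v_noise, z_coupling, z_line_to_ground):
    return v_noise * z_line_to_ground / (z_coupling + z_line_to_ground)

v_noise, zc = 1.0, 100_000.0

# Mismatched impedances to ground: the two lines pick up different
# noise voltages, so the differential receiver sees the difference.
residual_unbalanced = coupled_noise(v_noise, zc, 600.0) - coupled_noise(v_noise, zc, 10.0)

# Equal impedances to ground: identical noise on both lines cancels.
residual_balanced = coupled_noise(v_noise, zc, 600.0) - coupled_noise(v_noise, zc, 600.0)
print(residual_unbalanced, residual_balanced)
```

The residual in the mismatched case is exactly the non-common-mode part that no amount of CMRR can remove, which is the column's argument for the series-balancing impedance.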
However, the resultant circuit will still suffer from an imbalance in the impedance to ground as seen by a normal (sometimes referred to as "metallic") signal. This imbalance will cause signal currents to flow in the various equipment ground paths. The voltages generated by these ground currents should be common-mode, and thus reducible by the differential amp. It is good engineering practice, however, to avoid these currents in the first place and not tempt Murphy to visit.
The easiest way to satisfy all of these requirements, and get some extra benefit...
At Peavey Electronics we're dedicated to our commitment to design and manufacture high performance products at realistic prices. We've underlined that philosophy with our Celebrity Series line of microphones.
The Celebrity Series feature large diameter diaphragm/voice coil structures for increased sensitivity with the ability to handle high sound pressure levels. These higher output levels allow for significantly less mixer gain and are a tremendous aid in maintaining good signal-to-noise ratios.
Perhaps the most important characteristic of any performing microphone is reliability. The design of our cartridge/shock mount system increases ruggedness as well as isolation capability to insure long-term performance under severe field conditions.
Our microphone screen utilizes extremely heavy gauge wire that has been "junction locked". Once the screen is formed, we do not stop there. The heavy wire screen is "fired" in an oven after forming, thus causing the plated wire to "fuse" at all interconnecting points. The result is an unbelievably durable "brazed" wire windscreen that will hold together under the most severe abuse. After the ball windscreen is formed, brazed and coated, a precision urethane foam pop filter is fitted to minimize undesirable proximity effects. This special acoustically transparent foam protects the entire sound system by breaking up explosive high-SPL pressure waves created by close vocals or close-miked percussion instruments. For those applications requiring even more acoustic screening from wind noise, etc., Peavey offers special external colored wind-noise filters that slip over the screen and internal pop filter.
While outwardly, the appearance of the Celebrity Series is somewhat conventional, the aspect of "feel" has been given heavy emphasis since our experience has shown that performers prefer a unit that not only sounds right and looks right, but must also have a comfortable balance, weight, and overall tactile characteristics.
Special "humbucking" coils (models CD-30 & HD-40) have been designed into the microphone element that effectively counter-balance any hum that might be picked up from external sources. Performers who play clubs where hum from light dimmer switches or other sources is a problem can appreciate this unique feature.
We invite comparison of our Celebrity Series with other cardioid microphones. You'll see why we feel that in terms of performance, features, and price, there is no competition.
For a complete catalog featuring the entire line of Peavey sound reinforcement equipment send $1.00 to Peavey Electronics, Dept. A, 711 A Street, Meridian, MS 39301
to boot, is to break the signal up into two opposing polarity versions of the original. The positive polarity version is sent at one half its original amplitude down the positive line, and the opposite polarity version sent at one half the original amplitude down the negative line. When these two signals are recombined in a differential summing input, such as one of those shown in Figure 4, the original full-level signal is recovered, while common-mode noise is suppressed and ground potential errors rejected.
In addition, crosstalk will be dramatically reduced, for two reasons: first, you are now only sending one half the original peak level down the line; and secondly, there will be some self-cancellation by the two closely-coupled but opposite polarity signals. Note: you can get all the benefits of balanced outputs/lines with the simple differential input of Figure 1B, except for the freedom from ground-signal currents.
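The half-amplitude, opposite-polarity scheme described above can be demonstrated in a few lines; the signal and noise values are arbitrary:

```python
def balanced_send(v):
    """Split a signal into two half-amplitude, opposite-polarity legs."""
    return 0.5 * v, -0.5 * v

def differential_receive(pos, neg):
    """Recombine the legs; anything common to both lines cancels."""
    return pos - neg

signal, induced_noise = 1.0, 0.3      # noise coupled equally onto both lines
pos, neg = balanced_send(signal)
recovered = differential_receive(pos + induced_noise, neg + induced_noise)
print(recovered)                      # full-level original, noise cancelled
```

Note that each line only ever carries half the peak level, which is where the crosstalk advantage comes from.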
Now for the strict definition of balancing: A "balanced" output will provide two equal (but opposite polarity) signals, symmetrically situated about ground or some appropriate reference potential. The impedance to ground from both positive and negative outputs will be equal for common-mode signals and for normal signals. Note: the common-mode impedance may be different from the normal impedance, as long as it is always the same at both outputs. A "balanced" line and input will have identical impedance restraints, with the "balanced" input having very closely matched but opposite polarity gain at its input ports.
**Electronic Versus Magnetic**
It is possible to meet these requirements easily with op-amps as well as transformers. Since op-amps are much cheaper and smaller than even a low-grade transformer, you might wonder why anyone would ever use one. Well... there are several reasons:
First, transformers make an excellent
EMI: AMS
At a recent studio managers' conference held at EMI Abbey Road Studios in London it was unanimously agreed that pieces of AMS outboard equipment would be made available for every control room in all EMI recording studios worldwide. The delegates represented studios from EMI's international network including Japan, Australia, New Zealand, U.S.A., Germany, Sweden, South Africa, France, Holland and the U.K.
AMS NEW·NEW DMX 15-80S Two Channel Sampling
A new update has now been introduced for the DMX 15-80S giving users the option of sampling and triggering two independent pieces of information.
Each of these samples is controllable as with the original single loop - continuously looped, manually single triggered or triggered by audio input. In the case of audio triggering, audio input sufficient to illuminate either the channel A or channel B input LEDs will result in triggering of the sample stored on that channel of the unit.
AMS NEW·NEW RMX 16 Memory Expansion
The RMX 16 can now be supplied with memory expansion to increase the number of factory set programmes from 9 to a number capable of accommodating all AMS factory set programmes available at any one time. New programmes for the RMX 16 will still be made available on bar code allowing those owners with remote terminals and wands to immediately take advantage of new software issued.
AMS NEW·NEW TIMEFLEX
AMS Timeflex has continued to prove its popularity with audio, video and film post production facilities by providing very high quality audio time compression. Timeflex is capable of operating in mono, dual channel or stereo modes and for this reason contains additional circuitry to that offered with standard AMS pitch changers to ensure complete phase matching of channels when used on a stereo signal.
A new interface card for AMS Timeflex is now available providing communications to external audio, video or film machines. The two standards currently offered are RS 422 and 9K6/19K2 tacho signals. These interfaces allow Timeflex to automatically correct audio pitch should the machine to which it is interfaced be vari-speeded. Alternatively, again using this interface, Timeflex can behave as master simply allowing the user to enter a new play time and accordingly Timeflex will accurately alter the machine speed and correct the audio pitch.
TIMES SIX
The popularity of "Echo Times", particularly in the U.S.A., as a medium for keeping owners, users and potential owners of A.M.S. equipment up to date with the latest developments has not gone unnoticed. We have received many requests to supply back-issues and accordingly reprints of all previous issues have been made and complete sets are now available on request.
The following people were interviewed in issues 1 to 5, all discussing their uses and applications for A.M.S. units: Martin Rushent, Kevin Peak, Air Studios, Hilton Sound Rental Company, Tom Bailey (of the Thompson Twins), Phil Collins, Humberto Gatica, Jeff Lynne of ELO, Paul McCartney and Hugh Padgham.
"The AMS DDL is used to provide variation on the various rhythms, especially the bass drum rhythms. Effects used on 19 were setting the delay to a semi quaver's length so that instead of a steady four on the bass drum you get sixteenth notes in succession. A reverb with a long delay time could then be added to the original bass drum but omitted from the echoes for extra effect. Another effect that was used was to make the echo fall on an existing beat such that phase elimination would occur.
Also sometimes I add a bit of white noise to the snare by playing it onto a track from a synth, just to make it sound bigger. And I've found ways of using the AMS to make the sound much bigger."
Paul Hardcastle talking in an interview with Richard Walmsley in Electronic Soundmaker and Computer Music magazine.
"It is generally felt in digital circles that hard-disc editing is the way of the future, and with AudioFile, AMS has beaten many of its larger competitors. The software possibly needs a little refinement, but I for one am looking forward to the day when I can install one of these devices in Tape One."
Bill Foster of Tape One studios talking in Music Week
"One of the stars of APRS 85 was AudioFile from AMS."
Jim Evans of Music Week.
"On Mag element A there was an LCR band mix, mag B contained Sting's vocal on track one, and the girl backup vocals on tracks two and three, and on the last three-track mag element there was bass and stereo audience. AMS digital effects were summed onto selected tracks during this mixdown: "AMS mania" according to Aaron."
Brad Aaron talking to Larry Blake of Recording Engineer Producer magazine about "The Police Synchronicity Concert" film.
"After all, recording in 1985 is not like recording even in 1982. A little bit of the modern technology had kind of passed them by while the band was regrouping (after Lionel Richie went solo). They saw the AMS gear lined up in the outboard rack, and they couldn't believe it. We were sampling drums: we'd have a guy come in, but we wouldn't use him playing - we'd just sample his kit. Then we would have the track programmed on a Linn and we'd replace the machine bass drum or snare with sampled sounds.
It took the making of this album for the band to embrace the new technology."
Dennis Lambert talking about the making of the Commodores "Nightshift" album in an interview with Mel Lambert and Ralph Jones of Recording Engineer Producer magazine.
"One thing we did was to take the kit out into the live foyer, record the snare onto digital, pick up a good sounding hit and dump it into the AMS digital memory. Then in the mix we triggered it from the normal snare and added it to the overall sound to give a bigger ambience."
Producer Chris Kimsey discussing the track "Kayleigh" by Marillion with Jim Betteridge in International Musician and Recording World.
"If we're doing the cymbal parts separately, I'll use an AMS stereo time processor with no delay, using the pitch changer with the A channel reading 1.005 and the B channel reading 0.995 (1.000 is normal pitch). If you send the left-hand cymbal track to the B channel of the AMS, which returns on the right-hand side, and send the right-hand cymbal track to the A channel of the AMS, which returns on the left-hand side, this gives a nice zingy spread to the cymbals without being too splashy."
Producer Steve Brown talking to Janet Angus about his work with Wham, ABC and many others in HSR magazine.
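The 1.005/0.995 ratios Brown quotes are tiny detunes; expressed in cents (1200 times the base-2 log of the ratio) they come to well under a quartertone each way, which is why the spread stays subtle. A quick check, with a helper name of my own:

```python
import math

# Convert a pitch-change ratio (1.000 = unchanged) to detune in cents.
def ratio_to_cents(ratio):
    return 1200.0 * math.log2(ratio)

print(round(ratio_to_cents(1.005), 1))  # about +8.6 cents on the A channel
print(round(ratio_to_cents(0.995), 1))  # about -8.7 cents on the B channel
```

Roughly ±9 cents total, cross-panned, is enough to decorrelate the two sides without sounding obviously out of tune.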
"Outboard equipment is also comprehensive with AMS 15-80S and RMX 16 units, a Yamaha Rev 1 and the Lexicon 224. Nick also has thoughts for the future in this area: 'I would dearly like to get the AMS AudioFile. It would be absolutely super - both for our audio clients and straight audio use.'"
Nick Turnbull talking to Sound Engineer magazine.
"On the Go West album we only had the MSQ 700, which was our lifeline. Now we've got the SXB, which links up really well with the TR 909. It's great for programming, triggering the AMS and stuff like that.
"For the Radar album we used the SRC - I like to have that facility because you can change the drum patterns if new ideas come up. On some of the tracks we had to pull out whole bass lines and relocate them with the AMS. It's like painting pictures - you can just rub a bit out and move it. It might take two hours but I'll pick out a couple of things I can use somewhere else and it sounds really whacky."
Go West producer Gary Stevenson talking to Peter Buck of Sound Engineer magazine.
Ray Parker Jnr.
Ray Parker Jnr. is one of those people who never cease to amaze you with how many projects they have been involved in, or how many successful songs they have written. Although not particularly big in England, Ghostbusters gave Ray three separate runs at the British charts – first on the single's release, second on the release of the Ghostbusters film, and finally when it climbed the charts again as a 12" mix.
A.M.S.: Briefly, what is your history?
R.P.J.: The first 5 or 6 years of my career I worked as a studio musician and got involved in a series of different projects ranging from Marvin Gaye and Stevie Wonder to the Rolling Stones and Boz Scaggs. Then I got into writing songs and had success with things for Barry White, Rufus, Chaka Khan and of course my own stuff – Ghostbusters was obviously a big break.
A.M.S.: Is there anything you consider distinctive in the way you work?
R.P.J.: I don’t know about everyone else but I write to sounds. I have to go into the studio and hear the drums just like they are going to be on the record – I’ve got to hear the synthesizers, again just as they are going to sound on the record. Once I’ve got a framework I can formulate other things around that – I can’t just sit down with a Linn like some people do. I have to have the sound EQ’d with reverb and effects added which is why AMS is so important.
A.M.S.: What other reverb units do you use?
R.P.J.: Let me see, I’ve had an AKG spring for 9 years. I have a Lexicon 224 and the 224X and a big EMT but I’ve never really got into that. I like things where I can reach them and just punch buttons which is one reason why I decided to add the RMX 16. I love the sound of the A.M.S. reverb and the sounds I really like I can get quickly and easily. For that reason it’s the system I use most of all – that and probably the 224.
A.M.S.: Do you have any favourite programmes?
R.P.J.: All the programmes sound real good but my favourite is the AMS Nonlin. It's so different – it's unique! AMS Nonlin, I really love that one. The Reverse programme is nice too. I guess a plate – or a plate programme – will get people to say, well, that's reverberation – but these special effects programmes are real nice.
A.M.S.: So what’s next for you?
R.P.J.: I enjoy being a solo artist/engineer and all I want to do is get in there and play with more buttons and gadgets and just experiment. I've heard a lot about the DMX 15-80S DDL pitch changer and it sounds real interesting – I don't own one yet but my studios here are just choosing some new gear so who knows! Don't forget to listen out for my new album and 45 – you'll definitely hear lots of AMS on them.
Thomas Dolby seemed to appear from nowhere at a time when totally synthesizer based bands such as the Human League were enjoying the peak of their success. Unlike quite a few of the "totally electronic" bands, Thomas Dolby has survived and gone on to further develop his individual style. AMS caught up with him during a three month stay in Los Angeles where, amongst other things, he was completing work on a project with Joni Mitchell.
A.M.S.: So here we are in the Hollywood Hills!
T.D.: Yeh! I've rented this house whilst working here. The best thing about the house is not that it originally belonged to Jenny Agutter but that Steve McQueen's 50's pick-up truck is down in the garage in absolutely showroom condition.
A.M.S.: How did your career develop?
T.D.: When I was 14 I used to write the odd song on the piano but, not having had lessons, there was never any discipline to get good at it. Because of that I moved to synthesizers. People had just got past the long blond hair and cape stage, and instead of individual bravado on a Minimoog, people like Brian Eno exploring different textures created by synthesisers were beginning to influence popular music. Living alone in London during the Punk era meant that even though I'd got very good at writing and arranging quite sophisticated songs on the Portastudio that had just come out, I really wanted to play in a band. So I managed to get some session work with bands including Bruce Woolley and Lene Lovich. [Fairlight samples] have the potential that the more you build, the smaller it gets, if you know what I mean. So once I've done my arrangement I go back to the original sounds that I've sampled, store and edit them in the 15-80S and then trigger them from the Fairlight. That gives me far superior sound quality and perspective, longer samples and, very importantly, more accurate control by being able to offset the triggered samples to get the right feel to the piece.
A.M.S.: You aren't the first person I've heard mention perspective. How important is that to your music?
T.D.: Perspective has been an enormous breakthrough. The creative energy in England that continues to build up seems to have gone into production rather than the raw commodity, but there are a group of English producers that are 2 or 3 years ahead of the rest of the world - and I think it's because of their use of perspective. When all you had was an echo plate the information you got was how far away you were from an instrument. Now with delay lines and units such as the RMX 16 you can create atmospheres and your instruments can come from anything from a small room to an empty lonely canyon.
A.M.S.: Do you create these perspectives during recording?
T.D.: Yes, my approach is very cinematic. I tend to use the RMX 16 during the recording process to build up the song as I don't like leaving everything to the mix. You can make mistakes this way introducing perspectives to a single track that don't work when taken with the whole song. In an ideal world I would have a huge rack of multiple everything such that the mix would be vocals from the multitrack and everything else running live.
A.M.S.: Does that mean something like AMS AudioFile interests you?
T.D.: AudioFile is very exciting. I could quite happily do away with my multitrack tape recorder because AudioFile would allow me to drop in and out of record, edit within a track and repeat phrases. It's fascinating and, given my own view of the way things are going to go, I think AudioFile is the first serious device to arrive and I'm sure it will have a big influence.
A.M.S.: A final question. Is there any function on any piece of A.M.S. equipment you would miss most if you lost it?
T.D.: No. I'd miss them all!
— continued from page 18 —
band-aid. If you have a problem with imbalance at an input or output, simply bolt on a transformer and, presto, you’ve saved another box from hi-fi heaven.
Secondly, since they are more tolerant of miswired connections, transformers help in overall system reliability. Try shorting out your electronically balanced output. While this shouldn’t cause instant smoke on properly designed gear, you are using up one or more of its nine lives, making it more likely to fail from some other stress. Also, transformer-coupled inputs/outputs are not capable of passing DC, a nice thought when you’re using DC-coupled power amps to reproduce those alpha waves found in today’s popular music.
Finally, the best reason for using transformers — and it has only a little to do with noise — is the ability to block large common-mode potentials. The typical diff amp or electronically balanced input will only handle about 15V of common-mode garbage. This common-mode range can be increased by padding down the input and restoring it later, but with a subsequent reduction in dynamic range (because of the increased noise floor). Even with this increased common-mode range, a mis-wired stage-power circuit or lightning-induced spike (you don’t need a direct hit) can still cause catastrophic failure.
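The trade-off described above is straightforward gain arithmetic: padding the input by some number of dB multiplies the common-mode window by the same voltage factor, and raises the noise floor by exactly that many dB. An illustrative sketch (the 15 V figure is the article's; the function is my own):

```python
# Illustrative only: padding a balanced input extends its common-mode
# range by the pad's voltage factor, at the cost of the same number of
# dB of dynamic range (the noise floor comes up by the pad amount).
def padded_cm_range(base_cm_volts, pad_db):
    return base_cm_volts * 10 ** (pad_db / 20.0)

# A 15 V common-mode window padded down 20 dB now tolerates 150 V...
print(padded_cm_range(15.0, 20.0))   # 150.0
# ...but the stage gives up 20 dB of dynamic range in exchange.
```

Even 150 V is no match for a lightning-induced spike, which is the article's point in favor of transformers for high-risk runs.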
Conclusion: Electronically balancing equipment is so inexpensive that I can see almost no reason for not doing it universally. (However, it still isn’t free.) If you are sending signals over fairly short runs, the use of shielded, single-ended outputs with the simple diff amp as shown in Figure 1 should be sufficient. If you are working with live PA, all line-level snakes should be balanced (mike-level snakes already are).
Whether you need transformers or not is a judgement call. If the band and house mixer plug into the same extension cord, don’t worry about it. If you get electricity bills from two different utility districts . . . worry! Finally, if you are dealing with telephone lines, or high-risk gigs (like protecting yourself when interfacing with somebody else’s system) transformers are still the only way to go.
If you do decide to use a transformer, check out what is commercially available. Transformers are not at the point where their sound quality can be taken for granted. While, at their best, they can approach a high performance op-amp, the typically lower-priced transformer can degrade your system’s sound. If you are in a cost-sensitive situation (and who isn’t?) listen to a few different samples of what you can afford. Take care to terminate them with the same impedances, and run them at the same levels they will see in normal use. Maybe even listen to a sample of what you can’t afford, and then make your choice.
For simple interfacing, I can think of no good reason to run the signal through even the best transformer. But, until everything is getting sent around on fiber-optic cables, there will be a place for transformers in professional audio.
Reading For Extra Credit
References 1, 2 and 3 go into great detail regarding grounding and shielding of all kinds of signals. Reference #4 is a good source of information about transformers designed for audio applications.
1. Henry W. Ott, *Noise Reduction Techniques in Electronic Systems* (John Wiley & Sons, New York, 1976).
2. Ralph Morrison, *Grounding and Shielding Techniques in Instrumentation* (John Wiley & Sons, New York, 1977).
3. Edward F. Vance, *Coupling to Shielded Cables* (John Wiley & Sons, New York, 1978).
4. Various Jensen Transformers data sheets and application notes, available from 10735 Burbank Boulevard, North Hollywood, CA 91601.
News
KURZWEIL ANNOUNCES 50 kHz USER SAMPLING FOR 250 DIGITAL SYNTHESIZER
According to Bob Moog, VP of new product research, “The new 50-kHz enhancement has been tested extensively at the factory and in the field, and offers dramatically brighter and crisper sounds.”
Starting in September, all shipments of the Advanced Sampling Kurzweil 250 unit, as well as the Sound Modeling Program option, will come standard with 50-kHz digital sampling. [An in-use operational assessment of the K250 can be found on page 194 of this issue — Editor.]
List price for the new 50 kHz Sound Modeling Program option is $1,995, the same as the previous 25-kHz version. Current owners of K250s with the 25-kHz version can upgrade to 50 kHz for $250 at an authorized Kurzweil service center.
MORE NEWS . . . continued on page 225 —
A PERSONAL VIEW
LET'S NOT FORGET THE SOUND:
Quality and Operational Advantages of Working with
35mm Mag Film, Compared with Tape-based Systems
by Murray R. Allen, president, Universal Recording Corporation
As the recording industry moves into the mid-Eighties, we are bombarded on a daily basis with all kinds of new technology. There are digital reverb systems that will allow us to create the acoustics of a room of unlimited dimensions around any sound that we record; there are synthesizers that will create just about any sound we want to put into that room of limitless size; and there are tape machines capable of recording up to 32 tracks on one piece of tape. And if that isn't enough, we can buy, rent or steal any number of timecode synchronizers to enable us to record an unlimited number of tracks of our synthesized sounds placed in the middle of our synthesized room.
I think this is all fantastic; some of the sounds being created today are really great in every meaning of the word. But there are some very basic principles being overlooked, and the technology of audio postproduction sweetening for video might well represent a good example of this possible obsession with technology at the expense of sound quality.
As many *R-e/p* readers will already be aware, when engineers began to record video pictures on magnetic tape, it became necessary to develop an addressing system that would accomplish the same purpose as the feet-and-frames information generated by sprocket holes on film. The resultant SMPTE timecode data, recorded on a conventional audio track – or, in the case of Vertical Interval Timecode (VITC), within the video picture itself – made it possible to locate electronically any point on any piece of magnetic tape to an accuracy of one frame, or about 33 milliseconds. It also became possible to lock together two or more tape transports in perfect synchronization, a function that opened the possibility of creating totally mixed tracks that would synchronize with picture without using the time-honored method of sprocketed magnetic film.
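Timecode's frame-level addressing can be sketched in a few lines. This assumes 30 fps non-drop code for simplicity (broadcast NTSC actually runs at 29.97 fps with drop-frame variants); the one-frame resolution works out to the roughly 33 ms the text quotes:

```python
# Sketch, assuming 30 fps non-drop timecode: convert an HH:MM:SS:FF
# address to an absolute frame count. Resolution is one frame.
def tc_to_frames(tc, fps=30):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

print(tc_to_frames("01:00:00:00"))  # 108000 frames into the reel
print(round(1000 / 30, 1))          # 33.3 ms: one frame, the quoted accuracy
```

Locking two transports then reduces to chasing until both machines report the same frame count.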
Those of us involved in both film mixing and non-sprocket recording were thrilled with this capability, for obvious reasons. In the late-Seventies the consoles used for studio recording were moving ahead technically faster than those designed for film re-recording, and the average, small rock and roll studio had more processing equipment than a larger film-mixing facility. The speed and difficulty of synchronizing sprocket film to tape, along with the questionable quality relative to magnetic film, contributed to the appeal of video sweetening using videotape rather than film.
Advantages of Mixing on Magnetic Film
However, in embracing this new video-based technology we have all overlooked the good points of film mixing. And times have changed. Seven years later, film mixing consoles are as well equipped as music consoles and, when it comes to panning features, they are superior. Every type of processor known to man will also be found in the film-remixing theatre. Film and tape can now be synchronized with no problems relative to speed or operational difficulty. In fact, when it comes to shifting tracks relative to one another, film-mixing techniques offer a superior advantage, since a mag dubber can be advanced a sprocket hole at a time to provide sync offsets in about 10-millisecond increments. Magnetic film stocks have also come of age. Because it is coated on a three- or five-mil base, mag film offers superior print-through performance relative to standard two-inch tape. Also, 35mm magnetic film moves at speeds of 18 or 22.5 ips, depending on whether it is running at 24 or 30 frames a second. In addition, the frequency response of mag film is comparable in every way to regular two-inch tape.
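Both the 18/22.5 ips speeds and the 10 ms sprocket-hole increments fall out of 35mm film geometry, assuming the standard frame pitch of 0.75 inch with four perforations per frame:

```python
# Quick check of the figures above, assuming standard 35mm geometry:
# 0.75 inch per frame, four perforations (sprocket holes) per frame.
FRAME_PITCH_IN = 0.75
PERFS_PER_FRAME = 4

def film_speed_ips(fps):
    # linear speed is frame pitch times frame rate
    return FRAME_PITCH_IN * fps

def perf_offset_ms(fps):
    # advancing one sprocket hole shifts sync by 1/(4 * fps) seconds
    return 1000.0 / (PERFS_PER_FRAME * fps)

print(film_speed_ips(24), film_speed_ips(30))  # 18.0 and 22.5 ips
print(round(perf_offset_ms(24), 1))            # about 10.4 ms at 24 fps
```

So a mag dubber nudged one perforation gives a finer sync offset than a whole video frame (about 33 ms) would.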
Now let us get down to the reality of what happens in a mix. First we will examine what happens during a typical video-sweetening session. The client has mixed down his original 24-track music to four tracks of a new 24-track tape, along with timecode and 59.94 Hz video sync on two additional tracks. The track breakdown is (usually) rhythm, lead vocal, background vocals, and horns and strings. Additional tracks to be recorded on the new timecode-stripped 24-track consist, typically, of sync dialog recorded on location, voice-over announcer recorded in the studio, and five sound-effects tracks.
The location sync sound normally comes out of the field on quarter-inch/full-track tape with a 60 Hz neopilot resolve tone recorded across the tape. Step #1 will be to transfer the location audio, using a machine capable of resolving the neopilot tone, to one channel of a two-track recorder, simultaneously recording timecode onto channel #2. (Make sure that your timecode generator is looking at the same clock source as your neopilot resolver.) Now line up your 24-track with your video picture; hopefully somebody knows at which timecode frame the music...
track will be synchronized with the video picture! Best of all you will have an Edit Decision List, from which — using the newly created location sync two-track tape — the location sound can be laid onto the 24-track tape while constantly checking it with picture for sync.
The same procedure is followed for the voice-over announcer. Of course, when laying this audio onto the 24-track, you will not be checking for sync but for timing and dialog placement. If the timing is off, you may have to electronically edit the pauses to ensure correct timing. Next, follow the same procedure as above on each of the five sound-effects tracks.
Now you are ready to mix these 11 tracks. The music is second-generation; the location sync is third-generation, as are the voice-over announcer and sound effects. (As an aside, one can record the announcer directly to the 24-track tape while viewing picture; this method is a much better way to go, but quite a bit more costly.)
All of the above sound elements are mixed to a two- or four-track tape with audio on one or two tracks — depending on whether or not the program is monaural or stereo — with timecode on another track. In the case of four-track masters, it is desirable to also record 59.94 Hz derived from your in-house sync generator onto another track. This four-track audio master will be used to transfer (or layback) the final mono or stereo audio tracks to the master videotape.
You are now four generations down on the music, and five generations down on everything else. Some people like to master directly from the multitrack to videotape, a technique that can be okay if the resultant videotape is the one that's actually going to air without too many interim plays. (Remember that noise on one-inch C-Format videotape is comparable, at best, to a chrome audio cassette. I do not know anybody who would want to master their audio onto an audio cassette!) If there are any additional generations created on videotape, or if excessive plays cause dropouts on the audio track, the master audio can always be re-laid.
Noise Build-up with Audio Tape
Let us discuss why it is worthwhile to conserve generations relative to analog audio. The first and most important consideration is noise, which is increased by every tape generation. In the above examples, the music would have picked up about 3.8 dB of noise prior to its transfer to videotape, while all other tracks individually would have picked up a total of 10 dB of noise prior to their transfer to videotape. Other considerations are increases in distortion, tape saturation, and a "dulling" of the general sound. These factors are influenced to varying degrees relative to the pro-
... continued on page 30 ...
GET YOUR ACT TOGETHER AND TAKE IT ON THE ROAD.
Packing up for a gig. It's an important moment of truth for every musician. Within the safe confines of your studio, you've worked your music into shape. Polished it. Perfected it. Put it on tape. Now it's time to take it on the road. You're excited, keyed up. How will your music hold up under the hot lights and cold scrutiny of the outside world?
One thing's certain: you'll be counting on your equipment to come through for you when you're up on stage. Your mixer? If it's a TASCAM 300 Series, it's the same quality you've recorded with in the studio. The same familiar, clean performance in a package compact and rugged enough to hit the road with you.
One mixing console for recording and sound reinforcement. The M-300's are the first series of mixers to accomplish this elusive ideal. They have all the foldback, effects, subgrouping, and monitoring you'll need. Balanced and unbalanced inputs and outputs. Stereo or mono output. Top panel switching matrix to eliminate patching. Sophisticated solo system. Flexible buss assignment. Extensive talkback system.
Over a decade of experience designing boards that last means TASCAM dependability. Find out how musicians are making the most of their mixers. See the TASCAM 300 Series at your dealer today. Or write to us for more information at: 7733 Telegraph Road, Montebello, CA 90640.
THE TASCAM 300 SERIES MIXERS
TASCAM THE SCIENCE OF BRINGING ART TO LIFE.
© Copyright 1985 TEAC Corporation Of America
For additional information circle #117
Since its introduction the E-mu Systems Emulator II has set the standard for digital sampling keyboards. The Emulator II offers truly stunning sound quality and an impressive array of features: 17 seconds of sampling time, a built-in disk drive, a variety of analog and digital sound processors (including VCA's, VCF's and LFO's), a powerful MIDI sequencer, a SMPTE code reader/generator, full MIDI implementation and much more. The sonic realism, creative power and expressive control of the Emulator II are unequaled by any digital sampling keyboard.
Now, Digidesign announces Sound Designer—a powerful music software package that links the Emulator II and the Apple Macintosh, creating a music system offering unprecedented performance at a breakthrough price.
What can Sound Designer do? Sample any sound with the Emulator II. Transfer the sound to the Macintosh and display the waveform on the Mac's high resolution screen. You won't be kept waiting—the Macintosh and Emulator II communicate at the lightning speed of 500,000 bits per second—nearly 17 times MIDI rate!
Redraw any part of the waveform using Sound Designer's pencil. Remove clicks or other extraneous noises from sounds by simply drawing them out of the waveform.
Use Sound Designer's digital mixer to perform a variety of digital signal processing functions. Mix sounds in any proportion, fine tune the level of a sound or create hybrid sounds using the merge function. A saxophone that gradually becomes a screaming electric guitar? No problem. Of course, the sound you create can be quickly transferred to the Emulator at any time for high quality playback.
The sound waveform displayed can be scaled independently on both the amplitude and time axes to show any degree of detail, from a few samples to the entire waveform. Use the "Zoom Box" to magnify a small area of the waveform for closer inspection. Scale marks and a screen cursor display the exact time location and level at any point in the sound.
Use cut and paste editing to rearrange the sound, or to splice pieces of one sound onto another sound—up to three sounds can be displayed on-screen at once. Sounds can be edited with an accuracy of nearly 1/30,000th of a second! Throw away your razor blades.
Synthesis? Yes. Sound Designer includes direct digital synthesis. Because it is software (algorithm) based, virtually any type of synthesis can be implemented, including FM, Waveshaping, Additive and other powerful synthesis techniques.
And once you have created your sounds, you can use Sound Designer's Emulator II front panel mode to adjust all of the Emulator II's parameters. Graphic programming screens are provided for each Emulator II module: arrange samples on the keyboard, draw filter response and ADSR curves, set up controller and MIDI configurations, adjust keyboard velocity, MIDI, controller and arpeggiator parameters and more.
The essential process of looping sampled sounds is greatly simplified by Sound Designer. No more random (and time consuming) searches for loop points—you can see the waveform and quickly assign the loop in the proper location.
Break the sound file down into hundreds of separate frequency bands using Sound Designer's FFT (Fast Fourier Transform) based frequency analysis. The three-dimensional FFT waveform reveals the envelope of each frequency as the sound evolves. Very educational for those intrigued by the nature of sound.
At about one-third to one-tenth the price of comparable systems, the Sound Designer/Emulator II combination represents the best value in computer music systems. However, the system offers another advantage more important than money.
Most computer music systems are hardly user-friendly. User-indifferent is a better description: strange commands to memorize, confusing terminology and painfully slow operation have thwarted many musicians' attempts to use this advanced technology.
You don't need unlimited patience and a Ph.D. to learn Sound Designer. Sound Designer takes full advantage of the Macintosh's simplicity—program functions are visually represented by icons (pictures). No cryptic commands to memorize!
The Emulator II/Sound Designer system is a valuable tool, whether you're scoring a film, adding sound effects to a video production or creating the sounds for your next hit. You'll find the system quite stimulating—to both your creativity and your ears!
Want to see the system in action? Send $29 to Digidesign (address below) for a 30 minute demonstration video (specify Beta or VHS). Like to know more about the Emulator II? Send $2.00 to E-mu Systems for a color brochure and a very impressive demo record.
Sound Designer requires a 512K Macintosh, 2 disk drives or an internal hard disk (recommended), and an active imagination. The Emulator II requires fingers.
E-mu Systems, Inc.
applied magic for the arts
2815 Chanticleer
Santa Cruz, CA 95062
408.476.4424
digidesign
920 Commercial Street
Palo Alto, CA 94303
415.494.8811
QUALITY ADVANTAGES OF 35MM MAGNETIC FILM
— continued from page 26 —
gram material, the competence of the recording and mixing engineers and, of course, the condition of the equipment being used. (Some engineers can capture more "punch" in their fourth-generation copies than others in their original; because of the many variables involved, I will not try to evaluate such factors.)
Another problem is wow and flutter. With current state-of-the-art equipment, one has to go many generations before exceeding the wow and flutter induced in the video transfer. Although technically this is a problem, in the real world of well maintained equipment, two generations more or less should not cause serious trouble.
Summarizing, the main concern of generation loss is one of additive noise.
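Generation noise adds on a power basis, since the copies' noise contributions are uncorrelated. In the idealised case of n equal-noise generations the floor rises by 10·log10(n) dB; the article's 3.8 dB and 10 dB figures also fold in machine- and format-specific differences, so treat this as a sketch of the principle rather than a reproduction of those numbers:

```python
import math

# Hedged sketch: uncorrelated noise floors (in dB) combine on a power
# basis. n equal contributions raise the floor by 10*log10(n) dB.
def combined_noise_db(floors_db):
    return 10.0 * math.log10(sum(10 ** (f / 10.0) for f in floors_db))

# Four equal-noise generations, each with a -60 dB floor:
rise = combined_noise_db([-60.0] * 4) - (-60.0)
print(round(rise, 1))  # 6.0 dB above a single generation
```

Each doubling of generation count costs roughly 3 dB, which is why conserving even one generation is audibly worthwhile.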
Noise Build-up with Mag Film
Now let us go through the same process outlined above using film-mixing techniques. The original 24-track music tapes would be mixed to four-track 35mm magnetic film, the film recorder receiving its sync from the same time-code already recorded on the original 24-track for automated mixing purposes and picture sync during music recording. The four-track music mix would then be physically cut into proper synchronization with the picture on a standard Moviola. The location sync sound would be transferred to mag, utilizing a playback machine capable of resolving to the neopilot signal, and then physically edited to conform to picture. Similarly, the sound effects and voice-over announcer audio would all be transferred to 35mm magnetic film, physically edited and placed in the correct sprocket registration relative to picture. Although electronic and physical (cut-and-splice) editing take about the same amount of time, once the "splice" is made on film it can be physically laid in at high speed, whereas the electronic audio track on non-sprocketed tape must be laid in real time.
So, by the film method we are two generations down on music, sync, voice-over and sound effects; now comes the mix. We can master to a four- or a two-track tape machine, as well as a monaural or two-track film recorder. (Once again, we should try to avoid mastering to a C-Format VTR unless this master was going directly to air.) Once this audio master has been transferred to videotape, we are now four generations down on music, sync, voice-over and effects — the same result relative to music as with video sweetening, and an improvement of one generation over video sweetening relative to everything else. However, there is another factor to be considered.
The track width of a monaural film head measures 200 mils, and a four-track film head 150 mils per channel. In contrast, a four-track/half-inch tape head measures 70 mils per channel, while a two-track/quarter-inch head measures 80 mils per channel. As we all know, bigger track widths provide us a quieter and better signal, a fact that was proven when we moved to two-track/half-inch mastering. So, taking into consideration the above argument, the following will be true: assuming that a mono 35mm track is the quietest format, a four-track film head will be 1.25 dB noisier, a two-track/quarter-inch audio head will be 4 dB noisier, a four-track/half-inch audio head will be 4.5 dB noisier, and a 24-track head — which measures 37 mils per channel — will be 7.3 dB noisier.
All of which means that our original music transfer will be 2 dB quieter on four-track magnetic film than on a 24-track/two-inch tape. All our other tracks will have saved one generation by going via the film method, which is worth about 1 dB per track. When originally transferred to monaural 35mm magnetic film and mixed, these latter tracks will be 5 dB quieter than if the video-sweetening method had been used.
Using the video-sweetening mixing technique, the final mixed track prior to transfer to videotape will have an unweighted signal-to-noise ratio of about 52 dB. Using the standard film-mixing method, the comparable SNR will be 57...
dB. In the real world of mixing, of course, some of these tracks will be brought up in level, thereby increasing noise; other tracks will be brought down, thereby decreasing the noise. Also, some tracks will be equalized, which might either increase or decrease noise. Other tracks will be compressed, thereby increasing noise, while "silent" areas might be gated to decrease the noise. Other factors affecting noise levels include variations in the specifications of record amplifiers, the tape's running speed, and the actual recording equalization curve used.
The unweighted signal-to-noise ratio of a C-Format VTR is about 56 dB. Let us assume that the "zero" operating level for such a video recorder is 8 dB down from the 3% THD (Total Harmonic Distortion) level. Throughout the mixing process, we have been operating with our zero level at about 12 dB down from the 3% THD point. Therefore, the noise floor below zero operating level for the video-sweetening master will be about -40 dB; the same parameter for the film-mixed master will be -45 dB. When transferred to videotape, the film master will be only 3 dB above the C-Format noise floor, while the video-sweetening master will be 8 dB above the same noise floor. Percentage-wise, the video-sweetening master is 78% noisier than the film-mixing master. When your local television station adds its own form of compression, this additional 5 dB of noise will create problems.
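The noise-floor arithmetic above can be checked numerically; the "78% noisier" figure is simply the 5-dB gap converted to a noise-voltage ratio, 10^(5/20) ≈ 1.78. A sketch using only the levels quoted in the text:

```python
# Mix masters: zero operating level sits 12 dB below the 3% THD point,
# so the noise floor below zero level is (SNR - 12) dB down.
sweetening_floor = -(52 - 12)   # video-sweetening master: -40 dB
film_floor = -(57 - 12)         # film-mixed master: -45 dB

# C-Format VTR: 56 dB unweighted SNR, zero level 8 dB below 3% THD
vtr_floor = -(56 - 8)           # -48 dB

print(f"film master: {film_floor - vtr_floor} dB above the VTR noise floor")
print(f"sweetening master: {sweetening_floor - vtr_floor} dB above the VTR noise floor")

# Express the 5-dB difference as extra noise voltage, in percent
extra_pct = (10 ** ((sweetening_floor - film_floor) / 20) - 1) * 100
print(f"sweetening master is {extra_pct:.0f}% noisier")
```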
As you can see, it would be possible to go another generation on film and still be quieter than if the same mixing process had taken place on the regular audio tape used during video sweetening. Of course, in the hands of a good engineer, the video-sweetening technique represents a valuable and useful production tool. And, when we reach the stage at which only digital recorders are used, it will truly be a competitive tool. Eventually, all audio information will be stored in digital form on magnetic or optical disk and/or solid-state devices. When this happens, all of the above arguments regarding the noise and sonic advantages of mag film over audio tape will be meaningless. [For a full discussion of the impact that digital storage and random-access editing systems will have on film post production, see Larry Blake's article elsewhere in this issue — Editor.]
In the meantime, however, we should all make a gigantic effort to keep ourselves constantly aware of the effect that recording technology can have on the pristine quality of sound. If we allow our ears to accept less than the best, we are not properly serving our audio community. For those of us who plan to remain in this business long range, the stewardship of our aural credibility is of paramount importance. Whenever a new technology is added, or a new system is devised, or a new machine is tested, ask yourself... "What does this do to the sound?"
SANKEN INTRODUCES FOUR MORE MICROPHONES
Maker of world-acclaimed CU-41 double-condenser microphone releases new products to international market.
Sanken Microphone Co., maker of the CU-41 two-way condenser microphone, famed among sound engineers throughout the world for the transparency of its recording qualities (which make it perfect for compact disk recording), is pleased to announce the release of four more of its high quality microphones to the international market. The microphones are:
**CMS-6 MS Stereo Microphone**
A small, lightweight, hand-held microphone for high quality outdoor radio, TV and movie recording. Comes with portable battery power supply and switchable matrix box. Freq. response 50Hz to 18kHz, dynamic range 108dB, self noise less than 19dB.
**CMS-2 MS Stereo Microphone**
For quality music, radio, and TV studio recording. Small and lightweight, it has been widely used in Japan for more than eight years. Freq. response 20Hz to 18kHz, dynamic range 129dB, self noise less than 16dB.
**CU-31 Axis Uni-Directional Condenser Microphone and CU-32 Right Angle Uni-Directional Microphone**
For music, radio, TV and movie studio recording. Renowned for their high performance and remarkable reliability. Freq. response 20Hz to 18kHz, dynamic range 129dB, self noise less than 19dB.
For more information on these new microphones, as well as on the famous CU-41, contact your nearest Sanken dealer, as listed below.
| New York | Nashville |
|----------|-----------|
| Martin Audio Video Corp. | Studio Supply Company, Inc. |
| 423 West 55th Street | 1717 Elm Hill Pike, Suite B-9 |
| New York, New York 10019 | Nashville, Tennessee 37210 |
| TEL (212) 541-9900 | TEL (615) 366-1890 |
| TLX 971846 | |
Japan's most original microphone maker
Sole export agent: Pan Communications, Inc.
5-72-6 Asakusa Taito-ku, Tokyo 111 Japan
Telex J27803 Hi Tech/Telephone 03-871-1370
Telefax 03-871-0169/Cable Address PANCOMMJPN
October 1985 □ R-e / p 31
For too long now, many audio professionals would argue, the pre-recorded cassette has been looked upon as the "poor relation" of the music software market. There is no getting away from the fact that the Compact Cassette was introduced to the public several decades after vinyl releases had become firmly established in the majority of people's minds as the most convenient (and familiar) way of enjoying pre-recorded music. (And, it must be conceded, the first cassette decks and tapes were of a very much lower quality than we are used to in the Eighties; such unfavorable, early experience might in part explain the public's reluctance to add yet another piece of hi-fi hardware.) In addition, the exceptional success of the Compact Disc has somewhat stolen the limelight from advances being made in both the analog disk and cassette manufacturing industries — one example of which, direct-metal disk mastering, is described in a feature article beginning on page 58 of this issue.
On the positive side, however, pre-recorded cassettes now outsell vinyl releases, the 50% market share for Compact Cassettes having been passed just over two years ago. Also, the advent of "Walkman-type" portable cassette players, and radically improved in-car systems, has added to the growing impact of pre-recorded cassettes as a primary form of music software for a growing number of consumers.
But what of the technical advances being made in the design and operation of high-speed duplication equipment which, there is no denying, has radically increased the quality of pre-recorded cassettes? Despite the fact that the majority of R-e/p readers are involved with the production of master tapes for the record- and cassette-buying public (plus corporate and audio-video clients), an overview of the cassette duplication processes — in particular the format of master tapes to be delivered to duplication facilities, improvements in tape formulations, and so on — can prove useful if the maximum quality potential is to be realized from the pre-recorded cassette medium.
Against this background, the panel discussions that took place at the recent "Applications for Better Quality Cassettes" seminar proved extremely interesting. While there is insufficient space here to fully report on each of the sessions, several themes and future developments came to light during the three-day seminar, held in San Francisco during mid-August, and which attracted almost 300 participants.
Organized by ElectroSound, and co-sponsored by Agfa-Gevaert, American Multimedia, Ampex, Audiomatic, BASF, Capitol Magnetics, Dolby Labs, ICM, IPS, JRF, Lenco, MRI, Saki, Sprague, Studer-Revox, and others, the seminar began with an overview session, entitled "The Art in Cassette Masters." Panel members comprised Gene Wooley, chief engineer and director of engineering, MCA Whitney Recording; Ed Outwater, director of quality assurance, Warner Bros. Records; Mary Bornstein, VP of quality control, A&M Records; and Steve Miller, former director of production, engineering and QC at Windham Hill Records, under the moderation of Sandy Richman, administrator of the XDR Program at Capitol Records.
To extract the very best quality from the high-speed duplication process, Richman offered, a facility needs to start with a high-quality master tape, and he placed particular emphasis on encouraging record labels — and hence the session producers — to provide digitally-encoded masters. Wooley conceded that certain producers and artists may not care for the sound of digital; in which case, he said, a label should offer both analog and digital masters to cover all eventualities. Miller also pointed out that, while he is a great proponent of the sonic advantages offered by digital recording, what needs to be stressed is the operational advantages to a cassette manufacturing facility of working with digital masters. During the preparation of loop-bin masters, wear and tear on an analog master — and the attendant high-frequency losses — can be reduced significantly if the master is a digitally-encoded videocassette.
A quick poll of the panel members produced the following rundown of master-tape formats received by their companies:
- Without exception, Capitol will make a digital copy of the original two-track master (either analog or digital) for use in preparing the loop-bin master.
- At MCA Whitney, 50% of country releases are mastered from JVC VP-900 digital format, and the remainder from analog masters; on pop-music titles around 35% of titles are received as a digital master. (Gene Wooley did point out, however, that in the future the company's duplicating facility plans to copy all tapes to digital prior to running-master preparation.)
- A&M's situation is similar to that of MCA, while both Warner Bros. and Windham Hill work exclusively with Sony PCM-1610 format masters.
Given the multitude of different digital formats, it was proposed by the panel that duplicating plants be prepared to handle PCM-1610, VP-900 and other digital-format masters.
Should duplication houses re-equalize or add dynamics processing to master tapes in an attempt to produce a better quality product, the panel was asked. "No," Outwater stressed, "because that opens up a whole can of worms. Instead of re-equalizing, we would go back to the record company and get a better master tape; we prefer to transfer flat." Wooley agreed: "We need to ensure a flat transfer of a good master — there are many slave transports involved [during the duplication process], and we cannot adjust them all to correct for deficiencies in the master tape." It was also agreed that during the duplication of older archive and re-issued material, master tapes should be left unaltered, unless the artist "agrees to 'update' the sound," Wooley offered.
A member of the audience asked how each of the panel members ensured that the quality of test cassettes produced by a duplication facility matched that of production runs? Bornstein explained
Dameon Higgins founded Delta Sounds and Video in 1976 after 10 years in broadcasting. This radio experience and his uncompromising audio standards quickly established Delta as a very successful recording studio and entertainment sound service in the Orange County/LA area. Although the company specialized in supplying complete custom sound programs and systems for school dance DJs and Discos, it wasn't long before Dameon found himself turning down a lot of tape duplication requests. The high sensitivity was not practical for "real time" duplicating, and the jobs that he "farmed out" to high speed duplicating companies often came back to hurt his image.
Eventually, because of missed profit opportunities and a frustrating lack of control over quality, Dameon decided to install his own high speed duplicating equipment. He looked carefully at every product on the market and finally selected the Telex 6120, seven-slave, 1/2 track cassette-to-cassette model. He knows that he can add on to his system as his business grows, but for now his 6120 can copy up to 280 C-30s in one hour, and is easily operated by one non-technical employee because of its computerized single touch operation, indexed or short tape warning lights and automatic master rewind. Dameon hasn't regretted his decision for one moment because he now has a thriving additional business of duplicating voice and DJ audition tapes, seminars and syndicated radio programs. Now he reports a zero reject rate and his quality image is under his control where it belongs.
For over twenty years now, Telex has been the choice of those who, like Dameon Higgins, are fussy about the quality of their duplicate tapes. To learn more about what the 6120 can do for you, write to Telex Communications, Inc., 9600 Aldrich Avenue South, Minneapolis, MN 55420. We'll send you complete specifications and production capabilities.
For quick information, call Toll Free 800-828-6107 or in Minnesota call (612) 387-5531.
that A&M carries out batch checking of product delivered to record stores, while Outwater said that Warner Bros. effects the same kind of quality checks with production cassettes, to make sure that consumer product sounds the same as the previously approved test cassettes. According to Richman, Capitol uses the burst of special test tones recorded on the loop-bin master to check the quality of XDR pancakes, and ensure the integrity of the cassette transfer process; in addition, the company checks between two and three percent of the final product after the loading of pancakes into shells. "We prefer to check during the duplication process," he explained, "so that we can catch the problem before the tape reaches the cassette shells."
Next came a presentation entitled "Masters versus Cassettes — How Do They Compare," by Dennis Staats, tape duplication manager at Dolby Laboratories, during which he outlined some of the problems faced during the high speed recording of music onto cassette tape. The 0.125-inch width of cassette tape, and resultant narrow track format, made it difficult to record short wavelengths, he pointed out, a problem that can be compounded by tape moving away from the heads at high duplicating speeds. Because of these and other factors, he offered, many duplication facilities felt that tape noise and distortion produced by the running or loop-bin master need not be considered, simply because all of the technical limitations of the duplication process can be attributed to the cassette medium.
This was untrue, he said, and proceeded to compare MOL (maximum output level), distortion and signal-to-noise figures for cassette formulations and the half-inch tape used typically for preparing loop-bin masters. At a 200 nW/m fluxivity level, a "typical" ferric-oxide cassette tape offers a noise performance of -52 dB, a 3% total harmonic distortion level of +5 dB, and a high-frequency MOL of -1 dB; the comparable figures for a half-inch, 7.5-ips bin master (referenced to 320 nW/m fluxivity) are -61 dB, +9 dB and +2 dB. Given the 6-dB increase in noise during the cassette recording stage, Staats considered that a 10-dB noise advantage should be offered by the half-inch master over the cassette formulation — which was barely the case with a 7.5-ips master.
Moving on to consider the figures for a 3.75-ips half-inch master — the more popular format for duplicating facilities — the SNR, 3% THD and HF MOL drop to -58 dB, +8 dB and +2 dB, respectively. As could readily be appreciated, the noise and THD performance of such a bin master are pretty close to those of cassette tape. Consequently, Staats explained, Dolby had come to the conclusion that its HX (Headroom eXtension) process would be of particular use during the preparation of a 3.75-ips loop-bin master, to reduce noise and distortion.
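Staats' margin argument can be illustrated by power-summing the bin-master and cassette noise floors as uncorrelated sources: with only a modest noise advantage, the running master measurably degrades the combined floor. A sketch (the helper function is mine, not from the seminar):

```python
import math

def combined_noise_db(*floors_db):
    """Power-sum several uncorrelated noise floors given in dB."""
    total_power = sum(10 ** (db / 10) for db in floors_db)
    return 10 * math.log10(total_power)

cassette = -52.0      # "typical" ferric cassette noise floor
master_75 = -61.0     # half-inch bin master at 7.5 ips
master_375 = -58.0    # half-inch bin master at 3.75 ips

print(f"cassette + 7.5-ips master:  {combined_noise_db(cassette, master_75):.2f} dB")
print(f"cassette + 3.75-ips master: {combined_noise_db(cassette, master_375):.2f} dB")
```

The 7.5-ips master degrades the combined floor by about 0.5 dB and the 3.75-ips master by about 1 dB, which is why HX-style headroom extension is attractive at the lower speed.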
He also added that the transfer of audio program from the two-track digital or analog master to the 3.75-ips running master was one that needs to be made with care, particularly for electronic and synthesizer-based material, for which the average-to-peak ratio may be in the region of 15 dB on digital masters. One way around the problem, he suggested, would be to specify that meters with fast rise times and peak-reading capability be used during the preparation of loop-bin masters.
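The metering point can be made concrete with a crest-factor (peak-to-RMS) calculation: an average-reading meter under-reads peaks by the crest factor, which for sparse synthesizer material can reach the 15-dB region mentioned above. A sketch with illustrative sample data:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: roughly the amount by which an
    average-reading meter under-reads the true peak level."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A steady sine has a crest factor of about 3 dB ...
sine = [math.sin(2 * math.pi * i / 100) for i in range(100)]
print(f"sine:  {crest_factor_db(sine):.2f} dB")

# ... while a sparse, spiky burst (synth-like) reads far higher,
# which is why fast peak-reading meters are recommended.
spiky = [0.05] * 99 + [1.0]
print(f"spiky: {crest_factor_db(spiky):.2f} dB")
```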
The second panel session of the day, entitled "The Technology of Cassette Masters," was moderated by Jim Roe, director of engineering at WEA Manufacturing, and included panelists Dennis Drake, studio manager/chief engineer of the Polygram Tape Facility; Pat Weber, technical director of MCA Whitney Recording; Paul West, director of national quality control and EMI-America studio operations; Kent Smithiger, national quality control manager, ElectroSound Group; Fred Layn, northwestern regional manager, Studer Revox of America; and Larry Schnapf, director of recording operations and facilities for RCA Records. Subtitled "A discussion about how to produce the best possible running master," the session addressed the question of whether a facility should utilize 7.5- or 3.75-ips loop-bin masters. According to West, the higher speed makes it "difficult to handle long lengths of tape," and leads to tape storage problems at a 240-ips loop-bin speed. Also, he offered that a master may not last as long at higher duplication speeds. "To me," he concluded, "[Dolby] HX holds the key to producing masters at 3.75 ips."
"The quality of 3.75-ips masters is sufficient for most material," opined Smithiger. "Seven-and-a-half ips may only be needed for classical releases, or for material with a wide dynamic range."
Asked to outline their company's policy towards the format and production techniques for loop-bin masters, the panelists summarized these as follows:
- Polygram utilizes PCM-1610-format digital masters and a system of tone burst (similar to the Capitol XDR technique) to provide an objective quality assessment at any stage during cassette duplication.
- MCA uses test tones and calibration tapes to verify the quality of a one-inch, 3.75-ips running master, and sets recording levels as high as possible before tape saturation.
- EMI uses a 1610-format master to produce a one-inch, 3.75-ips running master without additional equalization, limiting, etc. Also, test and alignment-level tones are added to allow a mastering facility to check azimuth, etc.
- ElectroSound uses fast-reading peak meters during the production of running masters.
- RCA insists that an LP accompany the master tape, so that the duplicating facility can check for EQ differences, etc. If there are any differences, the tape
Which of these microphones looks "flattest" to you?
The flat response microphone is the one in the center. It's the new 4004 studio microphone from Bruel & Kjaer.
A significant contributor to its flat amplitude characteristic and uniform phase response is its small size.
All other factors being equal, the smallest microphone will have the best response, because its envelope causes the least obstruction into the sound field. If you'd like to see curves that show how flat a professional microphone can be, request our literature. If you'd like to hear how flat a professional microphone can sound, call your B&K field applications engineer for a demonstration in your space.
Bruel & Kjaer Instruments, Inc.
185 Forest Street, Marlborough, Massachusetts 01752 • (617) 481-7000 • TWX: 710-347-1187
Regional offices: NJ-201-526-2284; MD-301-948-0494; GA-404-951-0115; MI-313-522-9600; IL-312-358-7582; TX-713-645-0167; CA-714-979-8066; CA-415-574-8100
World Headquarters: Naerum, Denmark. Sales and service in principal US cities and 55 countries around the world.
For additional information circle #123
will be rejected. Currently, the ratio of analog to digital master tapes used to produce the 3.75-ips, one-inch running master at RCA duplication facilities is 4:1.
- WEA uses a half-inch, 3.75-ips running master recorded with Dolby HX Pro circuitry.
Addressing the question of the projected lifetime of a loop-bin master, West related that EMI, through the use of test tones, has established 3,400 passes as the maximum life for the running master. In quantitative terms, he looks for a 1 dB dropoff at 16 kHz — assuming, of course, that there are no mechanical problems with the tape, including shedding and physical damage. According to Smithiger, ElectroSound manages to obtain around 2,000 passes from a master, "although the tape is usually destroyed before that time! Usually it fails because of tape stiffness, and operator error." The problem, West confided, was "finding a tape that works well at 3.75 ips, but that will also hold up at the 120-ips duplicating speed."
If the production of a running master can cause so many problems, a member of the audience asked, should it be mastered at a central location, rather than at the duplication plant? "Yes," Weber offered, "because it helps quality control, and also enables an artist to be present during the [transfer] process." EMI/Capitol's mastering studio is located in Hollywood, West advised, "so that technical quality control is tightly monitored, and we have ready access to the artists." Drake conceded that a central facility made sense, but that "the acoustics and accuracy of the listening room was also very important — they should not 'colour' the sound." The majority of the panelists were in general agreement with West and Drake's comments.
The second presentation of the day, "Mastering for the Media," comprised a discussion of the types of tape formulations that satisfy the demands of cassette duplication. The speakers were Dr. Andreas Merkel, technical training and customer support, Agfa-Gevaert, Warren Simmons, manager, professional audio products, Ampex Magnetic Tape Division, and Walter Derendorf, applications engineer, audio products, BASF.
Merkel began his presentation by comparing the 3% MOL and dynamic range of the original master tape, safety copy, 3.75-ips running master, running-master copy, and the final duplicated cassettes, from which he concluded that cassette-tape noise is the limiting factor in the transfer process, and that noise on the running master is "immaterial." (In fairness, he also contemplated the use of one-inch rather than half-inch tape for the 3.75-ips running master. While azimuth loss and phase shift are more critical for one-inch masters because of the increased track width, head azimuth can be adjusted more easily with the wider tape.)
To improve the signal-to-noise ratio of a 3.75-ips running master, Merkel proposed that a 3180 plus 140 microsecond record/replay equalization curve be adopted — rather than the current 3180+90 — a change that would enhance the SNR figures by 3.5 dB.
Derendorf described the thinking behind the development of BASF's new 902 chrome tape for running masters which, at 3.75 ips, offers an MOL of 67.5 dB at 315 Hz, and a high-frequency dynamic range of 59.5 dB at 12.5 kHz. Since the new chrome formulation requires an additional 3 dB of bias — and also produces an additional 10 dB of HF headroom — Derendorf foresaw problems in providing sufficient bias.
STOP PRESS: MARSHALL ELECTRONIC RE-APPOINTED U.S. DISTRIBUTOR FOR QUANTEC ROOM SIMULATOR
The December 1984 issue of R-e/p contained a review by Bob Hodas of the Quantec Room Simulator as modified and enhanced by the company's then U.S. distributor, Marshall Electronic. Shortly after the review was published, however, the Marshall-modified QRS, which featured hand-selected analog components, hand-trimmed DAC (digital-to-analog convertor) offsets, and several other proprietary enhancements, became temporarily unavailable.
As this issue was being prepared for the printers, Quantec and Marshall announced that Europa Technology is no longer serving as the QRS distributor for the U.S. As a result, the original, customized units again will be available exclusively from Marshall Electronic, Box 438, Brooklandville, MD 21022. (301) 484-2220 — Editor.
and record-level gain for conventional transports used to prepare the 3.75-ips running master. He did explain, however, that BASF has achieved up to 4,000 passes from the new formulation.
Simmons began his presentation with an overview of the parameters of an "ideal" loop-bin master tape: it never sheds, never tangles, presents no change in electrical performance during production runs, lasts for 100,000 passes, and is low in cost. Coming back to reality, however, he summarized the cost versus quality trade-offs as follows:
- Low tangle factor, which necessitates the use of a high film-base thickness and a high conductivity backcoating;
- Low dropout activity, which can be improved by surface cleaning and correct slitting during manufacture, and the use of a tough binder system;
- Good short-wavelength performance, which requires the use of high-performance oxide coatings and/or a high level of oxide-surface gloss; and
- High signal-to-noise ratio and low electrical distortion, which can be achieved by using high quality oxides.
All of these parameters, he concluded, were available from Ampex 456 Grand Master. An added bonus, he related, was the fact that "limited experience [has shown] 456 will perform satisfactorily at 480-ips duplication speeds, because of its tough binder system."
During the seminar Richard Clark of American Multimedia invited attendees to a demonstration of the company's new bin-loop system, which allows 64:1 duplication while running a 7.5-ips master. According to Bob Farrow, chief engineer, the weakest point in cassette duplication is tension control in the high-speed playback of a running master. American Multimedia has modified an ElectroSound Model 8000 transport, including the addition of two vacuum servo systems to improve tape tension and handling. A vacuum transducer monitors the tape, and automatically adjusts the tension while keeping the tape correctly positioned. In order to show that the process accurately duplicates original material, a 7.5-ips master was made from a Compact Disc. After running in a 480-ips loop-bin feeding a slave, the resultant cassette copy was compared to the original. Several members of the audience were astonished at the remarkable similarity between the original and duplicated versions.
Highlights of the second day's proceedings included a presentation on tape performance by Klaus Goetz, manager of applications for BASF. A panel session entitled "Tape Specifications — What are the Choices?" featured moderator Joe Kempler, director of technical marketing for Capitol Magnetic Products, and panel members Dr. Andreas Merkel of Agfa-Gevaert; Walter Derendorf of BASF; Helge (Kris) Kristensen, senior staff engineer, R&D Test and Evaluation, Ampex Magnetic Tape Division; John Hudson, technical services manager, Dupont; David Mills, director of marketing and sales, Pfizer; Donald Wingquist, national sales manager, Recording Products, Hercules; and Frank Diaz, technical director, Columbia Magnetics.
A panel discussion entitled "C-Os: What are the Choices?" touched on some of the questions confronting the C-O tests being conducted by the International Tape Disc Association. Moderator Sam Burger, senior VP for CBS Records, and Henry Brief of the ITA explained that the tests being carried out in cooperation with 10 manufacturers and six duplicators are being made in an attempt to reduce the angle error in shells, and thereby improve the quality of the pre-recorded cassette. The manufacturers involved in the ITA testing are Athenia Industries, Data Packaging Corp., Filam National Plastics, ICM, IPS, Lenco, Magnetic Media, Rainbow Manufacturing, Shape and TransAm Industries, while the duplicators are Capitol Records, Cassette Productions, MCA Manufacturing, CBS Records, RCA Records, and WEA Manufacturing.
The final session of day two was a panel discussion on consumer expectations, and what can be done to improve the overall quality of the finished pro-
... continued on page 224 ...
YOUR WORLD
© 1983 3M Co. "Scotch" is a registered trademark of 3M. Photographed at Soundworks Digital Audio/Video Studios, Ltd., NYC.
For you, it’s the sixth session of the day. For them, it’s the biggest session of the year. So you push yourself and your board one more time. To find the perfect mix between four singers, 14 musicians, and at least as many opinions. To get all the music you heard on to the one thing they’ll keep. The tape.
We know that the tape is the one constant you have to be able to count on. So we make mastering tapes of truly world-class quality. Like Scotch 226, a mix of Scotch virtuosity and the versatility to meet your many mastering needs—music, voices, effects. And Scotch 250—with the greatest dynamic range and lowest noise of any tape, it is simply the best music mastering tape in the world. Both offer a clearer, cleaner sound than any other tape. Getting you closer to your original source. Plus, they’re both backed by our own engineers a call away. They are just two of the tapes that make us...number one in the world of the pro.
Our Tape
Masters of Clarity.
Scotch
Audio & Video Tapes
NUMBER ONE IN THE WORLD OF THE PRO
From his innovative sessions with Chic, Nile Rodgers has moved on to become one of the most prolific and successful producer/musicians around today. Over the last several years he has contributed his unique production style — infusing solid rhythm elements with strong lead and solo identities — to sessions with Sister Sledge (When the Boys Meet the Girls), Madonna (Like A Virgin), David Bowie (Let’s Dance), Jeff Beck (several tracks from Flash), Mick Jagger (three tracks from She’s The Boss), and his recent solo album, B-Movie Matinee. During that period, he has also found time to produce singles by Diana Ross (“Upside Down”), Duran Duran (“Wild Boys”), as well as several film soundtracks, including music for the upcoming movie White Nights.
Re/p caught up with the busy producer during remix dates for the new Thompson Twins album, Here’s to Future Days, working with first-call session engineer James Farber at Skyline Studios, Rodgers’ new base of operations. Following the remix session, he was scheduled to begin a new album with Sheena Easton, followed by possible sessions with Philip Bailey and Duran Duran.
Re/p: You have an enviable reputation as a session musician as well as for your innovative work with Bernard Edwards and Chic. How did you first become involved with the production side of making records?
Nile Rodgers: Actually, it was out of necessity. When Bernard and I first put Chic together, we were concerned with being respected as players primarily, especially because it was right at the high point of the “Fusion Era” — so of course our role models were musicians. The problem was that any time we would go into the studio to make a record, because of the way we played, the producers we were working with would try and bring out that Fusion Element. Whenever we went into the studio, invariably we would come off sounding like “young John McLaughlins,” or “young Chick Coreas.” [Laughter] I was studying with a number of very good jazz and classical guitar players, so we were concerned with musicianship.
Unfortunately, we wanted to make a commercial pop record. As soon as we would start practicing, we would over-interpret our own songs, but we
FROM HAIRPIN TURNS TO STRAIGHTAWAYS, THE SPEED OF SOUND HAS NEVER BEEN SO SMOOTH.
For years, sloppy tape transportation and handling have made the audio engineer's day much harder than it had to be.
This tormenting state has come to an end with the introduction of Sony's APR-5000 2-track analog recorder, available in a center-track time code version.
The APR-5000's precise handling and numerous advanced features make the audio engineer's day run much smoother. For example, the APR-5000's 16-bit microprocessor manages audio alignment with a precision that's humanly impossible. And the additional 8-bit microprocessor opens the way for extremely sophisticated serial communications. In tandem, they reach a truly unique level of intelligence.
Not only does the APR-5000 do its job well; it does it consistently. The die-cast deck plate and Sony's longstanding commitment to quality control help ensure that the APR-5000 will hardly need time off.
All of which results in a consistent sonic performance that'll stand even the most critical audio professionals on their ears.
For a demonstration of the recorder that transports analog audio to a new fidelity high, contact your nearest Sony office:
Eastern Region (201) 368-5185;
Southern Region (615) 883-8140;
Central Region (312) 773-6000;
Western Region (213) 639-5370;
Headquarters (201) 930-6145.
SONY
Professional Audio
© 1985 Sony Corp of America. Sony is a registered trademark of Sony Corp.
We give you the kind of information about professional audio products you won't find in the brochures.
Hands-on experience. First-hand information. Our sales staff does more than quote brochures. Every one of us has extensive hands-on experience with the equipment we sell. And even with equipment we do not sell. We can answer your questions about the differences and provide you with comparison information you can get nowhere else.
More Professional Brands
We sell more brands of professional consoles and tape machines than any other dealer in the western United States.
Call us for a list of the more than 200 brands of professional audio equipment we sell.
Equipment Sales
Factory-Trained Service Technicians
Studio Design
EVERYTHING AUDIO
16055 Ventura Blvd., Suite 1001
Encino, California 91436
Phone (818) 995-4175 or (213) 276-1414
were just doing it for fun! The producers would pick up on that free-form playing, and take us in that "Fusion Direction," because fusion was happening, and they thought that was what we wanted to do. So we made a number of records and demos that were over-played and over-produced — Man, just a flurry of notes! It was just crazy! When we listen to it now we like it, because it was good stuff, but it was far from commercial!
Finally, Bernard went into the studio to do a disco song for a television show. They needed a B-side, so Bernard started jamming on the bass line that eventually became "Dance, Dance, Dance." They liked what he did so much that they said, "Man, this guy has really got something; we just ask him to play a groove, and he came up with something that we've never heard before."
R-e/p (Mel Lambert): Bernard set the whole character of the song with just the bass line?
Nile Rodgers: Right. As it came down, they thought that if Bernard had that much talent in a primal sort of way, I might be able to come up with something a little bit more "intellectual." They called me up and said, "Bernard did this on bass; what would you do on guitar?" I thought: "Now hang on, wait a minute. They're gonna get Bernard and me to write this whole song, and produce it, but meanwhile we won't get any of the credit!" So I called Bernard, and he came over to my house. "Let me hear what you worked on," I said, and he played me the tape. "Why don't we do it together?" I asked. But he and I were already in a band; we had Chic, but we didn't call it that back then. So we worked on the song and wrote "Dance, Dance, Dance." It was the first song that he and I had written together, because prior to that, I had written all the songs — they were much more jazz-oriented tunes, with thousands of chord changes and things like that. So the very first thing we wrote together became a monster hit.
R-e/p: So the production role was more a matter of being responsible for the finished sound? How to set up the instrumentation, the playing; how to record it; and how to blend the elements together in the mix.
NR: Yes. We sort of lucked out on those early Chic singles, and kept doing more and more. We started working in a friend of mine's recording studio — actually, he was just the maintenance engineer there. When the studio closed at two o'clock in the morning, we would sneak in after hours. We recorded the essence of our first album at that studio.
After we did "Dance, Dance, Dance" — with a full production — the record company said, "Well, what else do you have?" And boy, did we have a lot of other material! We said, "Glad you asked that question; listen to this." And we had hit record after hit record.
R-e/p: You have been reported as saying that, in both a playing and a production sense, you are "comfortable with any musical situation." Has that universality helped you to move into producing other, diverse styles beyond what was your "personal music" with Chic? There are obvious and distinct differences in the musical directions and styles of, say, Sheena Easton, Mick Jagger, Jeff Beck and Madonna.
NR: Absolutely. For example, imagine — and this is purely hypothetical — if Mick Jagger had said to me: "I'd like to do a country and western track." What am I supposed to say: "Hey Mick, I can't hear it?" If I'm producing his album, I'd have to do it, and try to make it sound the best it could possibly be. You have to have some knowledge of all musical styles, even if it's only as a fan, because then you'll know how to achieve the end result. I have a very varied musical background: I played folk music for years; I played classical music for years; and I played jazz for years — and still do. I just never made records in those styles.
R-e/p: So you feel that it is important for a producer to have that overview during a session?
NR: No, it's not necessarily important for everyone; it's just important for me.
Having a diverse musical background has definitely helped me. That's not to say that producers who only produce Heavy Metal, for instance, are bad — they have that down. For me, unfortunately or fortunately, depending on how you look at it, because I'm such a "workaholic" I'd be very, very frustrated just working in one musical genre. A lot of the work I do never sees the light of day, but that's what fuels my fire — it keeps me interested in this whole musical lifestyle. I love this job; it's a great way to make a living!
R-e/p: I notice from the album credits of your recent productions that you also tend to play on a lot of the sessions. Do you try and keep your chops alive through the musical contributions you make in the studio?
NR: Yes. I play on every record I produce. On the Thompson Twins sessions [Here's to Future Days] I played guitar. In fact, I played more bass on the Sister Sledge sessions [When the Boys Meet the Girls] than any other album I've done, apart from my own solo album.
I started playing keyboards on the Madonna album [Like a Virgin] and she says: "Nile, if you're playing it right why are you calling these other people?" I suppose that I just never had the confidence to think that I could come up with a part, and then play it as well as these session guys. I'd always figure out the parts, and tell them what to do . . . well not always, because I work with excellent keyboard players. In certain situations when you're trying to interpret another person's compositions, you know there are certain elements they want to bring out and the players aren't hearing it. You just say: "Wait a minute, it just goes like this; can't you hear it? It's so simple." So Madonna said, "Well Nile, if you hear it why don't you just play it. It takes five minutes for you to play it, and it takes an hour and a half for you to explain it!" I said, "You're right; but I
"I love putting pressure on myself, and consequently on the artist too, because whenever a person has to [reach] for a note, it translates to the listeners as emotion."
CREATIVITY IN THE STUDIO:
A CONVERSATION WITH NILE RODGERS’
SESSION ENGINEER JAMES FARBER, AND SKYLINE
SECOND ENGINEER SCOTT ANSELL
Since relocating his base of operations from The Power Station to Skyline Studios, New York, Nile Rodgers has been working almost exclusively with session engineer James Farber who, prior to joining Rodgers’ staff, was employed as a staff engineer at Power Station. What was it like to engineer for such a creative producer, we queried?
“It’s great,” Farber enthuses. “After maybe nine months or so of working together, we’re at the point now where we do not have to say things to each other anymore. I might turn around and look at him and say, ‘Oh, I’ve got a great idea how to do this.’ And he’ll say: ‘Yeah, I hear. You just do it.’ Nile knows exactly what I’m thinking, which is great. He is one of the most talented guys in the business, and it’s a pleasure to be working with him.
“He gives me a lot of freedom. When things feel good to him he knows it. But when things are not feeling right, he’ll say, ‘Tell me what that track needs.’ He won’t say ‘Add more 10-thousand cycles!’ [Laughter] Or he might ask for it to be made a little brighter, or that the drum sound needs to be more ‘aggressive,’ and I know exactly what he means. He’s not ambiguous; he doesn’t use words like ‘make it more blue or orange!’
“As far as mixing goes, he lets me get the balance to the point where it’s getting towards being finished, and then gives me his input. He usually won’t say too much about the mix he wants before I start, unless there’s something he had in mind that we hadn’t discussed. Usually when we overdub we have a good monitor mix going. I spend some time getting the monitor mix right, and we get a lot of the ideas from that while we’re overdubbing. I always put effects on the monitors to try and make it sound as much like the finished record as possible when we’re overdubbing, so that the new part will fit in. I also send the monitor mix to the musicians’ headphones, rather than set up separate foldback busses. I haven’t set up a separate cue mix in quite a while; one thing I never get complaints about is the headphone mix. Although sometimes when you’re doing a huge session you need to have two different headphone mixes, so I’ll set them up. But even for basics, 99 out of 100 times I’ll use just one cue mix.”
From what Nile Rodgers has to say in the main interview, it’s very obvious that he is a firm convert to digital recording and mastering. What did Farber consider to be the operational and sonic advantages of working with digital?
“Just the fact that if you got the sound right at the time you recorded it, you don’t have to re-equalize it and add a little treble when you come to the mix — the sound stays exactly the same once it’s been recorded on the tape. Whereas if you keep using the same piece of analog tape, by the time you start mixing, the sound will be a little different. Plus you get tape compression — or whatever else happens when you hit analog tape with a hot signal — noise, wow and flutter, and distortion — it’s very different when it comes back off analog.
“The main advantage to digital is that there’s no noise, plus the fact that there’s no such thing as a punch you cannot do. With a Sony [PCM-3324] you can rehearse a punch-in and then program it. The machine also crossfades when it punches.
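[Editor's Note: The crossfaded punch-in Farber describes can be sketched in a few lines. The following Python fragment is purely illustrative of the general technique (a linear ramp between the old and new takes) and makes no claim about the PCM-3324's actual fade shape or fade length:]

```python
# Illustrative sketch of a crossfaded punch-in: rather than butt-splicing
# (which can produce an audible click), the machine ramps from the old
# take to the new one over a short fade. Linear ramp assumed here.

def punch_in(old_take, new_take, punch_sample, fade_len):
    """Replace old_take from punch_sample onward with new_take,
    crossfading over fade_len samples."""
    out = list(old_take)
    for i in range(fade_len):
        t = i / fade_len                 # ramp from 0.0 toward 1.0
        idx = punch_sample + i
        out[idx] = (1.0 - t) * old_take[idx] + t * new_take[idx]
    # After the fade, the new take is used outright
    out[punch_sample + fade_len:] = new_take[punch_sample + fade_len:]
    return out

old = [1.0] * 10   # a steady "old" signal
new = [0.0] * 10   # a silent "new" take
mixed = punch_in(old, new, punch_sample=2, fade_len=4)
print(mixed)  # samples 2-5 ramp down from 1.0, then all 0.0
```

[The ramp length is the design knob: a longer fade suppresses clicks more thoroughly, at the cost of the new take asserting itself more slowly.]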
“On this recent Thompson Twins album, we bounced up the basic tracks that were recorded on a 3M 32-track to a pair of synchronized 3324s, and ended up with a lot of spare tracks with the 48-track system. The band members would go out into the studio and sing one at a time. Maybe Tom [Bailey] would do four tracks of the melody, and then four tracks
Nile Rodgers with first-call session engineer James Farber (left), and Skyline second engineer Scott Ansell with track assignment charts and SSL Total Recall automation floppy disks atop Skyline’s SL4000 console.
always felt weird about doing that.”
I’ve always had a talent for getting a sound out of any musical instrument. When I was very, very young I was able to mathematically figure out the patterns. I could play a clarinet right away, a saxophone, even a trumpet.
R-e/p: I’m almost tempted to say that your “instrument” is now the recording studio, because there are so many recording and production techniques you can use to extend your musical talents. The word “producer” doesn’t get it for me anymore, simply because you are also a player, arranger, synthesist, and a whole lot more besides.
NR: Yeah! It’s interesting that now we really can use the entire universe to produce sounds for a record. I have a guy working for me who basically goes around the city sampling sounds; for example, he goes down to the Brooklyn Bridge and samples the cars with different miking positions. We’ve also experimented with a lot of water techniques; recording things over water and under water. We don’t have as much time as I’d like to have for experimentation, because I’m usually committed to a recording project. I’m going to take off about six months next year, just to do a lot more exploratory work.
I have all these sounds sampled into a [Sony] PCM-F1, which we use in my [New England Digital] Synclavier [digital synthesizer], or to fly into a track. Kevin, the guy who works for me, said that “we’ve compiled such an extensive sample library that we could sell it!” We’ve had offers for about 10 grand, just for our tapes. They don’t seem that great to me, but when we start to analyze them — the full range of different grand pianos and other instruments; hybrid sounds that we sort of made ourselves — they’re incredible. You’ll hear a great example of this on the new Thompson Twins album [Here’s to Future Days]. There is this really strange percussion instrument that looks like a drum set, but is all metal; it looks like a metal sculpture that resembles a drum kit. We came up with the most fantastic sound on the John Lennon song, “Revolution.” We also used it at Live Aid. Now that sound is in my library.
R-e/p: There are a lot of original sounds that Jan Hammer programmed into the Fairlight CMI for the song “Escape”, one of the tracks you co-produced for the Jeff Beck album, Flash. And, like Jan’s underscores for . . . continued overleaf
The choice of those who can choose.
Lexicon 224XL digital reverberator, room simulator and effects processor.
CREATIVITY IN THE STUDIO — continued...
of his harmony. Then the other members would go out and add different harmonies; somebody would double the melody. We ended up with a lot of vocal tracks, and would bounce them down into two digital tracks. We would do that on one chorus, and then use the [New England Digital] Synclavier to 'fly' them into the rest of the song where we needed them. Sometimes we had as many as 20 tracks of vocals that we comped together. Sometimes we would treat the backgrounds a little differently by piping them out to the studio through a couple of those tiny, self-powered Radio Shack speakers, and miking them to pick up some room ambience."
For the Thompson Twins sessions, the pair of synchronized PCM-3324 multitracks were linked via timecode tracks and a pair of RM-3310 remote control units, which make the pair behave as a single transport. Skyline's engineering staff has also managed to link the studio's SSL console automation system to the remote control units, so that transport functions can be operated from either the mixing position, or from the 3310s. Has that helped speed up multitrack operations, we asked?
"Yes, because I can now run the machines from either place," Farber acknowledges. "Although the button layout on the SSL [master multitrack control panel] is different from the Sony remote, I'm used to punching left-handed, so I use the 3310 [which is located to the left of the console], which also has the rehearse function. So I always use the remote for punching, and also in the early mixing stages, because it's faster for the two digital machines to locate [the slave to match its position to that of the master] via their control tracks than it is for them to go by timecode."
At which point in our conversation we were joined by Skyline engineer, Scott Ansell, who was acting as second engineer/studio liaison on the Thompson Twins sessions. For an alternate viewpoint of working in the studio with Nile Rodgers and James Farber, we asked Ansell for the second engineer's perspective on "life in the control room."
"Ever since Nile came into Skyline about four months ago," Ansell confides, "I've really been the only person from the studio working with him, because he has a lockout from Monday thru Friday. My job is to look after the documentation for the [SSL] Total Recall automation, and to make notes of every piece of outboard that we might use, and how it's patched, to make sure that the whole mix is completely documented.
"That's one of the great things about James: he's got it really organized on this project. As well as the settings of every console knob and button that's stored onto floppy disk by the Total Recall, we have this system of noting what's coming in on each module and where it's patched from; all the subgroups; the sends and returns; and all the outboard-gear settings. We have these sheets [Ansell holds up several front-panel layouts of effects units that were generated by Skyline staff on an Apple Macintosh] so we can document every setting we use. We've got [Mac-generated drawings of] the Quantec QRS, AMS [DMX-1580], [Eventide] Harmonizers, Lexicon Prime Time and PCM-41, Marshall Time Modulator and Tape Eliminator, and many more. One thing the SSL doesn't memorize [under Total Recall] is the master section, so we have a separate sheet for making notes of the master-send settings, etc.
"We are also using an outboard Harrison 'consolette' [fitted with a pair of mono and six stereo PRO-7 modules] on the Twins' remix session, because the SSL [SL4000E 56-input mainframe with 40 4000-series and six 6000-series modules] doesn't have enough inputs. We are using the Harrison for returns from the various digital reverbs and delays, which are bussed across to the SSL's stereo returns. All those settings also have to be written up.
"This is the first project where I've actually been keeping the paperwork as we go along, instead of going through the patch bay and figuring out everything at four in the morning when the mix is finally done! Before we got the SSL we used to take Polaroids of the board; Total Recall works great."
How would you summarize Nile Rodgers' production technique? "It's really unique compared to a lot of the other producers that I've worked with. He's a musician; he concentrates on getting the track and the arrangement right. He's not in the control room watching every move saying, 'Well, I think you should try 3K instead of 4K.' He'll say, 'I need more "crack" on the snare drum.' When it gets down to mixing, he leaves and lets James and me do our thing in here. He comes back every few hours and gives us a little direction. You should see him when we're overdubbing — he's the easiest going guy to work with but, when he gets on an idea and he's moving, he wants to work!
"The first time I worked with him, Nile Rodgers was sitting on a stool playing guitar; I'm ready to run tape, and we're getting into it. I had an idea for the piece and opened my mouth to say something to him. Assistants are always taught not to say a word on a session! You are supposed to feel out the vibe, and see what it's like as far as offering suggestions goes. Nile was looking across, and going like, 'What, man?' So I said: 'I had this idea for a part.' 'Well, what is it? I'm always open to suggestions.' I suggested the part, and he dug it. There is a very open atmosphere for making suggestions and, being a second, you feel really good. I'm young and that's the thing I have to keep thinking about. James is an amazing engineer, but I'm very anxious to participate. You walk in with producers who don't have one hit record, who are doing demos all their lives and they treat you like you're the lowest of the low. Nile and James are very open to suggestions."
Nile Rodgers
Miami Vice, the track is mixed real hot!
NR: Jan is fabulous on that track; it's one of my favorite songs on the entire album.
The interesting thing about that track is that we also have to give credit to the engineer, Jason Corsaro. Jason was so in love with those sounds that it was really his idea to mix them at those drastic levels. As a matter of fact, that track became our test tape for checking out studios all over the world. You don't know how many studios we've gone to with that mix, and almost blown the speakers off the wall. Fried the UREIs!
R-e/p: Let's move on to talking about the studio environment. You've worked at a lot of different facilities, including Power Station, Atlantic, Media Sound, Hit Factory, Skyline and Record Plant here in New York, as well as Maison Rouge in London with Duran Duran. What attracts you to a particular studio?
NR: Basically, I think that any studio that's good has to have a good-quality console. I just happen to prefer the SSL now. I like to use a Neve to lay the tracks, and an SSL for mixing — although I don't mind laying tracks with an SSL. And a Sony PCM-3324 [digital multitrack]. Of course, the echo that a studio provides is also important to me — especially after being at Power Station for all these years, and growing accustomed to their live chamber. We've successfully recreated a live chamber here [at Skyline Studios, Rodgers' new base of operations] from the little gallery in front of the elevator. Unfortunately, you can only use it at night. The sound is incredibly smooth; the decay curve in there feels a lot better than what we enjoyed at the Power Station. I shouldn't say a lot better, because Chamber #2 was pretty amazing, but we've compared mixes and it sounds fine.
...continued on page 51...
JVC Digital Audio.
The artist's editing system.
Digital audio editing takes on new speed, simplicity, and flexibility with JVC's 900 Mastering System. Anyone with a trained ear can learn to operate it in minutes and be assured of professional results of outstanding fidelity, accuracy, and clarity. And while sonic excellence is surely the 900's most persuasive feature, flexibility runs a close second; for not only will the 900 operate with 3/4" VCRs, but with VHS cassettes, too, with total safety and confidence, making it ideal for mastering digital audio discs and the increasingly popular hi-fi video discs.
The DAS-900 consists of four principal components.
**VP-900 Digital Audio Processor.**
Two-channel pulse count mode processor. Several 16-bit microprocessors make it compatible with other professional production equipment such as cutting lathes, synchronizers, and encoders. Dynamic range of more than 90 dB, frequency response from 10 to 20,000 Hz (± 0.5 dB), and low recording bit rate of 3.087 Mbits/s at 44.1 kHz. Transformer-less analog I/O circuits further improve sound quality, and the analog-to-digital, digital-to-analog converter reduces distortion to less than 0.02 per cent, while an emphasis circuit improves signal-to-noise ratio. Logic circuit uses CMOS LSI chips for high reliability, compactness, light weight (48.6 lbs) and low power consumption.
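[Editor's Note: A quick back-of-envelope check relates the quoted figures. Two channels of 16-bit audio at 44.1 kHz give a raw payload of about 1.41 Mbit/s; the gap up to the quoted 3.087 Mbit/s recording rate is, we assume, error-correction and formatting overhead, since the spec sheet gives no breakdown:]

```python
# Illustrative arithmetic only: relating the VP-900's quoted sample rate
# and word length to its quoted recording bit rate.
sample_rate = 44_100      # Hz, as quoted
bits_per_sample = 16
channels = 2

raw_rate = sample_rate * bits_per_sample * channels   # bits per second
print(raw_rate)            # raw PCM payload, about 1.41 Mbit/s

# The quoted recording rate is 3.087 Mbit/s; the extra bits over the raw
# payload presumably carry error correction and formatting (our reading
# of the spec, not a documented breakdown).
overhead_ratio = 3_087_000 / raw_rate
print(overhead_ratio)
```

[The recorded stream ends up a bit over twice the size of the audio itself, which is in line with the heavy redundancy early digital audio formats used to survive tape dropouts.]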
**AE-900V Digital Audio Editor.**
Simplicity itself to operate, this little number puts editing right in the hands of the artist, if need be. Precise to within microsecond accuracy, edit search can be carried out by manual cueing, automatic scan, or direct address. It will confirm cut-in, cut-out points independently by recalling signals stored in memory. Digital fade control for adjusting relative levels between original and master tape. Shift function for changing edit points backward or forward in 2-ms steps for super-fine adjustment. And variable-gradient cross-fading function for smooth continuity at the edit point, variable in 0, 10, 20, and 40 microsecond steps. Auto tape locate function enables the user to locate the desired address on the original tape, automatically.
**Audio Editor Control Unit.**
Electronic governor for routing, coordinating, and executing all edit functions, both automatic and manual. All commands, from digital dubbing of original to master for continuous programs, to repetitive point-to-point manual cueing are regulated here.
**TC-900V Time Code Unit.**
Actually two time code units in one, this unit reads and generates SMPTE standard time code and synchronizes the JVC exclusive BP (bi-parity) time code. Thus, the DAS-900 will operate effectively with both time codes; a necessity when the System is to be synchronized with video equipment.
For a demonstration of the DAS-900 Digital Audio System, a Spec Sheet, or JVC's complete catalogue, call toll-free
1-800-JVC-5825
©1985 JVC Company of America
For additional information circle #129
...and some people start a session without a Klark Teknik Reverb.
To find out how much potential you’d have at your fingertips with a DN780 Digital Reverberator/Processor contact your local dealer or Keith Worsley at Klark Teknik Electronics Inc., 262a Eastern Parkway, Farmingdale, N.Y. 11735, USA. Telephone East Coast (516) 249 3660. West Coast (415) 482 1800. Omnimedia Corporation Ltd., 3653 Côte de Liesse/Dorval, Québec H9P 1A3, Canada. Telephone (514) 636 9971.
For additional information circle #130
KLARK TEKNIK
A Decade of Commitment and Achievement
At the New York AES Convention in October 1975, Harrison introduced the first advanced inline audio console system.
During the past ten years, the Harrison product line has grown to sixteen application specific products for multitrack music, teleproduction, film post production, venue and broadcast.
We invite you to celebrate with us our tenth anniversary at the AES in New York October 13 thru 16, 1985 (Booth 733-740) and to be a part of the introduction of our most important product since 1975:
THE HARRISON SERIES X
HARRISON SYSTEMS, INC. • P.O. BOX 2964, Nashville, Tennessee 37202 • (615) 834-1184 • Telex 555-53
For additional information circle #131
Nile Rodgers
R-e/p: You use the live chamber on vocals?
NR: Vocals and mainly drums — any instrument that's in need of a very resonant high-frequency sound, but still the natural reverb that we get in the room. It adds a nice dimension to the record. However, I do love all of the digital reverb units; I haven't found one yet that we can't find some use for, even the cheap units. Even effects devices that people find pretty inferior can give you some very dramatic sounds. That's one thing I learned through working with people like Peter Gabriel.
I just finished a song called "The Terminator," which I had written for the movie *The Terminator II*. We tried to recreate the sound of Arnold Schwarzenegger smashing through the police station wall. But we couldn't get it right no matter what we used. "Wait a minute guys," I said, "turn on the microphone that's on my guitar amp." I ran out in the studio, lifted up my guitar amp and dropped it on the ground with the screaming reverb. Bang! It was perfect. Now that sound is also in my effects library.
R-e/p: Would you describe yourself as a technical producer?
NR: Not at all. I sort of stumbled over the technology. I get into a few pieces of equipment, and feel compelled to learn about them. But basically I feel that my strongest point is as a musician, arranger, orchestrator, and player — I can just hear something and say, "I don't think it's good, but if we do this it could be better."
R-e/p: Do you have a regular session engineer who looks after the technical side of production?
NR: Yes. I work pretty exclusively now with James [Farber] who came here when I moved from The Power Station. I'm not an engineer at all; I'm only an engineer when I'm making demos!
Producer/musician Nile Rodgers in the studio with Laurie Anderson (above), Jeff Beck (center), Peter Gabriel, Adrian Belew and Rob Sabino (top right), and Madonna (lower right).
R-e/p: You don't get behind the console, apart from maybe gain riding a lead vocal line, or something like that?
NR: I do touch it, but I think James is more than qualified to handle that side of things. What I really love to do is to see what he might come up with. If he thinks of something that is working I won't say a word; if, on the other hand, I do think of something that could be better I will say something. Although I might touch knobs every now and then, James doesn't come over and sit down at the piano and play parts, even though he's a fantastic piano player! You don't want to distract a person, and I really don't like people who do that. If your engineer is working on something, and you don't know what's in their mind at that time, and you turn a knob and all of a sudden upset what they've been focusing on, it's totally unfair. In the same way, if I'm working on a guitar part and someone comes in and starts playing something else I'd get pissed; I feel that it must be the same way with an engineer.
I've sat behind the board and had someone else soloing other instruments trying to get a sound while I'm thinking about a specific sound — it's completely distracting! And I know a lot of people who do that; I don't think it's hip. They listen to the bass and say, "What if the bass was a little brighter?" Then they go over and start making it brighter, and meanwhile the engineer might be working on the drums.
I've been in the studio with a lot of my friends who are producers, and they do that all the time. I've learned to avoid doing that because, I guess, I've worked with great engineers for whom I have a tremendous amount of respect.
R-e/p: Do you communicate with James on a technical or on a musical level?
NR: The wonderful thing about James and I is that we seem to be made for each other, because he is an amazing musician; he's a phenomenal jazz piano player, and has played on our records already. So we have very good communication both musically and technically. We have an extraordinary affinity about other things outside of the studio too, and these things seem to help us with communication in the studio.
R-e/p: Why did you decide to move your base of operation from Power Station to Skyline Studios?
NR: When James and I were out in California working on the Sister Sledge album [*When the Boys Meet the Girls*], it was the first time I had a studio block-booked since I worked with Duran Duran in London. I said: "Man, this is the way to work." If we're doing a mix we know that the console isn't going to get torn down, and we know that there's no one in this studio who doesn't belong here. It's sort of uncomfortable if you're working with Mick Jagger, and meanwhile there are lots of people coming in and out, because the studio has three different rooms. Sometimes fans don't understand that because you're concentrating on a lead vocal you can't be jovial, friendly and wonderful.
Working here in the garment district, people aren't so keen to come over and stand about at two o'clock in the morning. Also, there's no one in this studio who doesn't belong here. Skyline is just better for me.
I thought that the people I work with might find Skyline somewhat uncomfortable, because it's not nearly as cosmetically comfortable as the Power Station, or some of the other nicer studios. But I think the one thing that's great is that this studio gets us back to the basics: The Record. When artists come in here they feel that they're in a real working situation. This is a studio: we don't live here; we don't party here — we work here. Also, not to sound like an ogre, because we have a wonderful time here, but it's an atmosphere that's very conducive to work.
[Editor's Note: It transpired from my conversations with Nile that he has a lock-out booking arrangement with Skyline Studios, whereby the producer has exclusive use of the facility Monday thru Friday, the weekends being open for outside clients. In addition, the producer now uses the studio as a permanent home for his personal-use Sony PCM-3324 digital multitrack, and also bases his production company at the studio — M1...]
R-e/p: I seem to recall that you first worked with the Sony PCM-3324 digital multitrack during production of some Peter Gabriel tracks for the film Gremlins. What was your initial reaction to recording on the 3324?
NR: To be honest with you, I was totally blown away! After recording for the last three or four years on the best equipment available, to hear the difference between the Sony and the Studer A800 — and man, I love A800s — I was ruined. I couldn't believe it when they took the machine back. Later, Jason [Corsaro, engineer] put up the analog tapes, and I said, "What are you doing, man? What is this, a new concept you're exploring?" He looked at me, and said, "No, what are you talking about?" I said, "What is all that high-end hiss? Do you have a tape echo going, and we're just hearing the tape?" He said, "Nile, that's the multitrack; that's what it sounds like." I couldn't believe it.
For this new Thompson Twins album, they were working digitally in Paris [with a 3M 32-track Digital Mastering System], and when they came to Skyline they were working on the Sony. We made some rough mixes to analog two-track, and were playing them for the record company. When the record heads left, Alannah [Currie] — the girl in the Thompson Twins [with Tom Bailey and Joe Leeway] — says, "Nile, what were these terrible mixes you were just playing? What was all that 'shhhhhhh' — what was all that hiss?" Which was amazing! Even she realized right away, because the band were so accustomed to hearing playback from the digital multitracks and [Sony] PCM-1610 digital stereo masters.
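The hiss difference Rodgers and Currie describe comes down to dynamic-range arithmetic. A minimal sketch, using the standard textbook approximation for an ideal N-bit linear-PCM quantizer (the 70 dB analog figure is an illustrative ballpark, not a measured spec for any particular machine):

```python
def pcm_dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal linear-PCM quantizer."""
    # Standard approximation: DR ~= 6.02 * N + 1.76 dB for N-bit samples.
    return 6.02 * bits + 1.76

digital_dr = pcm_dynamic_range_db(16)   # ~98 dB for a 16-bit machine
analog_dr = 70.0                        # rough ballpark for an analog multitrack
hiss_gap_db = digital_dr - analog_dr    # the "shhhhhhh" gap, very roughly
```

On those numbers the digital master sits some 25 to 30 dB quieter than its analog counterpart, which is why the tape hiss jumps out the moment the two are compared side by side.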
R-e/p: You don't hear any "funnies" in the top-end, with cymbals, or very high-frequency harmonics? Some people say that the one thing they don't like about the sound of digital is that, because of the low sampling frequency being used, the A-to-D processors have to roll off the HF too steeply — the so-called "20-kHz brickwall."
NR: If that has a real place in the field of music, fine. That's their choice. I'll tell you: in my music, I've never heard anyone complain about that [top-end loss]. I've never seen a kid standing in line at the store saying, "I would've bought the record, but when that cymbal cut off at 20k . . ." [Laughs at the absurd image he has conjured] It's totally ridiculous!
In my world, I want my records to be as technically perfect as possible; I love it when I listen to my CD player and hear my records coming back but, I tell you, I'm not nearly that sophisticated. They sound unbelievable, compared to what I used to listen to. If I had a percussion outfit or something like that — say maybe 70% of the record had cymbals — and digital made it sound lousy, then I might get pissed off. But digital sounds dynamite to me, and I honestly don't hear it.
R-e/p: How do you prepare for a session? Do you spend time in rehearsal with the band or artist, listen to their previous projects, or start with a fresh ear?
NR: I think most of the artists I work with are incredibly surprised at how nonchalant I am about their records. It's not because I don't care: as a matter of fact, it's the exact opposite. It's because I care so much that I want to be just as excited as they are about their music; I want to feel like I'm in the band, and I'm learning the song for the first time. I love to pick it apart and put it together with everybody: I love the creative process. Anybody can rehearse a song and play it back. But the learning of the song and putting it together while working together as a band — as a unit — that's what my strength is. It's the easiest thing in the world to go into rehearsals for an hour, and then play the song back. That's what I love to do — in the studio. Not outside of the studio.
The only time I've ever done pre-production with a group was for one song, "Wild Boys," with Duran Duran. And it's just because I was there. I would've rather they just worked on the song by themselves and, once they got it down and felt comfortable with it, had called me up and said, "Let's go into the studio and record." Usually, I don't even want to hear the demo! I listened to a Thompson Twins demo once, and they kept calling me back. I almost didn't do the project, because I was so nervous that they wanted answers from me, and I wasn't prepared to give them any. I listened to the songs one day at the Power Station, and felt so confident that we'd make a good record in the studio that I didn't have to listen to them again.
R-e/p: What, for you, forms the foundation of a track? Is it the drums and bass, or a more complex combination that you regard as being the "bedrock"?
NR: I guess it's the drums, bass, and guitar — maybe because I'm a guitar player, I always want interesting guitar parts. Sometimes I put the melodic portion of the rhythm section in the bass; I like melodic, moving bass lines — probably from working with Bernard for so long! I like to have very static guitar parts: while the bass has major melodic movement, the guitar is just basically giving a rhythm feel. That's not written in blood though; it could change at any point in time.
R-e/p: Do you tend to keep as many of the basic tracks as you can, because there's often a coherence to them, since the band was playing together?
NR: No, not really. I think it's just something that happens. I'm honestly not conscious of anything that I do when it comes to making a record. I really do look at each song individually. If something is out of the "norm" for me, I listen to it very carefully before I make a final decision.
R-e/p: You may change your mind, and go back and do something else.
NR: Usually I make my decisions right there on the spot; it's very rare that I change something, because I usually like to live with my decisions. If I do something that's not working, then I'll put on a different part to make the previous part work. I think that comes from working in a band. A person may have a part that they just love, but you've somehow got to do something to accent that part, or help it along. You just don't say to the player, "No man, this part sucks! Get out of here!" Instead, you try to do another part that brings it out, and actually makes the part more musical.
R-e/p: Do you make much use of MIDI-controlled synthesizers and electric percussion?
NR: I love MIDI. It cuts down some of the time that I would spend layering parts. The only problem with using MIDI is that sometimes I tend to make the textures too "thick" right away — as opposed to doing it one instrument at a time, and somehow leaving some of the space free. Instead of doubling a part exactly, you only double certain portions of the part to give texture to the composition. We use the [Friend Chip] SRC Box to slave our MIDI sequences to time-code, and it's fantastic.
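The SRC Box's internals aren't documented here, but the core arithmetic any SMPTE-to-MIDI synchronizer of the period had to perform — mapping musical beat positions at a fixed tempo onto time-code addresses — can be sketched as follows (function name and the non-drop 30 fps default are illustrative assumptions):

```python
def beat_to_smpte(beat: float, bpm: float, fps: int = 30) -> str:
    """Map a beat position to a SMPTE time-code string (non-drop-frame)."""
    seconds = beat * 60.0 / bpm              # beats -> elapsed time at fixed tempo
    total_frames = int(round(seconds * fps)) # quantize to the nearest video frame
    ff = total_frames % fps
    s = total_frames // fps
    return f"{s // 3600:02d}:{(s % 3600) // 60:02d}:{s % 60:02d}:{ff:02d}"
```

For example, beat 16 at 120 BPM is eight seconds into the song, i.e. time-code 00:00:08:00; a sync box continuously solves this mapping in the other direction so the sequencer chases wherever the tape is parked.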
R-e/p: You also have a New England Digital Synclavier II digital synthesizer which, I seem to recall, is equipped with the polyphonic sampling option.
NR: Yeah, I just happened to get the first Synclavier with that option; I guess because I had the dough at the time!
I use the Synclavier for a lot of different things. I just wrote a song arrangement for Sheena Easton's new album. I did the whole thing — bass, drums, horn parts, vocal pads, the "oohs and aahs," that sort of thing — on the Synclavier. I just push the button and it starts up. You can change the key, change the tempo, slow it down, speed it up, change the meter, do rubato moves — anything we want. Also, the stereo options inside the Synclavier are far more complex than the console's stereo features; it's like every track has a [dynamic] panning function. In other words, you can move in certain increments across the stereo spread; you just pick up each portion of the sound and move it throughout the pan spectrum. That's pretty unique!
The polyphonic option allows you to sample any sound that you want, and then simultaneously play back as many notes as you have memory for. It gives you more interesting sounds. Let's say you sample one hand clap. Traditionally, when you have hand claps in a drum machine, you have basically that one sound. If I take my fingers and do this [taps several fingers on the keyboard], it sounds like audience applause, because you have the cardinal pitch, and various overtones around it [from the polyphonic playback]. The substance of the sound is changing as you hit several notes of the keyboard, because once you move it around [in pitch] the combined sound gets fatter or higher. It sounds like an audience, rather than multiple, individual handclaps.
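The "audience from one hand clap" effect follows from how samplers of that era repitched a sound: each key plays the same data back at a different rate, so every note has a different length and spectrum. A minimal sketch of that idea, using naive nearest-neighbour resampling (not the Synclavier's actual algorithm):

```python
def semitone_ratio(semitones: float) -> float:
    # Equal-temperament pitch ratio: twelve semitones doubles the playback rate.
    return 2.0 ** (semitones / 12.0)

def repitch(sample: list, semitones: float) -> list:
    """Naive nearest-neighbour resampling: a higher note steps through the
    same data faster, so the clap comes out shorter and brighter."""
    ratio = semitone_ratio(semitones)
    length = int(len(sample) / ratio)
    return [sample[min(int(i * ratio), len(sample) - 1)] for i in range(length)]
```

Playing several adjacent keys at once therefore layers copies of the clap at slightly different pitches and durations, which is exactly the detuned thickening Rodgers describes.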
R-e/p: How did the sessions go with David Bowie?
NR: Unbelievable! We finished the album in 21 days; in actuality, it was 19 days — I didn't even show up for work the last two days, because I knew it was finished. For those extra two days, he just had to play it for the record company, because we did that record without him being signed to a label. The fact that it was a spec deal [released eventually on EMI] helped create that magic; the fact that we were doing it on our own and didn't have anyone to answer to.
David's album is the first time I ever did what they call "head charts." The horn players were all sitting there, and I said: "I want you to do that, exactly." "What," the guys said? So I went over to the chart and wrote down the voicing I just did. I said, "That's what I want you to play." They said, "Sure!" [Sings] Bip! Bip! [Waits] Bip! [Waits] Dey, doot, dey, doot, doot. That was it. And David started singing parts that he had, like [sings] "Let's dance" doot doot "to the sound they're playing on the radio" ooh, bop, ooh, bop, ooh bah bah. That was the first time I had ever done a record like that: so very laid back; incredibly nonchalant. Now... I don't worry about whether or not I have the charts written out the night before, because I know I could write them on the spot with the guys, and it'll be fine.
With Madonna, I wrote everything out, because I really wanted her to feel like we were moving into a different phase in her life. I thought it was very nice for her to go in there and see a full string section, and for her to see charts labeled with her name as well as the instrumentation. To see a score written out, and the string part marked to wait for her; I just know that made her feel good.
Once she said, "Nile, I can't do this." And I said, "Of course you can, Madonna, because we're following you, you're not following us." Those little things are what the artist remembers, and they are the high points of the record, even though you never get to talk about it.
R-e/p: And the Jeff Beck album, Flash? I understand that one difference between Jeff's album and, say, the Madonna sessions, was that most of the songs were worked out in the studio.
NR: Yes, Jeff was completely unprepared; all we had were very fragmented ideas. I had basically written what would have been a whole album, if we had wanted to go that route. My concept behind working with Jeff was for him to re-form the Jeff Beck Group because, as great a guitar player and musician as he is, I think that Jeff is really appreciated more in the commercial arena when he "steps out" from a band.
R-e/p: How did you get that incredibly distant guitar sound on the song "Ambitious?" Was there a live chamber on that track?
NR: No, we used the room. There were two amps set up at Power Station in stereo: a Seymour Duncan amp and a Fender Concert. Both were pointed up to the dome ceiling in Studio C, which is quite high — maybe 18 or 20 feet tall. The mikes are on a pulley system, and we put them all the way up at the top, with the amps on an angle so that the sound is bouncing off the walls — one wall is glass, the other one wood. We were getting a series of subliminal delay times before it gets up to the microphone mounted in the ceiling. It's subtle, and you really only hear it clearly when the guitar is soloed.
And on that track I'm also playing a very static guitar part that was taken direct. So you hear my direct part, Jeff's ambient parts with all the delays, and it makes for an even more distant effect; you feel the part in time, right in front of you, and you hear the delays. But they're not programmed delays; because it's natural the delay is changing all the time.
R-e/p: Your own recent solo release, B-Movie Matinee, was a concept album that tried to capture the nostalgia and mood of those trashy Saturday-morning movies we flocked to see in our youth. Like the Jagger album, you recorded it over several months. Was it very difficult for you to maintain a sense of continuity?
NR: Very difficult. And I'll never do that again: I'm gonna start treating myself the way I treat everyone else. And it's tough, too, because I get overly involved emotionally in every record I do. Like, I'm working this weekend, but I'll let these guys take off. I could use a couple of days off, too — I'm exhausted! Even when I finish here, I have to go home and write some songs.
R-e/p: The Thompson Twins album that you've just finished [early August], Here's To Future Days, was started in Europe on a 3M DMS digital 32-track. Back here you bounced those tracks up to a pair of timecode-synchronized Sony PCM-3324 digital multitracks, and overdubbed solos and vocals. Apart from your own personal-use 3324, I understand you rented a second machine for the dates from Electric Lady Studios, New York. What was it like working double digital 24-track?
NR: It's absolutely wonderful. The quality difference really shows in the vocals; listen to the difference in the vocals between the new Thompson Twins album, and, let's say, the Sister Sledge and my last few records. The difference is amazing. On a couple of these Thompson tracks, we have as many as 130 voices that we bounced down a number of times on the 3324.
R-e/p: Do you have an overall sound of the record in your mind before the production gets underway?
NR: No, not at all. It only develops once I get into the studio, start listening to the songs and working on them.
Usually when I listen to a song for the very first time, I start thinking of parts. And I always say to myself that if I can think of that while listening to the demo tape, then of course I'm going to be able to think of it when you put the guitar in my hands! And that's what I feel very comfortable in doing. I like being a studio musician, I guess because I've done that all my life. So, I don't even think about the song.
I love putting pressure on myself, and consequently on the artist too, because whenever a person has to strain for a note, it translates to the listener as emotion. If I'm struggling to get to a part, when I finally get it right it feels fantastic! The sense of accomplishment; there's nothing that can replace that. In the studio you're dealing with people who have carved out a place in the musical world. How do you keep them inspired? You have to create obstacles every day, because these people are accustomed to getting over obstacles, or overcoming hardship. When a person is a star, they've made themselves stars; they don't just luck up! Usually, these are people who have studied for years, tried and failed, tried and failed, and finally they've made it.
R-e/p: I once had a conversation with Tony Visconti, who said that he'd find it very difficult to work with new bands or soloists. As you may know, Tony plays practically every instrument, and has worked extensively with Thin Lizzy, Bowie, Hazel O'Connor, and dozens of established acts. He felt that he would have a hard time producing somebody who couldn't play an instrument at least as well as he could. You wouldn't have a problem with that?
NR: None whatsoever. I worked with a group called INXS. To most people's minds, the band was relatively new, and still hadn't broken big in America. To me, it was just like being in Chic again. They are great musicians, and I didn't care if somebody had said that the guitar player couldn't play as well as me. Because the record is their energy, and their individuality — that's what makes that band sound great.
I understand Tony's frustration; I know exactly what he means. Just like Madonna said, "Nile, if you can play it, what are you doing explaining it to the guy for? Go in there and play the part." But the other side of the coin is that this is a band. These are kids who have struggled and fought and stuck together through all kinds of adversity to make it. Now that they've made it, you can't take that away from them; you can't just say, "Gimme that damned guitar. Let me show you how to do it."
You have to be sensitive to that side of an artist's creativity, because it's their career and future. A lot of times I'm working with people and the engineer will know that I could play the part better than the band member. But I always think back to the times I was in a band, and Bernard would do that to me, because he can play great guitar. With Chic that was okay, because we were in the band, and I could go over to the drums and say to him, "No, play this part." But if the producer had walked in, grabbed my guitar and played it, I might feel a little bad!
I work with vocalists in the same way. They might say: "What do you mean by that, Nile?" "In the second part," I'll say, "don't go [reads the word 'tonight' in various ways]." Sometimes your explanation is very complicated and confusing; you know what you hear, but the vocalist doesn't know what you're calling the second part. Every musician, every producer has a completely different language in the studio. I remember Bernard once said to Diana Ross that she was "under." "What do you mean by 'under'?" Diana asked. "I think you're under the track — flat." "Flat!" She freaked out. You have to be tactful. Maybe I should always give people a glossary of my terms!
R-e/p: So, to summarize, there are no rules, except that the person should know where you stand, and you should understand where they stand.
NR: Yes. And the more difficult a person is — when I say "difficult" I usually mean "egotistical", or what I prefer to call "confident" — the happier I am, and the better the record comes out. Because I love that interplay between the artist and myself. I really need that energy; the more you have to say, the better a record's going to be.
To many recording engineers, the disk mastering process is often considered a creative art form, with the cutting engineer successfully making subtle but vital enhancements to the final mix. The mark of a good disk mastering engineer is that he or she knows by experience what kind of changes will occur during the transition from tape to the final release medium.
In essence, disk mastering can be looked upon as much as a "philosophy" as a technical skill. The transfer of recorded information to a master lacquer — or to copper, as with Direct Metal Mastering — requires that the mastering engineer be conversant with both the recording and the disk mastering aspects of engineering. Their job forms the very core of the record-manufacturing process.
To understand this somewhat complex process, and to bring up to speed those readers unfamiliar with conventional disk-mastering techniques, let's trace the progress of a master tape from cutting through plating, and finally to record pressing.
The cutting engineer's first job is to choose a lacquer disk suitable for cutting. In a good batch, between 10 to 20% of the blanks may be rejected. Many mastering engineers estimate that up to 75% of all flaws on a record result from the use of a bad or damaged lacquer. This degree of critical requirement is easy to comprehend if one remembers that a lacquer blank comprises a 36-mil (0.036-inch or 0.914mm) aluminum substrate, on which is sprayed a nitro-cellulose (lacquer) coating of about 7 mils (0.007 inches or 0.178mm), containing plasticizers, resins and modifiers used for adhesion, cutting and plating purposes. Since the lacquer coating is akin to a paint layer, it must be treated with extreme care, beginning foremost with the quality of equipment available at respective cutting, plating, and pressing facilities. [For a comprehensive review of disk-mastering philosophies, see the article "Disk Mastering: The Misunderstood Link," by Bernie Grundman, published in the February 1985 issue of R-e/p — Editor.]
Once that master tape — either analog or digital — has been loaded on the cutting room's replay machine, the engineer will listen to playback through the transfer console, setting rough levels, checking for out-of-phase material, and re-equalizing the material if necessary, followed by compression and overall limiting. The signal normally passes through an elliptical equalizer, which filters out-of-phase low-frequency information and converts it to mono. (Otherwise out-of-phase low-end signals result in excessive vertical motion of the stylus cutting a deeper groove, which is often difficult to track on playback, and reduces the overall music capacity of the record.)
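The elliptical equalizer's job — folding only the low-frequency, out-of-phase content to mono while leaving the rest of the stereo image intact — is easiest to see in mid/side terms. A minimal one-pole sketch (real elliptical filters use steeper slopes and a selectable corner frequency; `alpha` here is just an illustrative smoothing constant):

```python
def elliptical_eq(left, right, alpha=0.9):
    """Mono the low end: M/S split, strip the LF portion of the side signal.

    mid  = (L+R)/2 carries the mono content;
    side = (L-R)/2 carries the out-of-phase content that drives
    vertical stylus motion and cuts a deeper groove.
    """
    out_l, out_r = [], []
    s_lp = 0.0  # one-pole low-pass state tracking the LF part of the side signal
    for l, r in zip(left, right):
        mid, side = (l + r) / 2.0, (l - r) / 2.0
        s_lp += (1.0 - alpha) * (side - s_lp)  # estimate low-frequency side content
        side -= s_lp                           # remove it: the low end becomes mono
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

Fed a sustained fully out-of-phase signal, the output decays toward silence (the vertical component is gone); an in-phase signal passes through untouched.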
From the transfer console, the signal is passed to the disk cutting amplifiers and to the cutterhead, and also to the computer that controls operation of the cutting lathe. Normally, the disk-cutting computer adjusts the groove pitch and depth according to the dynamics of the material being mastered. To enable the computer controlling the lathe to anticipate loud transients, or bursts of low-frequency material (both of which require increased groove spacing or depth, or both) the cutting system is fed with a preview signal, which arrives before the program signal. This identical, anticipatory signal can be provided by a dedicated preview head fitted to the replay tape machine; for digital mastering, a digital delay line is used to provide the cutting signal, with a direct feed providing the preview.
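The preview timing itself is simple arithmetic: the pitch/depth computer typically wants roughly one disk revolution of advance warning, and in digital mastering that lead is manufactured by delaying the program feed rather than by a separate tape head. A sketch under those assumptions (the one-revolution figure and the 44.1 kHz rate are illustrative, not a specification of any particular lathe):

```python
SAMPLE_RATE = 44100  # illustrative; digital mastering rates varied

def preview_lead_s(rpm: float = 100.0 / 3.0) -> float:
    """Preview lead time of about one revolution, in seconds (33 1/3 rpm default)."""
    return 60.0 / rpm                          # 1.8 s at 33 1/3 rpm

def program_delay_samples(rpm: float = 100.0 / 3.0) -> int:
    # Digital mastering: delay the *program* feed by the lead time, so the
    # undelayed direct feed reaches the lathe computer early as the preview.
    return round(preview_lead_s(rpm) * SAMPLE_RATE)
```

At 45 rpm the revolution is shorter (about 1.33 s), so the delay line shrinks accordingly.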
Following one or two (or possibly more) reference cuts, the mastering engineer will make any additional EQ and compression changes. (It might be interesting to note that although the DMM process centers on mastering onto copper material, the producer can still cut and play a 12-inch copper disk for a reference master.) Once the session producer has approved the reference lacquer, a final master is cut and sent to the plating facility. Time is often of essence in this critical stage of record production.
"Optimum time to plate a disk after cutting is within one hour," offers Greg Kuhn, manager of quality assurance at Sheffield Matrix Labs, Santa Monica, Calif. "Subtle, high-end losses occur if the lacquer sits around unprocessed, in addition to pre- and post-groove echo. After a week, the losses and lacquer degradation in general would be too great to plate a good master."
Prior to the plating process, the master lacquer is thoroughly cleaned, and the edges and center hole sanded to enhance adhesion of a silver-nitrate solution. Then, after receiving a very thin coating of a highly refined silver, the lacquer is lowered into a pre-plating nickel sulfamate bath through which flows a low electrical current. During this electro-chemical process, a layer of nickel slowly deposits onto the surface of the master, resulting in the initial silver coating taking a "mirror" effect. "After the preplating," Kuhn continues, "the master is washed again, then placed into a high-speed, nickel plating bath until about a pound of nickel is deposited on the disk."
Separation of the deposited metal master from the master lacquer provides a "father," but destroys the original lacquer. After a solvent and cleaning solution are applied to the father, between five and 12 nickel "mothers" can be grown in a similar way to that described above. When separated from its mate, the mother can be played on a phonograph to check for any ticks, clicks, or similar anomalies generated from the silver or nickel-plating processes.
After approval of its quality, the mother is then returned to the nickel bath for stamper plating. To increase the life of a stamper, and to decrease oxidation, some facilities will sometimes plate a thin layer of chrome onto the 10 to 20 stampers made from each mother.
The final step in record manufacturing falls to the record pressing plant. Here, the stampers — one for each side of the single or album — are placed in a mold shaped with the basic record profile, steam-heated vinyl at a temperature of approximately 300 degrees F (149° C) is injected into the mold, and the two halves are pressed together for about 25 to 50 seconds, followed by 10 to 20 seconds of cooling. (During this stage, the labels are incorporated into the album center.) The finished records are then inspected, and sleeved.
Since the processes described above are automated, and susceptible to the introduction of surface noise, pressing techniques vary at each facility. Some typical inherent noise characteristics introduced during the manufacturing process include "rumble" (low frequency noise), "ocean roar" (from lead-in grooves), and "swishes" (again from the lead-in grooves). All these episodic degradations are pretty much common in every plant, and dealt with in equally episodic ways.
An Alternative: DMM Technology
Direct Metal Mastering has existed for just over five years, with 15 facilities around the world currently utilizing DMM-equipped cutting lathes. It has been estimated that, to date, approximately 60 million records have been pressed from DMM-mastered material.
Its advent, however, is new for the American recording industry. Europadisk, an independently-owned cutting facility based in New York City, is the first record processor within the U.S. to fully manufacture, master, plate and press under the DMM license. Jim Shelton, Europadisk co-owner with Christian Lach, says that his facility has always been a record manufacturer, and "our decision to utilize DMM came from a natural progression. The choice of DMM was because the system is better — much better than lacquer technology. Technically, DMM sounds cleaner, with a musically wide dynamic range."
"From our point of view," he continues, "we have now joined the Europeans in disk technology; pretty consistently they made better records than the Americans for a variety of reasons, not the least of which is their constant R&D. DMM results directly from trying to improve the LP record."
A jointly developed product of Telefunken-Decca Schallplatten GmbH (Teldec), West Germany, working in conjunction with Georg Neumann GmbH (which is 25% owned by Teldec), DMM technology is based on the ability of a modified Neumann lathe to cut onto a 4-mil (100-micron) thick coating of copper layered onto a 300-mil stainless steel substrate, rather than a conventional aluminum/lacquer disk. Following the cutting of a DMM copper master, the plating facility can directly plate nickel stampers or fathers, bypassing the expensive, and flaw-provoking, silver-plated father stage, and the ensuing nickel-mother step.
The advantages claimed by DMM can be summarized as follows:
- The elimination of all lacquer-related problems;
- Substantial improvements in high-frequency and transient response;
- 15% more playing time per side;
- No "horns" or plating anomalies;
- The virtual elimination of recuts; and
- Because two plating steps can be eliminated, faster stamper production is possible.
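The last point amounts to a shorter generational chain from cut master to stamper. As a rough sketch (the step names follow the article's terminology; the comparison itself is illustrative, not Teldec's documentation):

```python
# Illustrative comparison of plating chains, per the article's description.
# Conventional lacquer processing passes through a silvered father and a
# nickel mother; DMM can pull stampers directly from the copper master.

LACQUER_CHAIN = ["lacquer master", "silvered father", "nickel mother", "nickel stamper"]
DMM_CHAIN = ["copper master", "nickel stamper"]

def plating_steps(chain):
    """Number of electroforming transfers from cut master to stamper."""
    return len(chain) - 1

print("lacquer transfers:", plating_steps(LACQUER_CHAIN))  # 3
print("DMM transfers:", plating_steps(DMM_CHAIN))          # 1
```

Each eliminated transfer is one fewer opportunity for plating defects, which is the basis of the "faster stamper production" claim.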
DMM utilizes the same basic mastering technologies as conventional cutting facilities. Europadisk uses a Telefunken M15A quarter- and half-inch replay machine with preview heads linked to a Neumann SP-79B console. The latter features options for routing the cutting and preview signals through Neumann U473 combined compressor, expander and limiters, and Neumann OE DUO three-band equalizers, calibrated in 1-dB steps. A pair of JBL 250 TIs serve as monitor loudspeakers. For replaying digitally mastered tapes, the facility uses a Sony PCM-1610, and Studer DAD-16 digital delay to provide the delayed cutting signal.
After the signals pass through the transfer console and outboard gear, they are then fed to the lathe and cutting rack via a VAB-84 vertical amplitude limiter, which replaces the formerly used elliptical EQ. During the cutting stage, the Neumann VMS-82 lathe's computer translates the preview signal into vertical (groove depth) and lateral (groove width) signals for control of pitch and depth. In essence, pitch concerns itself with the "sum" of the left and right channels, while depth looks at the "difference" of those same channels; each wall of the groove holds discrete left- and right-channel information.
By applying the sum/difference equations on the stereo preview signal, the frequency-dependent VAB-84 unit blends left and right information to mono, at a rate of 6 dB per octave below a dynamically determined frequency — usually 150 Hz. By acting dynamically, the VAB-84 only "blends" the signals when necessary to limit vertical stylus motion, thereby preserving the stereo image. (The use of fixed-frequency blending in conventional cutting is a serious drawback of the current, elliptical EQ.) The VAB-84 unit limits vertical excursions in steps of 0.4 mils, from 1.2 to 4.0 mils over the basic groove depth (usually 1.8 mils), with the low-frequency compression ratio fixed at 20:1. In addition, the VAB-84 ensures longer playing time on the disk, and ultimately gives stampers a flatter backside, which significantly reduces rumble on the final pressing.
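The sum/difference blending can be sketched numerically. The following is a simplified, static model of the idea (the real VAB-84 engages dynamically, only when vertical excursion must be limited); the 6 dB/octave slope and 150 Hz default come from the article, while the first-order filter shape and function names are assumptions made here:

```python
import math

def side_attenuation(f_hz, f_blend_hz=150.0):
    """First-order high-pass magnitude applied to the side (difference)
    signal: a 6 dB/octave rolloff below the blend frequency, flat above."""
    ratio = f_hz / f_blend_hz
    return ratio / math.sqrt(1.0 + ratio * ratio)

def blend_to_mono(left, right, f_hz, f_blend_hz=150.0):
    """Blend stereo toward mono at low frequencies via sum/difference.
    The mid (sum) component drives the lateral cut; the side (difference)
    component drives the vertical cut, which is what must be restrained."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * side_attenuation(f_hz, f_blend_hz)
    return mid + side, mid - side  # convert back to left/right
```

An out-of-phase 15 Hz signal (worst case for vertical excursion) is almost fully collapsed to mono, while the same material at 15 kHz passes essentially untouched, preserving the stereo image where the ear judges it.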
The signal then passes to the VC-82 Vertical Tracking Angle Processor, which electronically generates — and compensates for — the cutting-stylus angle. The development of the VC-82 results from the knowledge that the process of cutting a groove into a copper surface causes more stress on the cutterhead and stylus than mastering on lacquer. Therefore, a new approach was sought to provide optimum cutting with reduced Frequency Intermodulation Distortion (FIM).

A complete Direct Metal Mastering System comprises a Neumann VMS-82 cutting lathe, SX-84 cutter head and stylus assembly, and optional video camera and monitor attached to microscope unit.
To understand the situation further, consider that, many years ago, the recording industry established an IEC standard tracking angle of 20 degrees (±5 degrees) off perpendicular for the sake of consistency in consumer cartridge design. Most lacquer mastering lathes cut at this standard 20-degree angle. Subsequently, most consumer cartridges also track at that same angle to decrease FIM within the groove. However, if a cut were made into copper at that same 20-degree angle, because of the vertical component of the force vector, the cutting stylus would literally eject from the groove, accomplishing only a surface scratch. As a result, Teldec adopted a zero-degree stylus cutting angle (90-degree angle to the record plane). This reduces the subsequent cutting force required to a sufficient level so that DMM uses normal cutter-amp power. (In other words, the rotational force of the turntable is directed against the face of the stylus, attempting to break it, rather than push it away from the record at a 20-degree angle.)
But, since a consumer cartridge still tracks at 20 degrees, the grooves need to be cut at that particular angle; otherwise playback will be heavily distorted. To compensate, the VC-82 adds a tracking correction component to the cutting signal. Put another way, DMM electronically "tilts" the signal to a 20-degree angle, so that the resulting groove has a negative, 20-degree component of FIM introduced into the signal. When played back at the normal, 20-degree tracking angle, the two variables cancel each other, creating low FIM for the consumer.
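The geometry behind the zero-degree decision can be illustrated with simple vector arithmetic. This is a deliberately crude model (a single reaction force resolved against a tilted stylus, with invented magnitudes), not Teldec's actual mechanics:

```python
import math

def force_components(cutting_force_n, stylus_angle_deg):
    """Resolve the reaction force on a tilted cutting stylus into a
    component along the record surface and a vertical component that
    tends to eject the stylus from the groove (simplified model)."""
    theta = math.radians(stylus_angle_deg)
    along_surface = cutting_force_n * math.cos(theta)
    ejecting = cutting_force_n * math.sin(theta)
    return along_surface, ejecting

# At the IEC 20-degree lacquer angle, roughly a third of the cutting
# force pushes the stylus out of hard copper; at 0 degrees, none does.
lat20, eject20 = force_components(10.0, 20.0)
lat0, eject0 = force_components(10.0, 0.0)
```

With no ejecting component to fight at zero degrees, the required cutting force stays within normal cutter-amp power, and the 20-degree playback geometry is restored electronically by the VC-82 instead of mechanically.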
Next in the signal path lie the BSB-84 high-frequency limiter — which, when necessary, limits high frequency transients for better trackability of consumer styli — followed by the CAT-84B left- and right-channel cutting amps, capable of delivering 550 watts per channel; the SAB-84B signal processing pre-amp; and the SEL-84B circuit-breaker system. By means of this latter unit, should the maximum temperature of the cutterhead drive coil exceed 200 degrees Celsius, or more than one ampere of current flow through the coils, the cutterhead automatically disconnects from the power amps, avoiding a possible cutterhead burnout.
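The SEL-84B's breaker behavior, as described, reduces to two threshold tests. A minimal sketch, with the 200-degree and one-ampere limits taken from the article and the function name invented here:

```python
def should_disconnect(coil_temp_c, coil_current_a,
                      max_temp_c=200.0, max_current_a=1.0):
    """Return True if the cutterhead should be disconnected from the
    power amps: drive-coil temperature above 200 degrees Celsius, or
    more than one ampere of coil current (limits per the article)."""
    return coil_temp_c > max_temp_c or coil_current_a > max_current_a
```

Either condition alone trips the breaker, so a momentary current surge protects the head even while the coils are still cool.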
For reference monitoring, the rack houses a PUE-84 pick-up cartridge equalizer amplifier, a MWS-84 monitor playback selector, a MSA-84 monitor equalizer, and two LOV-84B monitor power amplifiers, providing 185 watts per channel.
It should be noted that conversions are available for existing SAL-74B Cutter Drive Logic units to enable DMM cutting. In most cases, modifications to the circuit cards are needed, and the former TS-66 Tracing Simulator is replaced with the VC-82.
The VMS-82 Cutting Lathe
Since the Neumann VMS-82 lathe is used to cut into metal, its design necessitated sturdier construction.
First, a new carbon-vane vacuum pump acts much like a rotary engine with an elliptical chamber, and is driven by a two-horsepower motor. The pump extracts up to 15 cubic meters of air per hour, removing the dense copper "chip" during the actual cutting process. Since the weight of the copper chip varies with groove depth and width, the vacuum pressure is regulated by the depth-control circuitry through a series of valves. A second purpose for the vacuum is to hold the 24-ounce copper blank firmly onto the turntable.
With conventional lathes, the forces relating to stylus/disk interactions were thought to be expressed in terms of linear equations. But, due to the additional force needed to cut copper blanks, it was discovered that a quadratic equation — involving the square of the former linear equation — is actually correct. As a result, a new suspension and depth-control computer was developed to deal with the forces generated by the cutterhead as it cuts into the copper surface. Also, because of these forces, the new suspension system uses rigid, friction-free pivot bearings to support the DMM cutterhead, rather than the conventional ball bearings previously used.
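The linear-versus-quadratic distinction can be made concrete with a toy model. The coefficient below is invented purely for illustration; only the scaling behavior reflects the article's point:

```python
def linear_force(depth_um, k=0.5):
    """Lacquer-era assumption: cutting force grows linearly with depth.
    The coefficient k is illustrative, not a measured constant."""
    return k * depth_um

def quadratic_force(depth_um, k=0.5):
    """Copper model per the article: the force involves the square of
    the former linear expression, so doubling depth quadruples force."""
    return (k * depth_um) ** 2
```

Doubling the groove depth doubles the predicted force under the old model but quadruples it under the quadratic one, which is why the copper-cutting suspension and depth-control computer had to be redesigned rather than merely re-tuned.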
**TECHNICAL SPECIFICATIONS FOR A DMM COPPER MASTER**

The following data were obtained with a Shure V15-V pickup. In addition, these values refer to single-channel recording at 280 mm (11 inch) recording diameter. The values for smaller diameters conform to playback theory.

**Frequency Response:** 20 Hz to 20 kHz, ±1 dB, referring to the entire cutting radius up to the innermost allowable modulation diameter. In contrast to lacquer masters, there are no losses applicable up to a wavelength of 10 microns. (See accompanying figure in main article.)

**Frequency Intermodulation:** Less than 0.3%. F₁ = 315 Hz; F₂ = 3.150 kHz; U₁:U₂ = 4:1; recording peak velocity \( V_{\text{total}} = 5 \text{ cm/s} \) referring to RIAA equalization for 1 kHz.

**Total Harmonic Distortion:** Less than 0.5%; DIN 45503; F = 1 kHz; recording peak velocity \( V = 5 \text{ cm/s} \).

**Difference Tone 2nd Order:** D₂ is less than 0.1%; F₁ = 1 kHz; F₂ = 1.2 kHz; recording peak velocity \( V_{\text{total}} = 5 \text{ cm/s} \).

**Signal-to-Noise Ratio:** Greater than 70 dB, typical 75 dB; A-weighted DIN 45633, IEC-179; referenced to a peak velocity of 10 cm/s.

**Spectral Distribution of Noise:** See accompanying figure in main article.

**Pre- and Post-Groove Echo Attenuation:** Greater than 65 dB with 10 microns of land between grooves, 75 dB with 150 microns of land between grooves; F = 1 kHz; recording peak velocity \( V = 8 \text{ cm/s} \) single channel.

**Recording Losses for Small Wavelengths:** See accompanying photograph in main article.

**Transient Response:** As above.

**DMM STANDARD DATA**

**Maximum Recording Level:** Identical to lacquer.

**Maximum Groove Width During Modulation:** 180 microns.

**DMM Cutting Stylus:** Utilization period of 20 LP sides per polishing; life expectancy is a minimum of 50 repolishings.

**Storage Times:** Copper blanks: three months; after mastering: "unlimited."

**Matrixing:** Number of stampers by direct plating: minimum 20; by three-step plating: equivalent to standard processing.

**Groove Integrity:** "Lack of horns is avoided in DMM cutting, therefore maintaining a high degree of groove integrity."

---

Other features of the VMS-82 lathe include a crystal-controlled turntable motor and hydrodynamic oil bearing, which provide, it is claimed, a 12-dB rumble improvement. In addition, a new DC-servo control keeps the turntable at a constant speed despite the increased cutting forces involved, and reduces dynamic flutter to an "unmeasurable" level.
Finally, a real-time digital processor was built into the lathe to control pitch and depth which, in turn, increases the number of grooves per side of an album. By means of a digital delay and a vertically dependent pitch control, the processor stores the preview signal of one-half a turntable revolution prior to cutting. According to magnitude and time relationships, the processor extracts the left-hand signal, the right-hand signal, and the base pitch (the basic groove width set by an engineer prior to cutting). These three signals are summed by the processor, and an intermediate peak pitch control signal (IS) is calculated every 1/16th of a turntable revolution. The processor then calculates the least amount of land space necessary between grooves to ensure a successful cut. (In essence, the processor is "sampling" the signal, and then storing, summing, and calculating the next groove position. In many cases, space between grooves can be reduced to nearly zero.)
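The pitch computation described above (sampling the preview signal, then choosing the least land space that still permits a successful cut) can be sketched as follows. The base pitch, land minimum, and per-segment amplitudes are illustrative numbers, not Neumann's algorithm:

```python
def groove_pitch(prev_rev_amplitudes, next_rev_amplitudes,
                 base_pitch_um=60.0, min_land_um=2.0):
    """For each 1/16-revolution segment, choose the smallest line pitch
    (groove-to-groove spacing, in microns) that keeps at least
    min_land_um of land between the outgoing wall of one revolution and
    the incoming wall of the next. Amplitudes are peak lateral
    excursions in microns; all numbers here are illustrative."""
    pitches = []
    for a_prev, a_next in zip(prev_rev_amplitudes, next_rev_amplitudes):
        needed = a_prev + a_next + min_land_um
        pitches.append(max(base_pitch_um, needed))
    return pitches
```

In quiet passages the required spacing collapses to the engineer's base pitch, while loud adjacent revolutions push it out only as far as needed, which is how playing time per side is gained without sacrificing groove integrity.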
According to Russ Hamm, president of Gotham Audio, Teldec's agent for DMM distribution within the U.S. and Canada, the latest DMM development is a modified depth-of-cut board. Serving as a type of error-correction device, the new addition increases groove depth for wide-excursion lateral information. The device allows a playback stylus to ride in the normal position of the groove, rather than being driven out of the groove by the "pinch effect" during wide excursions, which is particularly problematic with the low-quality tone arms commonly found on cheaper, domestic systems.
Since the depth corrections are only necessary for short, lateral signal bursts, the devices described above allow the system to cut with a smaller basic groove depth, instead of the normal practice of cutting deep, basic grooves for security purposes. Smaller basic groove depth gives a greater dynamic range, and increased playing time.
Gotham Audio says that DMM conversions for a Neumann VMS-80 lathe — which require a new cutterhead suspension, a new turntable equipped with the enhanced vacuum system, and a new turntable drive motor — cost approximately $40,000. A complete cutting system, comprising the VMS-82 lathe, SAL-84 cutting amps, and SX-84 cutterhead, costs approximately $140,000.
The SX-84 Cutterhead
One of the unique aspects of the DMM system design is the SX-84 cutterhead, which uses samarium/cobalt magnets. Because "chatter" (a stick/slip effect) results when a lathe cuts any substance — be it lacquer, copper or even wood — Teldec found that ultrasonic modulations in the 80-kHz range were advantageous when cutting into a copper master, resulting in an extremely smooth groove wall. In turn, these modulations keep the mechanical loading of the cutterhead structure and electrical power requirements within reason.
The stylus mount holds a diamond rather than a sapphire stylus found in most lacquer cutting heads. Also in contrast to conventional cutting styli, the DMM stylus contains no burnishing facets; instead, it is a "feathered-edge" jewel. The burnishing facets, which are required to cut lacquer, result in a "self-erasure" and "blurring" of high frequencies. Moreover, the DMM stylus cuts without the heat needed to lower the surface noise for lacquer cutting.
With conventional lathes, melted lacquer can cling to the heated sapphire stylus, meaning that — under the worst circumstances — a mastering stylus could last only two passes per session. With DMM, the diamond stylus cuts copper with a highly polished surface, guaranteeing a high signal-to-noise ratio, and results in a longer stylus life of at least 20 passes per polishing, and sometimes as high as 100 cuts. (Diamonds can be repolished at least 50 times.)
A direct cause of plating problems and high-frequency loss when mastering into lacquer is "horns," which look like ridges on the groove edges; they are partially caused by a heated stylus, but alleviated to a certain extent by burnishing facets. With a feathered-edge diamond DMM stylus, on the other hand, these horns are said to be eliminated. Plating plants no longer have to remove these artifacts by polishing and buffing the mothers, a process that can be destructive to the overall sound quality.
The combination of two primary features — cutting into metal and the control system of the lathe — allows the grooves to be layered more closely together. With DMM, up to 40 minutes of material can be recorded on one side of an album while still maintaining inner-diameter high-frequency response, and recording levels equal to those of lacquer-based technologies.
One of the main advantages offered by DMM technology is the ability to control the production of the copper blank. It is interesting to note that Melodiya, the Soviet state-owned record company, is reported to have adopted DMM technology to free itself from having to depend on the world's primary supply of lacquer material. Inherently, the electroforming process used to produce the copper blank results in a perfectly homogeneous material, eliminating all impulse-type noises. In addition, there are no cutting "swishes" caused by unmixed lacquer resins, nor lacquer "pinholes" (air bubbles) or embedded dust particles. When Europadisk manufactures its amorphous copper blanks, they are stored at zero degrees F, and have a life expectancy of three months at that temperature. After mastering, the disks have an "unlimited" shelf life.
During preparation of the DMM copper blank, a stainless steel substrate receives a 4-mil coating of pure copper (left). At the matrix-preparation facility, a nickel stamper is "grown" directly by electrolysis from the copper mother.
and the amorphous copper surface returns to its original, crystalline state.
Licensing and Cost Advantages with DMM
To be licensed for DMM technology involves the signing of an agreement with Teldec in order to use the company's technology and trademark, and the payment of a one-time, start-up fee. In the U.S., Europadisk is protected by the Teldec trademark, and is licensed to use the DMM patents for record manufacturing.
In mid-September, a second DMM system was installed at Sterling Sound, New York City. Owner Lee Hulko states that he has purchased a Neumann SAL-84 cutting rack, VMS-82 lathe and SX-84 cutterhead, and that his facility is licensed for disk mastering only; he will obtain copper blanks from Europadisk. In addition, a third DMM system, the first such system on the West Coast, is scheduled for delivery by mid-October to Amigo Studios, North Hollywood, CA. Owner Chet Himes says that his existing Neumann disk-mastering system will be upgraded to handle DMM technology by incorporating enhanced electronics to the existing SAL-74 cutting rack — bringing it up to the SAL-84 circuitry — plus adding a new VMS-82 cutting lathe and SX-84 cutterhead. Amigo will be licensed to master DMM copper masters, with Sheffield Matrix Labs plating the masters for album production.
Other facilities, CBS Records and RCA (both located in New York), are reported to be testing the DMM process. The first major pressing plant to meet the approval of Teldec DMM standards, the Warner Bros. Specialty Records plant in Pennsylvania, is geared up to begin manufacturing DMM disks in the near future.
Europadisk actively employs a combination of DMM licenses, including one for the disk mastering process (a $25,000 start-up fee to Teldec), for the plating process and for the record-pressing process. The fee for the record pressing process is based on the short-term cost savings effected by using DMM technology. Any of the licenses, or combination of the licenses, may be obtained individually from Teldec. In other words, if one facility operates under one license, it does not necessarily need another to operate under the DMM trademark. However, to
place the "DMM" logo on an album jacket, the project must be mastered, plated, and pressed at DMM-licensed facilities.
After mastering a copper blank at a DMM-licensed facility, the client has the option of either finishing his product through final pressing under the DMM license and trademark, or he may go to other, non-DMM licensed facilities to complete the album, thus foregoing the identifying logo.
For this reason, Europadisk has two ways of plating a DMM-mastered disk. The first way is to take the DMM master and keep the processes "in-house" through plating and pressing. However, if the client wants to take his DMM master to another non-DMM licensed facility, he must first let Europadisk create a "sub-father," from which the subsequent nickel mothers may be taken to another facility. The copper master never leaves Europadisk, which is a stipulation within Teldec's contract concerning any DMM-licensed facility.
When speaking of the cost advantages of DMM, three major points can be specified. First, the existing plating and pressing facility will have no major capital investment when plating and pressing DMM products; in essence, the facility may use the same equipment that it currently possesses. Second, DMM technology eliminates the difficult, and expensive, silvering process, which is the major source of quality defects and a primary cause of most recuts. It is estimated that a large plating facility could save up to $100,000 per year by eliminating the silvering stage.
Finally, concerning jazz, classical, and other limited release product of under 50,000 pressings, stampers can be produced directly from DMM masters, resulting in a typical savings of $100 per side.
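As a back-of-the-envelope check on that figure, here is a trivial sketch using the article's $100-per-side saving (the catalogue size is a hypothetical example):

```python
def direct_plating_savings(sides, saving_per_side_usd=100.0):
    """Savings from pulling stampers directly off the DMM copper master,
    at the article's quoted ~$100 per side for limited-release product."""
    return sides * saving_per_side_usd

# A hypothetical 12-title limited-release catalogue, two sides per LP:
total = direct_plating_savings(12 * 2)
```

Even a modest specialty catalogue recovers a meaningful fraction of the per-title mastering cost, which helps explain the appeal for jazz and classical labels.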
Perhaps the cost advantage explains why DMM technology is taking hold so well in Europe. Any technology that limits, or actually eliminates, some of the extraneous problems related to record manufacturing is pursuing a noble goal. To many observers, DMM is the natural "evolutionary" step for black vinyl records, and it looks set to offer a quality edge over lacquer mastering for many years to come.
Acknowledgments
My thanks to Joe Gastwirt, independent cutting engineer, formerly of the JVC Cutting Center, Hollywood; Gary Rice, co-owner of Future Disc Systems, Studio City, Calif.; and Michele Stone of KM Records for providing background information; and Russ Hamm of Gotham Audio for technical data on DMM.
DIRECT TO TWO-TRACK
Capturing the Energy of a Live Session
by Jeffrey Weber
The guitarist was in the process of ripping off a great solo; it was burning. Everyone in the control room silently held their breath. Some wanted to go home after nine hours of guitar solos. Others wanted the soloist to come through with this last four-bar section, so that they could assemble the various other parts of the solo, recorded earlier that day. Continuity was an obvious concern. When the solo part was completed, the rush of expelled air was evident; it was a wrap. The producer noted he was only three hours over budget. It could have been worse. He began to smile, knowing the evening was at its conclusion. The guitarist, after listening to the segmented playbacks with satisfaction, started to tear down. Until the solo was put together during remix, it would be difficult to get the sense of an emotional flow, but the player remained confident that, when pieced together, his solo would be seamless.
The producer checked and noted that he was scheduled to record more overdubs the following day. A wave of fatigue hit him as he gathered his gear and walked down the hall towards the exit sign. The adjacent studio was open, and he noticed some familiar faces. He stuck his head in the control room and saw it packed with musicians listening to a playback. There was an unusual energy coming from the monitors as well as from the band in the room — they were up, smiling and intense. He learned that what he was listening to was a live two-track digital recording, and that the entire album had been completed over the last two days. A second wave of fatigue hit him. As he nodded goodbyes to his friends, he marveled at the punch and vitality of what he heard, and wondered about the process, especially the part about completing an entire album from start to finish in just two days of session time.
Everyone, at one time or another, has experienced the fatigue of a multitrack project. The above, semi-fictional — or should I say semi-factual — scenario represents two distinctly different theories of recording music. Having recorded both in the multitrack medium and live to two-track, I am convinced beyond any doubt that recording direct to two-track is more emotionally satisfying, sonically superior and financially more feasible than a multitrack effort. This article will explore the advantages, applications and common resistance to this, the direct-to-two-track recording philosophy.

October 1985 Re/p 75
The main goal of live, two-track recording is not simply to make another record, but to create an event. I have found that the emotional bond between a performer and the listening audience creates a desire for the listener to buy the artist's recordings. When that listener attends a live concert, his or her aural sense is augmented by sight; it is often this latter factor, as expressed by body motion and facial expression, that supplies much of the emotion to a song's rendition.
The function of live, two-track recording is to achieve the heightened emotional sense of a live concert without the aid of sight. The stores are filled with thousands of records one can listen to, but I wonder how many of these one can feel. When vocalists and instrumentalists play together, the bilateral merging of energies creates a musical force that is almost euphoric. Each player serves to encourage the other and that, in turn, continually fuels the energy of the session. It is that type of contagious, emotional energy that is so often lacking in most recordings. I have noticed that a female singer, for example, will actually change the phrasing of a line, depending upon whether she sings to pre-recorded tracks through headphones, or surrounded by musicians. The emotional intensity of live recording creates a belief; if the listener believes the song and believes the singer, then the listener becomes involved. This involvement translates into sales.
Sonic Quality Advantages
While added emotion is by far the most important attribute of live, two-track recording, the sonic superiority of the medium enhances the emotional dynamics. With multitrack recording, the two-inch reel is mixed down to a quarter- or half-inch master. Initially the tracks are detrimentally affected by being run back and forth across the tape heads while building a tune. To this is added the fact that analog tape does not have the greatest high-frequency memory. Over the course of a week or two, the harmonics and subharmonics of an instrument or vocal track have a tendency to soften and become "cloudy." And it is these harmonics that give character and personality to an instrumental or vocal. Dynamics are also affected in this fashion: peaks become "round," and the "punch" is softened. Not to be ignored is the ever-present tape hiss which, in varying degrees, accompanies all analog tape, and increases during the subsequent mixdown and playback sessions. Although such problems will disappear with the widespread use of digital multitrack recorders, such machines are not yet common enough to make that a major factor.
The same problems that beset a two-inch reel also invade the effectiveness of the quarter- or half-inch master. The use of a digital two-track during the mixdown process alleviates such second-generation problems.
When recording live to two-track, the first noticeable improvement is the absence of the two-inch multitrack reels and their attendant problems. If a digital two-track recorder is used as the primary storage medium, all normal tape problems are eliminated.
Because the console feeds the two-track directly, I can record to a variety of formats simultaneously, using separate pairs of master outputs from the console. I like to give myself a number of options, and record to analog quarter-inch, 15 ips; quarter-inch, 30 ips; half-inch, 30 ips; and, of course, digital — either JVC VP-900 or Sony PCM-1610 processors with companion U-Matic VCRs. Based on the music, we then decide which medium best conveys the emotional impact we're trying to get across.
Believe it or not, it's not always the digital recording we choose in the end. I recently did a live-to-two-track album with Toni Tennille, titled More Than You Know, for which we felt that the analog master beat digital in terms of richness. For that record, a collection of standards from the Thirties and Forties, we wanted a more romantic feeling which, we considered, was best conveyed by analog tape.
Microphone Techniques
On many live-to-two-track projects, I've found digital recording and tube microphones to be an ideal combination. Tube mikes have a wonderful accuracy and fullness and, when combined with the digital medium — which has been accused of being harsh and strident in the top-end — the results can be punchy, tight, accurate and listenable. If you're recording a digital project with a lot of horns, for example, tube mikes can help you compensate for any unnatural stridency. Given the subtle nuances of the music interacting with the microphones and the acoustics of the room, it's nice to be able to choose among several different tape formats after the album has been recorded. This is one area where live-to-track can offer more flexibility than multitrack.
The setup for an actual live-to-digital recording date is similar in most respects to that for a multitrack session.
SESSION EXAMPLE:
SCOTT PAGE AND MEMBERS OF TOTO RECORDING A DIRECT-TO-DIGITAL SESSION AND SIMULTANEOUS VIDEO SHOOT AT GROVER HELSLEY RECORDING.
Jeff Weber’s most recent project involved the production of a live, two-track digital and 46-track analog recording with saxophonist Scott Page, during a simultaneous six-camera video shoot. Page, a veteran of numerous tours with Toto, Duran Duran and Supertramp, has developed what many would consider to be a unique perspective for creating music. Along with his writing and playing partner, Tony McShear, Page felt the need to create an “audio image that would be totally integrated with a visual image.” The pair perceived that in most Music Videos there is often a distinct separation between what takes place visually and what occurs aurally. Page and McShear wanted to create an in-studio environment conducive to inspiration and “spontaneous combustion,” as they describe it, rather than total acoustic isolation and cleanliness.
The goal was to create a musical moment of unparalleled “heat” and to capture it in all formats — audio and video. Having heard Weber’s previous two-track recordings, in addition to having worked with him in the past, Page and McShear believed that the producer would be able to capture and preserve the audio “heat” they desired.
For the audio/video recording, a lighting gantry and stage area were assembled in Studio A at Grover Helsley Recording (the old RCA Complex, Hollywood). The band was set up on various levels of risers in a semi-circle surrounding Page, and a complete stage monitor system supplied by Schubert Systems Group allowed the band to avoid the use of headphones to hear one another. The band consisted of Scott Page, saxophone and recorder; Tony McShear, vocals, guitar, keyboards and percussion; Jeff Porcaro, drums, Simmons Electronic Drums and E-mu Systems Emulator II; Bob Glaub, bass; Lenny Castro, percussion and Simmons; Steve Lukather, guitars; Cal David, vocals, guitar and
At the Neve console during the direct-to-digital session (L to R): producer Jeff Weber, head engineer Franz Pusch, Paul Ray, Shep Lonsdale and Dan Voss, Jnr. (in rear). Pictured right are the featured artist, saxophonist Scott Page, and The Brunettes vocal backing group.
electric sitar; Bill Payne and Chris Boardman, keyboards; The Brunettes, background vocals; and the Heart Attack Horns.
The seven-song recording session was completed in two days. A total of three consoles were necessary in the control room to handle the numerous vocal, instrument and direct-inject inputs: the studio's main Neve Model 8078 board, plus a pair of Schubert Systems Group custom 24-in/eight-group consoles used for subgrouping keyboard and drum inputs before routing to the Neve console. The live stereo mix was recorded on a JVC VP-900 digital processor and U-Matic VCR supplied by CMS Digital of Altadena, CA, while backups were made to a Threshold-modified Nakamichi DMP-1000 EIAJ-Format digital processor and half-inch VCR, an Ampex ATR 102 quarter-inch, 15 ips two-track, and a half-inch VHS HiFi format.
Five engineers were utilized on the session, led by Franz Pusch and including Shep Lonsdale, Paul Ray, Chris McNary and Jeff Cowan. The associate producer for audio was Dan Voss.
The two-track mixes are intended for an eventual Compact Disc release, with the 46-track analog tapes destined for later remix against the edited video. In addition to the normal studio microphones, the mikes attached to each video camera were routed to various multitrack inputs; these will be integrated into the final audio mix during video post-production.
DIGITAL TWO-TRACK
mikes may be all that's necessary to cover the drums.
Capturing room ambience is one of the prime objectives in this type of live recording. I like to rely on two AKG C-12s or C-12As placed way up high in the room. I often combine the output from these mikes with a natural chamber, in order to achieve the ideal sense of room ambience.
A live-to-two-track date generally will take up every available console input. Along with the microphone inputs, some instruments are taken direct, just as they would be on a multitrack session. Each outboard effect will be put on its own channel and bussed to the correct instrument at the correct time. Occasionally, I'll have to use a separate submixer, but I can usually get by with a 36 x 24 board. All mixing is done live while we're recording, most settings and levels having been determined in advance, and we know where there will be solos and other sections that have to be pumped up.
After a live take, we'll all get together in the control room for a playback, and make an evaluation of perspectives and relative levels. The musicians will hear, from our side of the glass, where they made their mistakes. Within a couple of takes — never more than four — we've got it on tape. The performance we capture on two-track is, to my mind at least, far more vital than what one can get from isolated, individual multitrack overdubs. A great performance is our perfection.
Financial Advantages
Apart from the sonic and artistic advantages of live, two-track recording, the dramatic financial advantages are enticements to any record company. As a conservative estimate, an average multitrack project takes 10, five-day weeks to complete. With studio and engineer costs averaging $2,000 per day, it would take $100,000 to complete the project, a price that does not include tape stock, musicians' fees, producer fees, rentals, arrangements, cartage, etc. Two-inch multitrack tape running at 30 ips costs $600 per hour — if you can find it for only $150 per reel!
With two-track digital recording, on the other hand, tape costs are $35 per hour of recorded music. Each record project requires only two days of studio time: we spend about eight hours each day recording four to five tunes. Studio time, tape costs and engineer fees rarely exceed $5,000 for the entire album, and yet nothing is sacrificed in terms of quality. Every aspect of the recording is state-of-the-art.
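The arithmetic behind these figures can be laid out explicitly. The sketch below simply recomputes the comparison from the numbers quoted above; the per-day rate, week count, and reel price are the article's, while the assumption that a $150 reel of two-inch tape at 30 ips runs roughly fifteen minutes (hence four reels per hour) is inferred from the quoted $600-per-hour figure.

```python
# Recompute the article's cost comparison from its own quoted figures.

# Multitrack: 10 five-day weeks at ~$2,000/day for studio and engineer.
multitrack_days = 10 * 5
multitrack_studio = multitrack_days * 2_000      # $100,000

# Two-inch tape at 30 ips: a $150 reel runs about 15 minutes,
# so an hour of recording consumes roughly four reels.
tape_per_hour_analog = 4 * 150                   # $600/hour

# Live to two-track digital: two 8-hour days, ~$35/hour of tape,
# with studio, tape and engineering rarely exceeding $5,000 total.
two_track_total = 5_000

savings = 1 - two_track_total / multitrack_studio
print(f"Multitrack studio time alone: ${multitrack_studio:,}")
print(f"Analog 2-inch tape cost:      ${tape_per_hour_analog}/hour")
print(f"Two-track project total:      ${two_track_total:,}")
print(f"Savings vs. studio time:      {savings:.0%}")
```

On these numbers the two-track project costs about 95% less in studio time alone, before tape stock is even considered.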
Orchestral and Jazz Sessions
Live, two-track recording is an obvious asset to classical music, since orchestras rarely record by sections.
The improvisational nature of jazz also lends itself easily to the medium. What most people do not consider is how the direct-to-two-track medium can serve the raw energy of rock and roll. I have found that live, two-track recordings of rock sessions have a punch and a vibrant raw edge that multitrack simply cannot achieve. People dismiss two-track rock by stating that there is a loss of creativity and experimentation unless multitrack techniques are used. Actually, there is plenty of room to experiment; it just takes place before the recording date rather than during the mix, as is usually the case with multitrack projects.
An incredible amount of advance planning goes into a live-to-two-track record. In order for any music to meet my recording criteria, it must have a wonderful left-to-right stereo spread, a sense of depth — with the instruments placed in front-to-back perspective — and a carefully arranged frequency spectrum. I therefore begin a project by going through the arrangement for each song, and create a schematic diagram showing where each instrument is going to appear in the overall sonic perspective. Such a schematic forms the basis for pre-determining panning, EQ settings, outboard effects, echo, etc.
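The per-song schematic described above can be thought of as a simple table mapping each instrument to a pan position, a front-to-back depth, and the region of the spectrum it should occupy. The sketch below is a hypothetical illustration of that planning step; all instrument names, pan values, and EQ regions are invented for the example, not taken from any actual session plan.

```python
from dataclasses import dataclass

# Hypothetical illustration of the per-song "schematic" described above:
# every instrument is assigned a left-right pan position, a front-to-back
# depth, and a frequency region before the recording date.
@dataclass
class Placement:
    pan: float      # -1.0 = hard left, 0.0 = center, +1.0 = hard right
    depth: str      # "front", "mid", or "back" (level/ambience perspective)
    eq_focus: str   # region of the spectrum the part should own

stage_plan = {
    "lead vocal":  Placement(pan=0.0,  depth="front", eq_focus="presence, 2-5 kHz"),
    "bass":        Placement(pan=0.0,  depth="mid",   eq_focus="fundamentals, 40-200 Hz"),
    "piano":       Placement(pan=-0.4, depth="mid",   eq_focus="midrange, 200 Hz-2 kHz"),
    "ride cymbal": Placement(pan=0.5,  depth="back",  eq_focus="air, 8-15 kHz"),
}

for name, p in stage_plan.items():
    side = "L" if p.pan < 0 else ("R" if p.pan > 0 else "C")
    print(f"{name:12s} {side}{abs(p.pan):.1f}  {p.depth:5s}  {p.eq_focus}")
```

A table like this fixes panning, EQ and effects decisions in advance, which is what allows the final mix to happen live at the console.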
As part of the preparation process, I'll often bring the artists into a small studio, and do some preliminary recording, just to see how things sound. This gives the players a great warm-up for the actual date, and lets us do a lot of experimentation. If we feel a guitar sound is lacking something, we can try any effect or piece of outboard gear to get what we want. Everything is carefully noted, and the effect will be added to the instrument during the final recording, rather than later during the mix. Basically, anything that can be done "post" tracking can also be done "pre" recording.
Any excess parts can be sampled and synchronized with today's modern equipment or, if one needs four keyboard parts, four keyboard players can be utilized — it only serves to heighten the energy of the session.
It used to be that certain effects, such as inverted looping or reverse cymbal effects, just could not be done in the live, two-track format. With sampling keyboards and other pieces of recording technology, virtually any effect for any type of music can be prepared for live-to-two-track sessions.
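At bottom, the reverse-cymbal effect mentioned above is a sampled hit played backwards, which is why a sampling keyboard can prepare it before a live date. The toy sketch below shows the buffer reversal at the heart of the effect; the "samples" are a synthetic amplitude envelope, not real audio, and the function name is invented for the example.

```python
# A reverse-cymbal effect is a sampled hit played backwards. A sampling
# keyboard prepares the reversed buffer ahead of time, so the effect can
# be triggered live during a two-track date.

def reverse_sample(samples):
    """Return the sample buffer reversed, so the decay becomes a swell."""
    return samples[::-1]

# A decaying cymbal hit, crudely modeled as a fading amplitude envelope.
hit = [1.0, 0.8, 0.6, 0.4, 0.2, 0.1, 0.05, 0.0]
swell = reverse_sample(hit)
print(swell)   # amplitude now builds from silence to a peak
```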
Besides studio recording, live concerts can also be recorded and, if there is film or video being shot simultaneously, a direct two-track feed can be sent to the video truck, thereby virtually eliminating post-production audio sweetening. Live TV broadcasts or simulcasts immediately become state-of-the-art in terms of audio, and the necessary audio sweetening costs diminish.
The unnatural breakdown of music into its component parts and the eventual re-assembly of those parts in a layered, artificial manner simply does not have to be done. Working direct-to-stereo means that music can be recorded in 95% less time, for 75% less money, and can easily sound up to twice as vibrant as a multitrack recording. At the end of each two-track recording I notice three natural occurrences: I do not hate the players (and vice versa); I still love the tunes; and we have all experienced a great deal of enjoyment. What more could you ask of a session?
Innovative Sound Companies Assimilate a New Bass-Driver Technology
by David Scheitman
New inventions traditionally seem to bring skeptics right out of the woodwork. In any industry, be it audio or aeronautics, it is usually difficult to change the direction of the mainstream of progress in a given field. Established, conservative factions have been known to denounce new technologies when they first arrive upon the scene. (In the early 1900s, a noted English scholar staked his reputation on a published statement that man would never fly a heavier-than-air machine . . . it was mathematically impossible. Less than a year later, the Wright Brothers flew at Kitty Hawk.)
One of the most challenging problems faced by portable sound-reinforcement design has been the development of low-frequency generators that are compact enough for easy travel, yet offer audio fidelity rivaling traditional, large-bass cabinetry.
Two decades ago, the RCA-type folded-horn "W-bin" represented the commonly accepted compromise between full-length, gigantic bass horns and the practical considerations of mass production. As transducers were developed that could handle greater amounts of power input, direct-radiating bass enclosures came into their own. Today, a survey of typical touring and installed sound systems will show a mixture of different types of low-frequency cabinetry to be in use. One thing seems probable, however: the design parameters of paper-cone, voice-coil transducers have nearly approached their working limits with known materials. Since bass enclosures comprise the greatest amount of mass in any given full-bandwidth sound-reinforcement system, a further reduction in the size of today's standard systems will require a different technology.
New Technologies
Younger, aggressive sound companies are often the first to try out unproven technologies. While a majority of rental firms attempt to duplicate those systems made popular by industry "giants," and model their own efforts after what has gone before, some entrepreneurial companies are willing to take a chance on something totally new, when it appears that an innovative tool, product or process may be a real problem-solver. In the sound-rental business, this usually means regional, "new-generation" companies located in a major market area, and competing for a finite amount of available sound system contracts.
Pace Sound & Lighting (New Orleans) and Pyramid Audio (South Holland, IL) are two such companies. Both firms are directed by young, aggressive managers intent not only upon gaining a lion's share of their respective regional markets, but also upon national touring and installation work. These companies are but two of the growing number of regional rental concerns trying to produce the maximum amount of audio output from the lowest amount of loudspeaker system mass. Smaller systems mean lower labor and transportation costs, easier storage, and a competitive edge.
Pace Sound and Pyramid Audio have no relation to each other in a tangible business sense, yet the two companies have something in common. Each has developed its own proprietary mid/high cabinet that is a modular unit intended to be used with a new bass technology: servo-drive loudspeakers. Before we look at the two firms and their respective needs and applications, a brief look at this new type of loudspeaker device may answer some questions about what it is, how it works, and why these regional sound companies are staking their future growth on it.
Servo-drive Speakers: A Brief History
Before the current, familiar loudspeaker was invented, only acoustic horns existed for sound "reinforcement." Early inventors mated a small compressor to the acoustic horn, and achieved truly high sound pressure levels from a very small device. This approach was left in the dust, however, by the rapid acceptance of early voice-coil loudspeakers. The earliest loudspeakers were surprisingly sensitive (due to their relatively small moving mass), and enclosures were typically quite large. (Have you ever closely studied a 50-year-old sound system behind the movie screen of an abandoned theatre?)
As heavy-duty loudspeakers were developed, a greater amount of moving mass was involved in the speaker cone. The efficiency of the speaker unit dropped, requiring more power amplification. Larger voice coils were developed that increased input power capacity. Bass reflex cabinets came into vogue. Nearly everyone had forgotten about the old, experimental compressor/horn hybrid devices. Nearly everyone except Thomas Danley, an electro-acoustic researcher with Intersonics, Incorporated, an Illinois-based firm that has developed a "sonic levitator" used during onboard Space Shuttle experiments, among other things. It is the company's Sound Physics Division that has developed the SDL® line of loudspeakers currently being marketed.
"Truly deep bass requires either a big enclosure, or a large cone area,"
SERVO-DRIVEN SPEAKERS
explains Danley. "If you increase the moving mass, you are going to move more air. In today's new sound systems, smaller boxes make more sense. What we have done is take the old idea of coupling a 'compressor' to an acoustic horn. We move the speaker cones with servo-drive, brush-commutated low-inertia motors that are mechanically connected to the heavy-duty cones with a driveshaft and a rotary-to-linear motion converter." Figure 1 shows the basic principle of the SDL units.
Intersonics' SDL Series sub-bass systems theoretically have enough power to offer more than twice the amount of cone excursion as traditional, voice-coil loudspeakers. As the cone excursion increases, so does the unit's acoustic output. Company engineers estimate that only one-eighth the enclosure volume is taken up when compared with a traditional bass loudspeaker enclosure for a given frequency range and acoustic output.
"Our TPL-3, with its two, 15-inch speaker drive units, is able to replace as many as four large horn-loaded double-15 bass cabinets," Danley offers. "The potential space and transportation cost savings for the sound-reinforcement industry should be obvious.
"The TPL-3 measures only 45 by 45 by 22½ inches, and will develop up to 116 dB at 22 Hz with a 32-volt RMS input when measured at three feet from the effective source center." A sectional view of the Model TPL-3 is shown in Figure 2.
Pretty serious claims, indeed. And claims that could have serious consequences for traditional bass cabinetry, if the servo-drive technology is sonically acceptable to discriminating sound reinforcement companies. I have had the opportunity to listen to two different sound systems featuring the new technology. While some well-known sound companies have used the units on a trial basis for special subwoofer effects — notably, Stanal Sound with Neil Diamond, and Sound On Stage with Huey Lewis and the News — both Pyramid Audio and Pace Sound rely on the units as the sole reproduction units for all audio frequencies below 80 to 100 Hz.
Case Study #1: Pyramid Audio
"We've put together the most compact, most efficient and most cost-effective audio system on the planet," states Pyramid Audio president Rob Vukelich (Figure 3). "I have shown photographs of the 'Wonderbox' system to people, and they think we are joking. They don't believe we can get the results that we claim from such a small speaker system. All I can say is that they have to come out and listen to it."
Vukelich has been interested in physics since high school, and was awarded a scholarship in that field to attend the University of Illinois at Champaign-Urbana. He turned out to be an "A" student, but was reportedly kicked out of the physics program for being a "rebel." In the decade since 1975, Vukelich has built up Pyramid Audio from a garage operation. The company now boasts two, 16-track recording studios, a 24-track studio, and a retail showroom housed in a 15,000-square-foot building.
"We had been looking at compact cabinet designs for some time," Vukelich explains. "When the Intersonics SDL units came along, I realized that this was it — the low-frequency drive unit upon which we could base a truly revolutionary concert speaker system. The Wonderbox system will offer 33,000 clean watts of power in a package small enough to fit in an 18-foot straight truck."
The "Wonderbox" that Vukelich
refers to measures 30 by 27½ by 22½ inches, and is loaded with Electro-Voice components. Eight pairs of the boxes use only 60 inches of truck space (Figure 4). The box is all horn-loaded and tri-amplified, and houses two, 12-inch speakers, two aluminum-diaphragm compression drivers, and two titanium-diaphragm compression drivers. All components are housed in a single enclosure that features foam-dampened, moulded fiberglass horn flares. Frequency response is said to be 80 Hz to 20 kHz, when used in conjunction with the Power Demand Processor (PDP).
The Power Demand Processor is a continuously-variable, microprocessor-controlled, four-way electronic crossover with protective circuitry and active equalization for the optimization of power-band responses. The unit contains equalization memories for several combinations of Wonderboxes in varying acoustical environments. (Would you like an automatic transmission and cruise control with your new PA system, Sir?)
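The article does not describe the PDP's actual filter topology, but the reason a multi-way electronic crossover can split and recombine the band without a hole at each crossover point is illustrated by the standard Linkwitz-Riley property: adjacent low- and high-pass outputs sum to an allpass response. The sketch below demonstrates this for a single two-way, 4th-order split as a generic example, not as a model of the PDP itself.

```python
import math

def lr4_pair(w):
    """Linkwitz-Riley 4th-order low/high pair (normalized wc = 1 rad/s):
    each section is a squared 2nd-order Butterworth transfer function."""
    s = 1j * w
    butter2 = 1 / (s * s + math.sqrt(2) * s + 1)
    lo = butter2 ** 2               # lowpass: squared Butterworth
    hi = (s * s * butter2) ** 2     # highpass counterpart
    return lo, hi

# The defining LR property: the two outputs sum to an allpass response,
# i.e. |LP + HP| = 1 at every frequency, so the recombined bands show
# no dip or bump at the crossover point.
for w in (0.1, 0.5, 1.0, 2.0, 10.0):
    lo, hi = lr4_pair(w)
    print(f"w={w:5.1f}  |sum|={abs(lo + hi):.6f}")
```

Each printed magnitude is exactly 1, which is why driver bands filtered this way can be re-summed acoustically without frequency-response seams.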
One "stack" of a 30,000-watt system comprises four Intersonics TPL-2 enclosures and four Wonderboxes. The three-way cabinets are built to the same face dimensions as the TPL-2s for an easy truck pack, and are intended to be suspended from simple flying hardware such as the Genie hoists favored by lighting technicians (Figure 5). Compressed air bottles are used to raise and lower the towers, as shown in Figure 6.
Actual-Use Situation: In June of 1985, this writer attended a Concert Showcase of the Pyramid Audio Wonderbox system, which was held in New Orleans, during the NAMM trade show. Vukelich and his crew had set up the system in an 800-seat theater for use by a Canadian rock group that goes by the name of Clearlight. The five-piece ensemble features a tribute to Pink Floyd as part of the program material, complete with high-level synthesizer parts.
Two of the TPL-2 units were set up in the aisle of the theater on each side near the stage, and Genie-lift towers hoisted one pair of the Wonderboxes on each side. "One bass box per side would be more than enough in here," explains Vukelich. "We usually team up one bass box with two, three-way boxes. Here, I wanted to make sure that everyone could hear these things really work!" Figure 7 shows the pair of TPL-2 cabinets.
Rob Vukelich needn't have worried. During the group's performance, the impressive low-frequency tones swelling up throughout the venue actually shook doors and window glass out in the lobby. Dispersion did not seem to be a problem, either. Kick drum, bass guitar and synthesizer lines seemed to be issuing forth from a 40-cabinet, traditional arena system rather than the small boxes visible near the stage.
"We are experimenting with special-purpose subwoofers that are capable of reproducing frequencies to 15 Hz at high pressure levels," notes Tom Danley of Intersonics, who was present at the event. "Remember, this is a brand-new technology. We have only scratched the surface."
Such a system, with its simple two-point hanging package, high-powered Crest amplifiers, and brand-new Neotek consoles, certainly puts Pyramid a step ahead of many of the companies that compete with it for local rentals and regional tours. Even so,
Figure 3: Pyramid Audio president Rob Vukelich (left) and partner Mike Acklin, beside the compact Wonderbox.
Figure 4: Pyramid Audio's Wonderbox measures 30 by 27½ by 22½ inches (H×W×D), and weighs 180 pounds when loaded with two speakers, two two-inch compression drivers, and two one-inch compression drivers.
Vukelich is finding it difficult to break into the national touring market with the system. The reason? Production managers who have not heard the system refuse to believe that such a small package will really live up to its developers' claims.
Case Study #2: Pace Sound
Glenn Himmaugh and Peter Shulmann are partners in Pace Sound & Lighting, based in New Orleans. PS&L is a diverse regional sound systems contractor with experience and capabilities stretching from festival-sound reinforcement to multi-track recording and permanent installations. When the company was awarded the contract in early 1985 for the New Orleans Jazz and Heritage Festival, the partners decided that the time had come to build a new sound system. Not just more of the same traditional cabinets, but something new... smaller, lighter, more powerful, and one that made use of their own ideas regarding enclosure design.
"The bottom-end of a large portable system is always the tough decision," explains Himmaugh. "Your decisions in that frequency band not only affect the sound, but the overall size and portability of the whole system as well. When we first heard the Intersonics servo-drive products, we knew that it could meet our needs. We didn't have the time to experiment a lot on our own, as we had only weeks to put everything together."
Pace purchased an initial four of the TPL-3 enclosures, and coupled them with a proprietary box that was produced in-house and labeled the T-3 (Figure 8).
The T-3 unit comprises a trapezoidal, 14-ply birch enclosure loaded with JBL components. A pair of Model E-140 15-inch loudspeakers are mounted onto a fiberglass horn section; each cabinet also houses a two-inch Model 2445 compression driver loaded on a proprietary, flat-front midrange horn. A Bi-Radial compression tweeter completes the package. Two of the T-3 enclosures rest easily on a single Intersonics TPL-3 unit (Figure 9).
"Besides the big festival, we had a major convention booked," Himmaugh recalls. "We had about a month's worth of time to pull it all together. The Jazz Festival is an important event for everyone in this city, and we wanted to do a really good job."
Pace debuted the new system at the festival, using a total of 10 T-3 boxes and four TPL-3 enclosures for outdoor crowds that approached 50,000 persons. The event drew a total of 250,000 persons over 10 days, with over 500 different musicians performing on eight stages, including such regional favorites as the Neville Brothers, Doc Watson, and Albert King.
"We used to do that event with..."
sound wings that were 12 feet high and 10 feet wide," Himmaugh says. "This year, they were only five feet wide. We put the subwoofers under the center of the stage, and used a lot of power for the system. New Crown Microtech 1000s gave us a good bit of headroom, and the new T-3 cabinet projects really well. The highlight of the whole thing though, was that the servo-drive speakers really worked. One TPL-3 replaces eight, 15-inch speakers in horn-loaded boxes. It is a tremendous savings in time and space, and the sound is noteworthy. You could hear the low end all over the New Orleans Fairgrounds."
Actual-Use Situation: While I did not hear the Pace system at the above-described event, I was able to observe it in action outdoors at a private party that featured two separate stages for approximately 3,000 persons. Each of four sound wings housed a single TPL-3 and two T-3 enclosures. The site was New Orleans' Louis Armstrong Park, and the program material included rock music and a society orchestra. At approximately 100 feet from the stage areas, the sub-bass units were audibly the most obvious part of the speaker system. Extremely low frequencies were reproduced, giving the whole affair a feeling not dissimilar to what an outdoor "disco" might sound like (Figure 10).
Pace's T-3 enclosures represent an interesting and effective first venture into proprietary loudspeaker-system design. The trapezoidally-shaped cabinets make the assembly of compact hanging arrays quite simple. While no dedicated processor has yet been developed for the cabinets, the unique system seems to be giving the firm its own competitive edge in negotiating for sound-system rental contracts.
"There are lots of pre-built, pro-sound enclosures available on the market today," remarks Glenn Himmaugh. "Those are fine for somebody just purchasing a couple. It was definitely worth the time and trouble it took to get our own enclosures happening. It was not such a large capital investment, and we were able to build something that fits our needs exactly. Putting together a large sound-reinforcement system today requires some very careful thought and advance planning. With a large system, any mistakes are multiplied beyond belief."
Conclusions
The two regional sound-rental firms profiled here are both successfully negotiating the Eighties. This decade, however, offers a distinctly different economic climate for growing sound companies than did the last one. Both Pyramid and Pace have looked to a manufacturer to solve what has been perceived by many as one of live sound's biggest challenges with portable systems: how to reproduce full-bandwidth musical material without having to hire an entire trucking fleet to get the loudspeaker system to each temporary site.
It is perhaps interesting to note that the solution to both companies' problem in that respect came not from one of the major loudspeaker manufacturers, but from an innovative, little-known firm with a wealth of technical expertise in a specialized field. While Pace Sound and Pyramid Audio rely on JBL and Electro-Voice, respectively, for loudspeaker components with which to load their own proprietary, custom-designed enclosures, servo-drive units from Intersonics were eagerly acquired as soon as they came to the attention of those two sound companies.
Barely two years ago, at the 74th Convention of the Audio Engineering Society in October, 1983, a technical paper was presented by Thomas J. Danley, Charles A. Rey and Roy R. Whymark of Intersonics, Incorporated, titled "A High Efficiency Servo-Motor Drive Subwoofer."* Since that time, the idea has gone from prototype stage to the realization of a marketable, problem-solving tool that is changing the way some regional sound companies design their systems. This development would appear to represent a good example of how a new technology gains a foothold in the ever-changing business of concert sound.
*Pre-print #2043 (E-8), available from the Audio Engineering Society, 60 East 42nd Street, New York, New York 10165.
With the advancement of keyboard-oriented synthesizers, sequencers and related devices comes a complexity that does not readily lend itself to portability. As emphasized in Part One of this article, published in the August issue of *R-e/p*, long gone are the days when a session player simply walked into the control room with his Sequential Prophet V under his arm, and asked the engineer where he wanted it set up. Today, a well-equipped session player may charge as much as $200 in cartage just to get his or her array of synthesizers and outboard gear from point A to point B. Due in part to this increasing complexity, and today's overall music trend towards synthesizer-based sessions, a number of studios are slanting themselves towards a keyboard-oriented clientele.
But beyond this growing emphasis on keyboards in the studio, there are a number of session players who have found it to their advantage to build their own personal-use multitrack facilities. In other words, instead of bringing the keyboards to the studio, why not bring the studio to the keyboards? Somewhat of a switch has occurred — the session player no longer has to lug equipment from studio to studio, session to session; now the client need only walk into the player's studio with multitrack tape in hand.
In addition to the "standard" 16- or 24-track and mixing console, the heart of such player-owned studios is usually some type of sampling computer system: a Fairlight CMI, Ensoniq Mirage, New England Digital Synclavier II, Kurzweil 250, E-mu Systems Emulator II, etc.; the rest of the facility's synthesizer setup seems to revolve around use of such digital synthesizers.
All in all, the new synthesizer technology has had a tremendous effect on the way songs, jingles, and live performances are conceived and produced. The remainder of this article profiles several players, engineers and producers that are using the new technology extensively to make their musical lifestyles more efficient, productive and creative.
**TERRY FRYER — Colnot-Fryer Music**
Terry Fryer and partner Cliff Colnot, of Colnot-Fryer Music, Inc., are best known for their commercial jingles for such clients as McDonald's, NutraSweet, RCA, Computerland, Pizza Hut, Levi's and Doritos (the latter being the commercial with the catchy hook, "D-D-D Doritos"). The two put together their studio in downtown Chicago during the summer of 1983, with advice from such synthesizer experts as Robert Moog, Tom Rhea, and David Luce.
Having had his own 24-track facility since 1980, it was at the time of starting this new studio that Fryer
purchased his Fairlight CMI. According to the Clio Award-winning producer, his choice of synthesizer system was based on sound quality and flexibility: "The Fairlight was the first high-end system that I heard actual sounds coming out of that would be useful," he recalls.
While the Fairlight CMI may have won out as the studio's sampling device, it has not replaced Fryer's need for other synthesizers, and he says they are all used fairly evenly. "The sound of the [Sequential] Prophet oscillator is one of those sounds that, when you need it, there's no other way of getting it. I think the same applies to Oberheim, Roland, PPG and the [Yamaha] DX-7," he offers.
As a jingle writer and studio owner, Fryer says MIDI and the other new interface technologies have affected him in several ways. "One thing the MIDI has done is to allow a sequencer to become a composing tool," he confides. "Before, a sequencer was one of those devices that made particular types of sounds, played certain parts, or made you write a certain way, etc."
But now, the producer continues, because of MIDI a player can assign one device to make drum sounds, another to do a bass line, another to play a rhythm part, another to play a lead line, and so on. "You can experiment with parts and see how they sound together. And, if they are clashing, the writer can simply change a part; I have no extreme, vested interest in something that has been put down to tape," he points out.
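For modern readers, the part-per-device scheme Fryer describes can be sketched in a few lines of present-day code. The part names and channel assignments below are illustrative assumptions, not his actual setup; only the Note On byte layout (status 0x9n, note, velocity) comes from the MIDI specification.

```python
# Illustrative sketch: one sequencer addressing several MIDI devices by
# channel -- drums on one channel, bass on another, and so on.
# The channel numbers are assumptions, not Fryer's configuration.

def note_on(channel, note, velocity=100):
    """Build a raw 3-byte MIDI Note On message (channels are 0-15 on the wire)."""
    assert 0 <= channel <= 15
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

# Hypothetical part-to-channel map: each device listens on its own channel.
PARTS = {"drums": 9, "bass": 1, "rhythm": 2, "lead": 3}

def play(part, note):
    return note_on(PARTS[part], note)

msg = play("bass", 36)  # status byte 0x91: Note On, zero-based channel 1
```

Because each device simply ignores messages on other channels, the writer can revise one part without touching the rest, which is exactly the flexibility Fryer points to.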
Continuing, Fryer explains that obvious changes in textures via MIDI control are now possible, and that "sounds have evolved quickly" through the ease of programming synthesizers; manufacturers are also progressing with manipulative features that are not directly related to sound itself — factors such as aftertouch, sustain pedals and breath controllers. "But in a negative direction," he adds, "if I MIDI together two or three instruments, it's a major problem to hook up a sustain pedal or breath controller to all of them. If I want to use a breath-controller, I'm limited to using three of these six instruments [in my studio]; if I want to use some kind of sustain pedal, I'm limited to another couple of them."
Fryer recalls another situation that most players and engineers involved with sequencers and drum machines can relate to: "It's one of those minor nuisances that makes you want to kick your sequencer down the stairs," he states calmly, before explaining that when he uses an off-tape sync code to drive his Garfield Electronics Dr. Click synchronization system — which is providing the pulse train for the sequencer or drum machine — he almost inevitably forgets to change the unit's drive clock from internal to external.
"Countless times I sit there and punch-in to record and wait...[the unit] is out of sync with what's going on, and you're trying to figure it out in your head. We built a box that takes either the click generated [by the Dr. Click] or the off-tape click, depending on whether or not the tape machine is in record mode. It's real slick; you don't have to worry anymore!"
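The logic of that home-built box reduces to a single selection rule. The sketch below is one plausible reading of its behavior; the function name, and the assumption that the internally generated click is used while the machine is striping in record mode and the off-tape click otherwise, are ours, not taken from Fryer's schematic.

```python
# A minimal sketch of the click-selecting box Fryer describes: the sequencer
# always receives whichever pulse train is appropriate, so nobody has to
# remember to flip an internal/external clock switch.
# (Signal names and the record-mode mapping are assumptions.)

def select_click(tape_in_record, internal_click, off_tape_click):
    """When the machine is laying the sync stripe (record mode), follow the
    Dr. Click's own pulse; otherwise chase the click already on tape."""
    return internal_click if tape_in_record else off_tape_click
```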
Concluding, Fryer says the overall effect of the new technology has been to increase his productivity. "With instruments that have internal memory capabilities, you can actually have an on-going project — you just store your sounds on a disk, and can build up the sequence. You come back to it a couple of days later, and it's all still there — you don't have to start over again."
**ED FREEMAN — Quantum Leap**
The recording studio conventionally has been thought of as a place to take a piece of music that has been composed previously, and to then capture its performance on tape. Yet, as time goes by, recording and keyboard equipment are becoming more and more of a compositional tool — as opposed to strictly a performance tool — a development that has allowed set limits of creativity to be approached and challenged. Along with executive producer and Record Plant owner Chris Stone, Ed Freeman and a host of synthesizer players/programmers — collectively known as Quantum Leap — recently challenged those boundaries of creativity through an experimental album project called The War on Attitude.
Freeman, a veteran producer, and currently operations manager at Motown's Hitsville Studios, Hollywood, explains his original concept for the project: "The music was going to be largely improvisational, and it was going to be improvising with computers [sequencers]. But it evolved into something that was much more composed, much more static than had originally been planned." The finished album displays the current capabilities of digital recording equipment, as well as keyboard and synthesizer technology — in addition to stretching creatively from a compositional standpoint.
At times, as many as 28 synthesizers, nine programmers and six recording engineers were simultaneously employed to create the sounds of Quantum Leap. Some of the equipment used included: Sony PCM-3324 digital multitracks, PCM-1610 stereo digital processors, and DAE-1100 digital audio editors; Oberheim DSX sequencers, OB-Xas, Xpanders, and more synthesizers from Yamaha, Sequential, Korg, Roland and Kurzweil. In some passages, Freeman points out, a number of these synthesizers were combined to formulate sounds with the richness of a Stradivarius — yet with the bizarre quality of sounds never heard before.
"Most of the music was written on a DSX sequencer, which I felt was the most advanced sequencer available at the time," says Freeman of the recording sessions that started in October last year, and finished in March.
"But last year was 'pre-history' when you're talking about technology," he adds. In most instances, the playback/performance configuration of the sequences and synthesizers was set up to include one or more DSX, whose signals were sent into one or more Oberheim Xpander, and then out to any number of other synthesizers being used.
The reason for this configuration was that DSX sequencers were not MIDI controllable at the time, but employed control-voltage gates for triggering. "The Xpanders convert CV-gate information into MIDI. So, in every case, we were using the Xpanders as CV-to-MIDI converters — and beyond the Xpanders was a variety of synthesizers, depending on what sound we wanted to get at the time."
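For readers curious about what that CV-to-MIDI translation involves, here is a rough modern sketch. It assumes the common 1 V/octave pitch standard and an arbitrary 0 V reference note; the Xpander's actual calibration and firmware are not documented here.

```python
# A sketch of the CV/gate-to-MIDI translation role the Xpanders played,
# assuming the 1 V/octave pitch convention with 0 V mapped to MIDI note 36.
# (Both the base note and the gate polarity are illustrative assumptions.)

BASE_NOTE = 36  # MIDI note assigned to 0 V (assumption)

def cv_to_note(volts):
    """1 V per octave: each volt adds 12 semitones to the base note."""
    return BASE_NOTE + round(volts * 12)

def gate_event(gate_high, volts, channel=0):
    """Rising gate -> Note On, falling gate -> Note Off, as raw MIDI bytes."""
    status = (0x90 if gate_high else 0x80) | channel
    return bytes([status, cv_to_note(volts), 100 if gate_high else 0])

gate_event(True, 1.0)  # Note On, one octave above the base note
```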
"I would never do it that way again," Freeman continues. "There are some theoretical advantages [to the use of CV gates] that may transfer into technical advantages; I think CV gates are less prone to some problems, especially [processor time] delay, but the advantages of MIDI are a thousand times over CV gates."
As it turned out, MIDI-generated processor delay generally was not a problem throughout the project. In most cases, Freeman and crew did not chain synthesizers through MIDI Out, Thru, etc.; instead, MIDI splitter boxes were utilized to string synthesizers together.
In addition to the sound-layering possibilities available through MIDI, Quantum Leap set out to exploit the track bouncing and overdubbing capabilities of digital recording to further create and manipulate individual sounds. "We were looking at what was possible on a digital machine that [could not be achieved] on analog; one of the first things is that you can do a lot more generations on digital than you can on analog. We managed to do some very, very complex sounds, which required a good deal of bouncing and sound shifting. Some songs have as many as 60 tracks on them."
The finished album is difficult to listen to all the way through, Freeman admits, and he hesitates to even call it "music," simply because there aren't really any verses, choruses and melodies per se. "I was doing a lot of things at the same time, not only experimenting with equipment, but with harmonic structures, composition techniques, and so on. We weren't told what not to do, and we all came out the wiser for it," he concludes.
**PAUL FOX — Session Player**
Musician Paul Fox and manager Rick Stevens went into the studio business together a little more than a year ago. While Stevens owns most of the keyboard and recording equipment, the studio is housed at Stevens' Suma Entertainment Group headquarters, West Hollywood. Fox has recently been doing session work for Barry Manilow, Jeffrey Osborne, and Sarah Vaughan, and is credited on a soon-to-be-released Pointer Sisters album.
"Basically we set up the studio as an all-synthesizer writing studio, and then started getting into making it as state-of-the-art as we could," says Fox of the current facility. "We sought out the cleanest board for recording synthesizers that we could — for the amount of money we had — and to purchase the outboard gear that would serve all our requirements. Basically, we designed it around the synthesizers' needs."
The equipment that Fox and Stevens settled on was an AMEK TAC Matchless board and an MCI JH-114 transformerless 24-track. "The board is very clean for synthesizers," Fox confides. "There's a choice of a few boards in that price range, but this one is pretty much the cleanest one that we found."
Fox also points to a modification...
Close-up details of what Paul Fox describes as forming the "nerve center" of his keyboard-orientated studio. Pictured left is a J.L. Cooper MPB-1 MIDI Patch Bay, mounted on a Yamaha TX-816 rack, while right is a Yamaha PC and a Garfield Doctor Click sync system.
that has been made to the board for keyboard-oriented work: "When you start using a lot of different synthesizers you start eating up a lot of tracks. We have a modification to allow you to monitor from the monitor section what's already on tape, while still bringing up more synthesizers through the input faders — in this way, you don't lose the information on tape while doing more overdubs."
Other outboard gear includes an AMS RMX-16 digital reverb, BEL digital delay, two Roland SDE-3000 processors, dbx Model 160 compressor, Eventide Harmonizer, and several delay units. For monitors, Fox and Stevens are sticking with Yamaha NS-10Ms, which they feel provide a good standard reference for almost everybody in Los Angeles.
Instead of having his more than 17 keyboards and sequencers set up close to the patch bay, or relying on the use of direct boxes, Fox has provided patch points throughout the room housing the keyboards, console and tape machine. "Everything is line-in," he explains. "With 15 terminals on each box that can be used as inputs or outputs, there are ways of sending clicks, codes, delays, etc., around the room. If I want to send a click, for instance to a Dr. Click, and the source is on the other side of the room, you can send it down the patch bay and out one of the boxes."
A device Fox considers to be a real "lifesaver" is the J.L. Cooper MIDI Patch Bay: "I'm using the non-programmable one, because for now I just need to be able to switch things on and off quickly." With the MIDI Patch Bay, Fox can use eight different keyboards, or sequencers, as masters to control/program as many as 10 other MIDI-capable devices. "I have modular synthesizers, like the Oberheim Xpander and the Yamaha TX-816, and they're located at different places in the room. I have to be able to jump on a keyboard and be able to program. I can go from the Emulator II, Jupiter 6 or DX-7 as the master keyboard, or different sequencers; and it saves having to be switching from MIDI Ins and Outs all over the place. Basically, you hook up your MIDI and just roll."
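Conceptually, such a patch bay is a masters-by-destinations switch matrix. The toy model below illustrates the routing idea only; the class and the device lists are illustrative examples (not Fox's exact rig), and the real MPB-1 is a hardware switcher, not software.

```python
# An illustrative model of a switching (non-programmable) MIDI patch bay:
# any one master can be routed to any subset of destinations, and each
# destination listens to exactly one master at a time.

class MidiPatchBay:
    def __init__(self, masters, destinations):
        self.masters = set(masters)
        self.destinations = set(destinations)
        self.routes = {}  # destination -> currently selected master

    def route(self, master, destination):
        assert master in self.masters and destination in self.destinations
        self.routes[destination] = master

    def targets_of(self, master):
        return sorted(d for d, m in self.routes.items() if m == master)

bay = MidiPatchBay(masters=["DX-7", "Emulator II", "Jupiter 6"],
                   destinations=["Xpander", "TX-816"])
bay.route("DX-7", "Xpander")
bay.route("DX-7", "TX-816")
bay.targets_of("DX-7")  # both modules now follow the DX-7
```

Re-routing a destination to a different master is a single switch change, which is the "switch things on and off quickly" convenience Fox describes.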
**CRAIG HARRIS — Sound Composer**
Craig Harris is a prime example of today's session-player-turned-studio-owner. Specializing in voice effects, his movie credits include War Games, Footloose, The Last Starfighter and Twilight Zone. Harris' album work has included playing on sessions for Michael Jackson, Donna Summer, Tom Petty and the Heartbreakers, and Manhattan Transfer, although probably his most universally recognizable track has been the vocal effects on the bridge of Michael Sembello's "Maniac," from the movie Flashdance.
About 3½ years ago Harris purchased a New England Digital Synclavier with the concept of becoming a Synclavier session player, "running around town to different studios," as he puts it. Although he had a four-track at that time for pre-production purposes, it soon became apparent that an eight-track machine would enable him to actually produce commercials which, at the time, made up a large percentage of his work. "Within two months I was never going out-of-house," says Harris of the decision to move up to eight-track.
"At that point, it became very clear to me that if I had a 24-track machine, I would never have to charge cartage or costs like that, and have my equipment tied up in transit eight hours a day." Since the purchase of an Ampex MM-1200 24-track, Harris says, "in three years of recording commercials and other projects, I've only gone out once, to record a big-band sound. Other than that session I've pretty much done it all here."
Harris' studio, located in his San Fernando Valley home, houses a Soundcraft board, Ampex 24-track, and 32-voice Synclavier system, which he recently updated with Polyphonic Sampling (16 voices with a total sample time of 250 seconds), eight multichannel stereo outputs, and SMPTE interface card. (The latter allows the in-house Cipher Digital/BTX Softouch System to control the Synclavier's 32-track sequencer and Linn 9000, as well as the Sony VCR, MM-1200 24-track, and PCM-F1 digital system.)
One Synclavier capability that Harris uses extensively is that of resynthesis. Using as an example the tune, "Rock House," from Manhattan Transfer's current album, he explains the process. Having sampled one of the singer's voices into the Synclavier, "you take the voice and carefully mark where the waveform is changing throughout a given word, and the computer [the Synclavier's Digital Equipment Corporation VT100 video terminal and controller system] will then analyze each one of the points and load them into the synthesizer. I then use the FM synthesizers in the Synclavier to re-synthesize the sound. It works amazingly well [when you need] to take voices out of their range, elongate parts of the words, twist them around and get strange vibratos, etc. Plus, it's an effect in itself, because the result doesn't sound like the original sample."
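In modern terms, the analysis half of that process amounts to measuring the harmonic amplitudes of each marked frame, and the resynthesis half to rebuilding the sound from oscillators driven by those measurements. The stdlib-only sketch below shows only this principle; the Synclavier's actual partial-timbre analysis and FM resynthesis were far more elaborate.

```python
# A greatly simplified sketch of analysis/resynthesis: measure the strength
# of each harmonic in one period of a sampled frame, then rebuild the sound
# additively at a (possibly shifted) fundamental.
import math

def analyze_frame(frame, n_harmonics):
    """Estimate the amplitude of harmonics 1..n over one period of the sample."""
    n = len(frame)
    amps = []
    for h in range(1, n_harmonics + 1):
        re = sum(frame[i] * math.cos(2 * math.pi * h * i / n) for i in range(n))
        im = sum(frame[i] * math.sin(2 * math.pi * h * i / n) for i in range(n))
        amps.append(2 * math.hypot(re, im) / n)
    return amps

def resynthesize(amps, f0, sample_rate, n_samples):
    """Rebuild the frame from sine oscillators at multiples of f0."""
    return [sum(a * math.sin(2 * math.pi * (h + 1) * f0 * t / sample_rate)
                for h, a in enumerate(amps))
            for t in range(n_samples)]

# One period of a pure tone analyzes to a single strong first harmonic:
frame = [math.sin(2 * math.pi * i / 64) for i in range(64)]
amps = analyze_frame(frame, 4)
```

Shifting `f0` before resynthesis is what lets a voice be taken out of its natural range without simply speeding up the sample, which is the effect Harris describes.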
The concept also works very well with sounds that originate from synthesizers, Harris continues. "The other thing that I do a lot of is [sam-" (continued on page 104)
October 1985 □ R-e/p 101
Do you hear voices?
If you spend a lot of time thinking about music, you probably have all sorts of sounds running around in your head.
However. What you hear in your mind you just don’t hear in your ears. Because your instrument just can’t produce it.
Unless, of course, you’re playing the Kurzweil 250™. An instrument engineered to produce any sound anyone can imagine. In all its glory.
To get great sound, you have to start with great sound.
Our engineers use Contoured Sound Modeling™ to capture the sounds of real instruments. Contoured Sound Modeling has given the world the famous Kurzweil Grand Piano, Strings and Human Choir.
But if our engineers were merely good engineers, those voices would sound, well, engineered. Why don’t they? Because our engineers are even better musicians.
And as such, they not only use a careful blending of instrument, performer, acoustics, recording techniques and signal processing. They also depend on a tool older than music itself. The human ear.
Total control at your fingertips.
If you can press a button, you can call up any of the Kurzweil 250’s 45 resident voices and 124 keyboard setups. Instantly. Because they’re all in the Kurzweil 250’s massive on-board memory. That way, you won’t ever have to mess around with disks.
Then play any voice or setup on the Kurzweil 250’s velocity sensitive, 88-key wooden keyboard. Which is weighted to feel like a grand piano. So it can be as expressive as a grand piano.
And just as important, the
Kurzweil 250 has a built-in 7900-note 12-track sequencer that lets you compose your own complex arrangements. Edit any note, edit any track, insert notes or change tempos and volumes. And once you're satisfied with what you've got, there's quantization to clean up slight timing errors.
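The quantization feature mentioned above is, at its core, a snap-to-grid on note start times. A minimal sketch (the millisecond grid and note list are illustrative, not the Kurzweil 250's internal representation):

```python
def quantize(note_times_ms, grid_ms):
    """Snap each note-on time to the nearest grid point;
    e.g. grid_ms=125 is a 16th-note grid at 120 BPM."""
    return [round(t / grid_ms) * grid_ms for t in note_times_ms]

played = [3, 130, 247, 381]      # note-on times, slightly early or late
tight = quantize(played, 125)    # -> [0, 125, 250, 375]
```

Each note lands on the nearest 16th, which is exactly the "clean up slight timing errors" behavior described.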
**You get more out of MIDI here.**
MIDI is available on a lot of keyboards. But no keyboard instrument lets you do with MIDI what you can do with the Kurzweil 250.
As either master controller or slave, it gives you access to virtually all of the Kurzweil 250 control functions.
**If you can sample it, you can play it.**
Say "hello" to the Kurzweil 250.
Next thing you know, you'll be hearing "hello" in 4-part harmony. Or 6-part harmony, or 12. How? With our Sound Modeling Program™.
You just plug in a mic (or patch in a line), and off you go. Our Sound Modeling Program will then let you sample your own sounds.
Afterward, do virtually anything with any sound. Compound it, trim it, loop it or play it backwards. Play it like you would any resident voice. Even layer it onto any resident voice. Imagine Kurzweil Strings that turn into the wind. Or thunder. Or vice versa. Anything you can hear in this universe, you can hear on the Kurzweil 250.
**Stay tuned.**
Today, the Kurzweil 250 is the single best, most expressive instrument you can buy. But it's not alone.
We've introduced the Kurzweil 250 Expander, a portable expander package that you can MIDI to any MIDI keyboard.
We've added the Kurzweil Sound Library, a growing collection of sounds on Macintosh™ disks.
And we will be adding even more in the future.
But whatever we bring to the Kurzweil 250 will always be for the same reason. So the voices you're hearing, everyone can hear.
To find out more about the Kurzweil 250, visit your local Kurzweil retailer, or call us at 1-800-447-2245. Or send $6.95 (Massachusetts residents add 35¢ tax) for a copy of our 50-minute demo cassette, to Kurzweil Demo Cassette, P.O. Box 863, Providence, RI 02901.
Kurzweil Music Systems, Inc., 411 Waverley Oaks Road, Waltham, MA 02154-8464.
Macintosh is a licensed trademark of Macintosh Laboratories, Inc.
DIGITAL SYNTHESIZERS
ple] the raw sound of a squarewave or sawtoothwave — take your favorite synthesizer — and re-synthesize those. That's basically what resynthesis is set up for doing: you just take the output and feed it through a filter and do the filtering yourself."
Harris tends to use the Synclavier to perform this task, or feeds the raw wave sample into his ARP 2600 synthesizer. He feels that such a capability has helped him to side-step problems of MIDI communication between various synthesizers of different manufacture; the Synclavier's on-board sequencer is employed when sequencing is needed.
In trying to accommodate the "I-don't-know-what-I-want-but-I'll-know-it-when-I-hear-it" attitude that Harris generally gets from producers while working on a movie soundtrack effect, the composer has designed what he calls a "master-monster" patch bay for voice processing. To create a typical voice effect, he may use two or more pitch-shifters, Vocoders, pitch- and envelope-followers, noise gates and several synthesizers. All of these devices are in one master patch that involves hundreds of individual patch cords. "Basically what I have is a 'portable' patch bay. I can change whether the EQ is before or after the Harmonizer, for example, or the Harmonizer is connected before the voice goes into the Vocoder, and afterwards, whether I want to harmonize the other effects, etc.
"Unfortunately patching 15 to 20 boxes together to create one effect is not applicable to most people's situations. The funny thing is that, after patching in all those effects and boxes, they [the producers] want it to sound as 'natural' and 'organic' as possible!"
This comment leads Harris away from the technicalities of computers and other devices, and into the plain world of communicating musical ideas to non-synthesizer programmers. "The big problem with film people is they have no language to communicate with as far as sound is concerned. So you kind of have to set up the ground rules — a language with which they can talk to you. That's something that's important to anybody, whatever you're doing. You have to meet them with what they're comfortable with. You have to come up with definite things they can identify and understand," he concludes.
SHEP LONSDALE — Toto's Engineer
With the price of sampling systems taking somewhat of a downward turn, the technology is available not only to those studio musicians able to afford the price tag of high-end systems, but also to Top-40 musicians playing local nightclubs. In addition, major artists are beginning to use sampling keyboards on tour, in lieu of playing live against pre-recorded tapes played back through the house and monitor PA systems. Toto recently completed a tour supporting its new album, Isolation, during which several E-mu Systems Emulator II synthesizers played horn parts (on the tune "Rosanna" in particular), synthesizer parts, background vocals, and percussive effects.
According to the group's in-house engineer, Shep Lonsdale, Toto considered sampling the only sensible way to perform live what they had done on the album — "without bringing along a million extra players." Lonsdale recently engineered the soundtrack for the movie Dune, and had worked with Traffic, the J. Geils Band and the Doobie Brothers before taking a "staff" position with Toto a few years back.
"The same day the Doobies broke up," he recalls, "Toto asked me, 'Why don't you quit the Doobies and come work for us?'"
Lonsdale points out that although many groups have been taking along a two- or four-track tape machine on tour to serve as an additional backup "musician," Toto did not feel totally comfortable with the idea. "What's the point of the band being there if you've got all this material going on tape? We figured it would be much better to put down the sampled parts, and at least play them ourselves." Previously the group had never used tapes but, as Lonsdale suggests, it had also never before tried to pull off live what it planned to do on this latest tour. [For a full rundown of the complex keyboard monitoring and sound-system design for Toto's recent world tour, see David Scheirman's article in the August issue of R-e/p — Editor].
"The whole organization is very aware of what we are doing on the road, because it's the same people handling live sound as in the studio. So, we even went to the extreme of doing special re-mixes right after mixing a particular tune for the actual album. At the time we didn't know exactly how we were going to use it, but we had a whole backup reference of mixes that were to be experimented with in a live situation."
In most instances, an entire hook, horn line, etc., was keyed off one note. In other words, instead of sampling the sound of a horn section into the Emulator and then playing a particular lick, the entire lick was sampled into the keyboard. And rather than taking the parts from the studio 24-track masters, or the special mixes mentioned above, and directly sampling them into the Emulator II, a Sony PCM-F1 digital processor was employed as a go-between. "I took everything to an F1 and did things like 'one-pass stereo'; 'one-pass mono'; 'one-pass with effects'; 'one-pass without effects.' We were then able to experiment with what we had on the F1 by going into the Emulator to see how much information we actually needed to make it work.
"It gave us a lot of flexibility, because we ended up with a log of all the parts that would be difficult to reproduce live. And, because we took the F1 with us, anytime during the tour we could update and amend any of the samples we had; we just whipped out the F1 and redid the sample." There was no loss of sonic quality either, Lonsdale adds: "Digital-to-digital is just fine; the Emulator loved it."
The main advantages of using sampling in a concert setting, Lonsdale
During Toto's recent World Tour, MIDI-equipped keyboards were controlled from an IBM Personal Computer (shown left at the stage-right keyboards mix position) and Compaq PC (right) running custom software developed by Ralph Dyck.
adds, is that it allows greater freedom than playing against pre-recorded tapes provides. "When you use tapes, you're stuck to the format of the tune. You can't stretch it out; you can't let the tune 'breathe'; you've got to do it the same way every night, and it gets real dull; that's not what live music is all about."
According to Lonsdale, Jeff Porcaro's drum timing was meticulous, with the result that there was no problem with tempos varying from one night to the next. "Could your average listener tell that they were parts that were sampled and not actually physically being played?" Lonsdale questions himself. "No, they couldn't tell at all."
MICHAEL BODDICKER — Keyboardist
It is safe to say that keyboardist-synthesist Michael Boddicker is one of the busiest session players currently working on the West Coast. His album, jingle and soundtrack credits include musical work with Lionel Richie, Quincy Jones, Mattel Toys, Back To The Future and Real Genius.
The following comments detail his particular uses, likes and dislikes of the various keyboards that comprise his multi-instrument session setup:
- **Yamaha DX7:** "I have two DX7s that have been modified to provide eight banks of memory each. They also have been modified to raise the output gain so that they are a little hotter than line-level, because certain models don't have the internal output-level pot turned up enough.
"They have great percussive color and great digital sounds. When I need richness in a sound I don't use them; however, when I need real crisp, biting sounds, I use the DX7. Essentially what I use them for is to duplicate 'colors' that I might normally get on a PPG. I use one of my DX7s as a touch-sensitive master keyboard."
- **Roland Jupiter-6:** "Most of the time I use the Jupiter-6 as the master keyboard, even though it's a non-touch-sensitive keyboard, and I get the same velocity response all the time. It's also at the right height in my rack!
"The Jupiter-6 is also one of the instruments I find myself using more and more, particularly on orchestral colors — it works great for those sorts of sounds. It also has more filter effects than just about any other synthesizer I know of except the Oberheim Matrix 12, which I'm in the process of getting."
- **Roland Jupiter-8:** "I can use the -8 as kind of an auxiliary analog synth. It has a richer tone color than the Jupiter-6, but one of the major things that I use it for is triggering; I have had the envelope generators modified so I can fire them with an external signal. I have also had the memory modified to provide three times the capacity. It also provides arpeggiator output through MIDI, which is something I use in the studio."
- **E-mu Systems Emulator I:** "I still use the Emulator quite a bit because I have such an extensive library of sampled sounds. Even though the sound bandwidth isn't as great as I would like it to be, it is a very useful instrument, and I don't foresee taking it out of the rack. I tried to transfer a lot of the sounds to the PPG and to the Emulator II, and they take on a different color."
- **E-mu Systems Emulator II:** "Of all the instruments I have right now, the Emulator II is the most exciting, because of the new software developments and whatnot. I've been able to create a really extensive — and exclusive — library of sounds that gives me such a scope. Besides natural instruments, I have samples of sounds loaded into the system that you just wouldn't be able to produce with natural instruments."
- **PPG 2.3 with Waveterm:** "The PPG really crunches up the top-end of a digitized sound — you can hear the aliasing on the high frequencies. And that gives the keyboard a very unique color. Even though that's not what you really want from a sampling synthesizer, I think that you can afford to have a toy that makes that 'weird noise' — the PPG makes some real 'electronic' sounds."
- **Roland JX-8P:** "I'm amazed; I've been using the 8P on a Lionel Richie session for some bass color [instead of a Minimoog]. I also use it for real top-end, bright bell colors. I know it may sound like I'm contradicting myself to say the 8P sounds real bright and bassy, but on the top-end it's real 'sparkly' and as close to a DX7 or PPG sound as I've found from an analog synth."
Other keyboards Boddicker still finds very useful include an ARP String Ensemble and Minimoog.
---
**Cipher Digital’s Model 735CD Time-Code Reader/Event Controller**
The Model 735CD — a full-function, full-speed Time Code Reader with eight-channel event controller/coincidence detector — incorporates features you won't find anywhere else — at any price. Easily programmed from the front panel or optional RS-232/422 serial port, the 735CD provides frame-accurate, contact-closure control of remotely activated devices.
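Frame-accurate event control of this kind boils down to comparing each incoming SMPTE frame against a programmed event list and firing a contact closure on a match. A schematic sketch (the event list and relay labels are hypothetical, not the 735CD's actual programming model):

```python
def to_frames(h, m, s, f, fps=30):
    """Convert SMPTE hours:minutes:seconds:frames to a running frame count."""
    return ((h * 60 + m) * 60 + s) * fps + f

# Programmed events: frame number -> contact closure to fire.
events = {
    to_frames(0, 1, 0, 0): "close relay 1",
    to_frames(0, 1, 2, 15): "close relay 2",
}

def on_frame(frame):
    """Called once per incoming time-code frame; returns a closure, if any."""
    return events.get(frame)
```

Because the comparison happens once per frame, an event can never fire early or late by more than a frame, which is what "frame accurate" means in practice.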
**TYPICAL APPLICATIONS**
**Video Production:**
- Character Generators
- Animation Stands
- Switchers
- Special Effect Generators
**Machine Control:**
- Activating VTR's, Film Chains, etc.
- Multiple ATR Sequencing
- Time-of-Day Events
- Alarms
**Invaluable. Incomparable.**
*In stock at $2,160.*
For detailed information or demonstration of the innovative Model 735CD, contact our Sales Department:
Cipher digital
Sales/Marketing Headquarters:
215 First Street • Cambridge, MA 02142
Tel: (617) 491-8300 • Telex: 928166
— an SMI group company —
For additional information circle #171
Northeast:
- **KAJEM RECORDING** (Gladwyne, Pennsylvania), claimed to be Philadelphia's only Solid State Logic-equipped facility, has added an AMS RMX-16 digital reverb system and DMX 15-80S digital delay and sampling unit to its outboard rack, plus a Summit Audio Tube Limiter, Ensoniq Mirage sampling synthesizer, Rocktron HUSH II noise reduction, and a Lexicon Prime Time II digital effects unit. 1400 Mill Creek Road, Gladwyne, PA 19035 (215) 642-2346.
- **CRYSTAL CITY TAPE DUPLICATORS** (Huntington, New York) has added new Otari DP-85 slave transports to its DP-7500 high-speed cassette duplication system. The new slaves are equipped with Dolby HX Headroom Extension circuitry to improve high-frequency response of the duplicated material. According to Douglas Young, the facility's VP of marketing, the new equipment will double Crystal City's daily production capability. 48 Stewart Avenue, Huntington, New York 11743 (516) 421-0222.
- **SADDLE RIVER MUSIC** (Paramus, New Jersey) has relocated its facility, and acquired a New England Digital Synclavier II Digital Music System with 32-track memory recorder, 50-kHz sampling rate, and SMPTE timecode capabilities. In addition, an Otari MX-5050 Mk III and MB-5050 half-track have been linked to the facility's new Soundcraft Series 200 console. Partners Neil Fishman and Harvey Edelman say that they anticipate additional work in music creation and radio/video production. 105 Fairview Avenue, Paramus, NJ 07652 (201) 843-3880.
Southeast:
- **SHEFFIELD AUDIO-VIDEO PRODUCTIONS** (Phoenix, Maryland) has added AMS RMX-16 and Lexicon Model 200 digital reverb systems, the latter for use in the facility's new digital audio remote truck, equipped with a Sony PCM-3324 DASH-format 24-track. Recent modifications to Studio A's control room include the addition of more RPG Sound Diffusers. 13816 Sunnybrook Road, Phoenix, MD 21131 (301) 628-7260.
South Central:
- **TIM STANTON AUDIO** (Austin, Texas) has upgraded from one-inch/16-track format to two-inch/16-track operation with the purchase of a new Sony/MCI JH-24/16 multitrack with Autolocator III remote control. Recently added reverb systems comprise an Ursa Major 8X32 with remote, and ART 01A units. Other new equipment includes eight Galex Audio noise gates, four dbx Model 160X limiters, and a Roland SDE-300 digital delay. West Fifth St., Suite 103, Austin, TX 78703 (512) 477-5618.
Midwest:
- **SWEETWATER SOUND** (Fort Wayne, Indiana) has added an AMEK TAC Matchless console and Soundcraft Series 760 24-track, and is offering a special introductory rate this month of $40 per hour for recording time. The studio's synthesizer and drum machine array includes a Kurzweil 250 (reportedly the only such digital sampling keyboard system owned by a studio in Indiana), Yamaha DX7, Roland JX-8P, Moog Memorymoog, LinnDrum and Yamaha RX11; all instruments are available for
Now you too can have that fabulous, modern Turbosound "look"!
We weren't searching for a new look when we developed the design principles embodied in Turbosound reinforcement modules. We were simply looking for a better method of reproducing the dynamic range, frequency spectrum and transient attack of live music.
The Turbo (not just a phasing plug) in our TurboMid™ device is the most identifiable result of our efforts, which is one reason why we're seeing so many imitations these days.
But the design developments that produce the patently superior sound performance of every Turbosound full range enclosure are the elements you can't see.
Tacking a simple phasing plug onto an obsolete design, for example, does not magically endow a speaker system with the ability to reproduce a seamless midrange from 250 to 3700 Hz, or to project sound further and more accurately. That kind of performance results only when our proprietary 10" driver, exponential horn and Turbo, all designed on Turbosound principles, function as an integral system — the TurboMid™ device.
Nor will adding a phase plug to a box enable it to generate the astonishing amount of clear, musical bass our TurboBass™ device does. And it certainly won't deliver the energy of live performance with the definition and dimension of a Turbosound reinforcement system. It's not the way an enclosure looks, but the reason it looks that way that's important.
Only Turbosound is Turbosound.
* TurboMid™ and TurboBass™ devices are covered worldwide by Principle Patents and not simple design patents. The concepts embodied in these designs are, therefore, entirely unique.
See Turbosound literature for full information.
Turbosound, Inc. 611 Broadway #411, New York, New York 10012 (212) 460-9940 * Telex: 969127
Turbosound Sales Ltd. 202-208 New North Road, London N1 7BL (01) 226-3840 * Telex: 265612
Worldwide distributors: Australia: Creative Audio Pty Ltd. Tel: (03) 354-3987 * Austria: Audiosales Tel: (2236) 888145 * Canada: Belisle Acoustique Tel: (514) 691-2584 * China: Expotus Ltd. Tel: (01) 405-9665 * Denmark: PerMeistrup Production Tel: (02) 151300 * France: Regscene Tel: (01) 374-5836 * Greece: Alpha Sound Tel: (01) 363-8317 * Holland: Ampco Holland b.v. Tel: (30) 433134 * Hong Kong: Expotus Ltd. Tel: (01) 405-9665 * Israel: Barkai Ltd. Tel: (03) 732044/735178 * Italy: Audio Link Tel: (521) 772009 * Japan: Matsuda Trading Co. Tel: (03) 295-4371 * Korea: Bando Pro-Audio Inc. Tel: (02) 274-5541 * Norway: Nordtek Equipment A/S Tel: (02) 2311590 * Singapore: Indonesia/Malaysia: Atlas Sound Pre Ltd. Tel: 220-4484 * Spain: Lexon S.A. Tel: (03) 203-4804 * Sweden: InterMusic a.b. Tel: (5111) 3420 * Switzerland: Studio M + M Tel: (64) 415-722 * UK: MST Tel: (0223) 314414 * USA: Turbosound Inc. Tel: (212) 460-9940 * W Germany: Adam Hall (Audio Developments) GmbH Tel: (69) 811 19531
For additional information circle #172
use by visiting engineers and producers at no extra charge. A Fostex B-16 half-inch/16 track also is scheduled for installation in the facility's Studio B later this month. 2350 Getz Road, Fort Wayne, IN 46804 (219) 432-8176.
KOPPERHEAD PRODUCTIONS (North Canton, Ohio) has completed a major expansion of its facility, and now offers 24-track recording in a new room designed by John Storyk. New control-room equipment includes a Sony/MCI JH-24 multitrack with dbx noise reduction and Autolocator III remote control, and a 32-input, transformerless Soundcraft Series IIB console; outboard equipment includes a Lexicon 224 digital reverb, EMT echo plates, AKG, MicMix and Orban analog reverb units, a UREI 1176LN, Audio-Design/Calrec Vocal Stressor, dbx Model 161 and 162 compressor/limiters, Valley People Kepex and DynaMite noise gates, Eventide Harmonizer and Instant Flanger, Orban Stereo Synthesizer and Parametric Equalizer, plus microphones from Neumann, AKG, Sony, Shure, Sennheiser and Electro-Voice. The company will continue to offer 16-track recording sessions with a Tascam 90-16, as a low-cost alternative for demo sessions, while custom music production with a New England Digital Synclavier II digital synthesizer system also will be offered. 935 Schneider Road NW, North Canton, OH 44720 (216) 494-8760.
SOLID SOUND (Ann Arbor, Michigan) has acquired the following audio-for-video post-production equipment: an Adams-Smith 2600 modular timecode/synchronizing system, which consists of generators, readers, synchronizers and character inserters; NEC 2505, 1905 and 1305 monitors; an Otari MTR-12 Mk IV multitrack; and a Sony custom-Alphatized 5850 U-Matic VCR. P.O. Box 7611, Ann Arbor, MI 48107 (313) 662-0667.
BEAT 'N TRACK (Pittsburgh) has added an MXR digital reverb and audio exciter to its outboard gear rack. 111 LeBouf Drive, New Kensington, PA 15068 (412) 339-0814.
BREEZEWAY STUDIOS (Waukesha, Wisconsin) has purchased an AKG C-24 stereo tube condenser microphone, Sony PCM-F1 stereo digital processor, Klark-Teknik DN780 digital reverb, API preamp and Model 550A outboard equalizer, Tannoy NFM-8 studio monitors, Aphex Type B Stereo Aural Exciter, Studio Technologies AN-2 Stereo Simulator, and a Shure SM-7 dynamic mike. 363 West Main Street, Waukesha, WI 53186 (414) 547-5757.
Mountain:
GRAMMIE'S HOUSE (Reno, Nevada) is a new residential, 24-track music recording and audio-for-video studio scheduled to open in late October. According to co-owner Robert Forman, "We planned this new facility as a 'resort studio' that would be close to the Los Angeles recording community, but which would provide the feeling of getting away from the pressure of the Coast — yet be only an hour away [by air] from L.A. The building — which we have remodeled and furnished to look like a colonial-style house — has three bedrooms for clients, with a fourth now under construction. We also have arranged with the nearby MGM Grand Hotel to offer competitive rates for people who cannot be accommodated in the house. We also plan to offer in-house catering for lock-out sessions." Forman's partners in the venture, which has a reported total budget of $2 million, are James Nicholson and Sig Rogich, who has strong connections with the local and national advertising community. The new facility features a Chips Davis acoustic design that utilizes the principles of a "Live-End/Dead-End" type environment. The 650-square-foot control room boasts a 56-input mainframe Solid State Logic 6000E console currently fitted with 48 channel modules, Studio Computer fader automation, Total Recall, and a three-machine synchronizer. The new SSL console is described as the first such board to be delivered to a facility in the Western States. Other control-room equipment includes a pair of Studer A800 24-tracks for 46-track sessions, and two A820 mastering machines equipped with half-inch heads; a digital mastering machine is also under consideration.
Outboard gear consists of Lexicon 224XL and Advanced Music Systems RMX-16 digital reverbs; an Eventide SP2016 delay/special-effects processor, plus H910 and H949 Harmonizers; Drawmer noise gates; and an extensive collection of vintage microphones. The facility also boasts a Fairlight CMI Series III digital synthesizer with sampling, timecode synchronization and MIDI interface options. The studio area measures 1,200 square feet, and contains separate piano and drum booths, plus a utility isolation room. Covered Foley pits are provided for audio-for-video and film sweetening sessions. One wall of the studio is covered with RPG diffusor panels to provide a diffuse reflection pattern. The control-room design also features RPG diffusors and polycylinders mounted on the rear and side walls, respectively, to provide a "dead-end" behind the mixing position. A second, smaller studio measuring 800 square feet is currently in the planning stages, and will be designed for smaller music recording sessions and jingle/commercial work. 1515 Plumas, Reno, NV 89509. (702) 786-2622.
Southern California:
UNITEL (Los Angeles), a video post-production facility, has installed four new YNH ESC-02 mixing consoles in its pair of on-line video editing suites, off-line suite and telecine bay. Newt Bellis, the facility's West Coast president, explains that he selected the new boards because "they are very flexible, compact [and] computer-interfaced, [allowing] us to stay current with such imminent changes in the industry as stereo broadcasting." The modular-design rackmount desk comprises two 32-by-12 routers, and accepts six stereo audio sources for separate recording and monitoring, plus automatic crosspoint presets, LED readouts, built-in routers, three-band equalization, and an RS-232 computer interface used for remote audio level control. The company's L.A. facility is the most recent addition to its existing Pittsburgh and New York operations. The consoles were supplied by Audio Intervisual Design Systems, which has also equipped Pacific Video with a YNH board. 5555 Melrose Ave., Studio G, Los Angeles, CA 90038 (213) 468-4606.
The problem is noise, and it takes many forms. Cart machines, production decks, and video recorders; microwave STLs and telco lines — they're all noisy. A lot have hum, too.
dbx has the solution: Type II noise reduction. Type II doubles the dynamic range of the medium it's used with, which means at least 40 dB of noise reduction. You'll hear the results instantly. Indeed, a modest investment will make your current equipment sound virtually state-of-the-art.
What else would you expect from the company chosen to provide noise reduction for US stereo TV?
Get dbx Type II. Reduce your noise.
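The arithmetic behind "doubles the dynamic range" is a 2:1 compander: signal levels, measured in dB, are halved before the noisy medium and doubled after it, so program material is restored exactly while noise added in between is pushed down by the expander. A sketch with illustrative level figures (not dbx calibration values):

```python
def encode(level_db):
    """Record side: 2:1 compression of level, measured in dB."""
    return level_db / 2.0

def decode(level_db):
    """Playback side: 1:2 expansion restores the original level."""
    return level_db * 2.0

signal_db = -40.0        # program level (illustrative)
tape_noise_db = -55.0    # bare noise floor of the medium (illustrative)

restored = decode(encode(signal_db))   # program comes back unchanged: -40.0
noise_out = decode(tape_noise_db)      # noise passes only through the expander: -110.0
```

During quiet passages the medium's noise floor is, to a first approximation, doubled in dB by the expander, which is where the "at least 40 dB of noise reduction" figure comes from.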
For additional information circle #174
Professional Products Division
71 Chapel St.
Newton, Mass 02195 USA
Telephone: (617) 964-3210
Telex: 92-2522
EVERGREEN STUDIOS (Burbank) has replaced several low-frequency drivers in its existing UREI Model 813 monitors located in Studio A with Cetec Gauss Model 3588 15-inch coaxials and 4583A 15-inch woofers. The facility's head of engineering, Marc Gebauer, says that the updated speakers "sound better than anyone imagined. The top end is much smoother, [with a] tighter bottom-end." When audio consultant George Augspurger re-tuned the studio after the speakers were installed, Gebauer commented that "the room EQ controls were virtually flat." The facility is now designing its own cabinets for the Gauss loudspeakers — which will be bi-amped — and plans to replace the Studio B monitors with Gauss components and custom cabinets. 146 North Golden Mall, Burbank, CA 91502 (818) 842-5163.
POST GROUP SOUND (Hollywood) opened its new sound department on August 1, offering post-production audio services for film and video. The two new studios are fully "floated," and acoustically treated with a copyrighted "undulating full-wall acoustic diffusing system" designed by Jeff Cooper Architects, A.I.A. The studios themselves were designed by Post Group engineers Tamara Johnson and Peter Cole. A modified 48-track Neve 8128 desk (with NECAM automation) resides in Studio A, and a Neotek Elite board controls Studio B. An Otari MTR-90 24-track and MTR-20 four- and two-tracks (reported to be the first Otari center-track timecode machines to be delivered in this country) are shared by both studios, with CMX 340 machine control throughout the entire facility. Outboard gear includes Adams-Smith synchronizers, UREI 813B studio monitors, a Quantec QRS Room Simulator, Lexicon Super Prime Time and Model 1224 digital processors, Dolby and dbx processing, and Sony BVH-2000 and BVU-800 video recorders. 6335 Homewood Avenue, Los Angeles, CA 90028 (213) 462-2300.
SMOKETREE STUDIO (Chatsworth) has installed a new Neve 8078A 76-input console linked to the facility's existing two Studer A800 24-track machines. Chatsworth, CA (818) 998-2097.
Northern California:
FIG TREE PRODUCTIONS (Aptos) plans to open its MIDI-based composing and recording studio to outside songwriters and musicians for developing arrangements and recording song demos. Previously, the facility was used primarily for in-house productions. The keyboard-oriented studio is based around a 36-track sequencing/editing software system running on a Commodore SX-64 computer. Other instruments include a Yamaha KX88 Master Keyboard Controller and TX7 synthesizer, Roland Juno-106 synth, Sequential Six-Trak synthesizer and Drumtraks drum machine, plus Korg and Roland synchronizing and interface units. Recording hardware is based around a Fostex four-track, Tascam two-track, DeltaLab digital delay, E-V/Tapco spring reverb, UREI limiters, and TOA monitor loudspeakers. P.O. Box 1634, Aptos, CA 95003.
Cougars Run
Lake Tahoe
AN ARTIST'S RETREAT
Carl Yanchar/Lakeside Associates
Design and Construction
NEVE 8108 with NECAM I
3 Bedrooms, Lounge, Hot Tub, Heated Pool,
Clean Air, and Fresh Mountain Spring Water
Minutes to Fantastic Skiing & Year-Round Recreation
3 Acres of Freedom, Fresh Air, and Privacy in
a Park-Like Setting Overlooking Lake Tahoe.
FOR INFORMATION REGARDING BOOKING, CONTACT:
JODY EVERETT PETERSON
General Manager
702/832-7711 or write
P.O. Box 7418, Incline Village, NV 89450.
An Inspiring and Spectacular View of Lake Tahoe
From the Studio/Control Room.
For additional information circle #175
We've earned our great track record mixing great record tracks
After twenty-five years, Neve remains the industry standard in audio mixing; a name synonymous with quality sound.
Those who have benefitted most from our achievements are those who have contributed most to our success: our customers.
Twenty-five years is a long time to stay on top in a technically-sensitive business where the demands for greater track capacity and processing power show no signs of slowing down.
That's why today, as always, Neve responds: to our customers, our industry, ourselves.
We respond with the finest totally instinctive automation system, Necam; the state-of-the-art in digital recording, DSP; and millions of dollars in analog and digital research to ensure our record of leadership in the studios of today and tomorrow.
For twenty-five years, respected record producers and engineers have relied on Neve for audio excellence and the extra conveniences that help deliver a hit.
We've earned quite a reputation from their achievements. After twenty-five years we're still leading the race in audio technology. And we're about to establish even greater records!
Neve
RUPERT NEVE INCORPORATED Berkshire Industrial Park Bethel, CT 06801 (203) 744-6230 Telex 669638 Facsimile (203) 703-7863 • 7533 Sunset Blvd., Hollywood, CA 90046 (213) 874-8124 • P.O. Box 40108 Nashville, TN 37204 (615) 365-2727 Telex 786529 • RUPERT NEVE OF CANADA, LTD. represented by Sonotechnique • 2585 Boulevard St. Joseph 304, Montreal, PQ H3S 1A9 Canada (514) 739-3368 Telex 055-62171 • NEVE ELECTRONICS INTERNATIONAL, LTD. Cambridge House, Melbourn, Royston, Hertfordshire SG8 6AU, England Phone (0263) 60776 • RUPERT NEVE GmbH 6100 Darmstadt Bismarckstrasse 114, West Germany Phone (06151) 81764.
THE FINAL STAGE
The R-e/p Buyer's Guide of Cutting and Mechanical Services
- MASTERING • PRESSING •
- TAPE DUPLICATION •
- PACKAGING •
R-e/p's Unique Directory Listing of Disk Cutting and Tape Duplicating Services — the kind of services all recording and production facilities require as the "Final Stage" in the preparation of marketable audio product.
Key to Services:
DM = Disk Mastering
TD = Tape Duplication
PL = Plating
PR = Pressing
PK = Packaging
CD = CD Preparation
To be included in the next edition of "The Final Stage" send details to Rhonda Kohler, RECORDING Engineer/Producer, P.O. Box 2449, Hollywood, CA 90078. (213) 467-1111.
WORLD RECORDS
FREE MANUFACTURING and PACKAGING BOOK
Fully illustrated. The essential pricing and planning guide for all YOUR custom record and tape needs. Albums, 45's, EP's, 12" Singles, CrO₂ Cassettes, Jackets. All completely described and package priced, fully guaranteed to give 100% satisfaction. Our 16th year of providing dependable, fast, competitively priced service.
CALL NOW
1-800 263-7798
TOLL FREE
WORLD RECORDS
Baseline Rd. W., Bowmanville, Ont. L1C 3Z3
For additional information circle #178
DICK CHARLES RECORDING
130 W. 42nd St. #1106
New York, NY 10036
(212) 819-0920 DM, TD
COOK LABORATORIES, INC.
375 Ely Ave.
Norwalk, CT 06854
(203) 853-3641 DM, TD, PL, PR, PK
CREST RECORDS, INC.
220 Broadway
Huntington Station, NY 11747
(800) 645-5318 DM, TD, PL, PR, PK
CRYSTAL CITY TAPE DUPLICATORS, INC.
48 Stewart Ave.
Huntington, NY 11743
(516) 421-0222 TD
CUE RECORDINGS, INC.
1156 Ave. of Americas
New York, NY 10036
(212) 921-9221 TD
THE CUTTING EDGE
P.O. Box 217
Ferndale, NY 12734
(914) 292-5965 DM, TD, PL, PR, PK
O&G MASTERING
P.O. Box 370
Englishtown, NJ 07726
(201) 466-2411 DM, PL, PR
DIGITAL BY DICKINSON
Box 547, 9 Westhouse Plaza
Bloomfield, NJ 07003
(201) 429-8996 CD
DISKMAKERS
925 N. 3rd St.
Philadelphia, PA 19123
(800) 468-9353 TD, PR
DYNAMIC RECORDING
2846 Dewey St.
Rochester, NY 14616
(716) 621-6270 TD, PR
EXECUTIVE RECORDING, LTD.
300 W. 55th St.
New York, NY 10019
(212) 247-7434 DM
ESP
ABSOLUTELY the BEST QUALITY and SERVICE at ABSOLUTELY the BEST PRICES FREE BOXES with any order
Real Time Cassette Duplication Printing and Packaging
26 Baxter Street
Buffalo, NY 14207 (716)876-1454
EUROPADISK, LTD.
75 Varick Street
New York, NY 10013
Audiophile pressing
Exclusively on imported TELDEC vinyl
Licensed for DMM Central Plating and Pressing
FRANKFORD/WAYNE MASTERING LABS
Computerized Disc Mastering
Philadelphia (215) 561-1794 • New York (212) 582-5473
FORGE RECORDING STUDIOS, INC.
P.O. Box 861
Valley Forge, PA 19481
(215) 644-3266 935-422 TD
GEORGE HEID PRODUCTIONS
(412) 561-3399
Otari mastering and bin-loop duplication. AGFA 611, 612, Magnetite and 627; BASF True Chrome Pro II. We're dedicated to the finest stereo duplication at truly competitive prices.
HUB-SERVALL RECORD MFG
Cranbury Rd.
Cranbury, NJ 08512
(609) 655-2166 PL, PR
IAN COMMUNICATIONS GROUP, INC.
10 Upton Drive
Wilmington, MA 01887
(617) 658-3700 TD
MARK CUSTOM RECORDING SERVICE
10815 Bodine Rd.
Clarence, NY 14031
(716) 659-2600 DM, TD, PR, PL, PK
MASTER CUTTING ROOM
32 W. 44th St.
New York, NY 10036
(212) 581-6505 DM
MASTERDISK CORPORATION
16 West 61st St.
New York, NY 10023
(212) 541-5022 DM
If you're reaching for Gold or Platinum, first reach for AGFA PEM 469
Because there's never been a mastering tape like it. Agfa PEM 469 captures your sound perfectly in its complete dynamic range: It's everything you've always wanted. Reach...and you'll succeed...with Agfa PEM 469. The only thing standard is the bias.
AGFA-GEVAERT 275 NORTH STREET, TETERBORO, N.J. 07608 (201) 288-4100
For additional information circle #177
October 1985 R-e/p 115
Professional audio from the number one supplier
Westlake Audio Professional Sales Group
Sales:
Westlake's sales staff is ready to supply you with up-to-date information regarding new equipment, its features, availability and competitive prices.
Demonstration Facilities:
Unequaled in the industry are Westlake's demonstration facilities—from Audio/Video sweetening to demo production, broadcast to world class studio equipment.
Service:
Before and after the sale, Westlake's technical staff is at work to assure a professional interface of the equipment to your system. Our staff is familiar with all of the various technologies in use today.
Ampex, 3M, MCI/Sony, Otari, Soundcraft, JBL, U.R.E.I., Westlake Audio, Aphex, AKG, Neumann, Sennheiser, Shure, White, Eventide, Lexicon, Crown, BGW, A.D.R., Yamaha, BTX, Valley People, DBX, Bryston, Studer/ReVox and many other professional lines.
from acoustic design to down beat...
Westlake Audio Professional Audio Sales Group
7265 Santa Monica Boulevard
Los Angeles, California 90046
(213) 851-9800 Telex: 698645
RECORDING COMPANY
Madison Ave
New York, NY 10017
(212) 308-2300 DM, PL, PR, PK
PETER PAN INDUSTRIES
135 Korman St
Newark, NJ 07105
(201) 344-4214 DM, PR, PL
QUIK CASSETTE CORP
250 W 57th St Rm 1400
New York, NY 10019
(212) 977-4411 TD
RESOLUTION, INC
1 Mill St — The Chase Mill
Burlington, VT 05401
(802) 882-8881 TD, PK
SOUND TECHNIQUE, INC
130 W 42nd St
New York, NY 10036
(212) 865-1323 DM
SOUNDETEK INC
1780 Broadway
New York, NY 10019
(212) 489-0806 DM, TD, PL, PR, PK, CD
SOUNDWAVE RECORDING STUDIOS, INC
2 West 45th St #903
New York, NY 10036
(212) 730-7360
SPECTRUM MAGNETICS, INC.
1770 Lincoln Highway, East
P.O. Box 218
Lancaster, PA 17603
(717) 296-9275 TD, PK
Toll-Free 800-441-8854
BASF CHROME a specialty
Your audio cassette company!
STERLING SOUND INC.
1790 Broadway
New York, NY 10019
(212) 757-8519 DM
SUNSHINE SOUND, INC
1650 Broadway
New York, NY 10019
(212) 582-6227 DM, PL
TRACY-VAL CORPORATION
201 Linden Ave
Somerdale, NJ 08083
(609) 627-3000 PL
AMERICAN MULTIMEDIA
Route 8, Box 215-A
Burlington, NC 27215
(919) 229-5559 TD
PAT APPLESON STUDIOS INC
1000 N W. 159th Drive
Miami, FL 33169
(305) 625-4435 DM, TD, PL, PR, PK
COMMERCIAL AUDIO
77 S. Witchduck Rd
Virginia Beach, VA 23462
(804) 497-6506 TD
CUSTOM RECORDING AND SOUND, INC
1225 Pendleton St
P.O. Box 7617
Greenville, SC 29610
(302) 269-5018 TD
GEORGIA RECORD PRESSING
262 Rio Circle
Decatur, GA 30030
(404) 373-2673 PR, PK
KINURA RECORDS
377 Westward Dr
Miami Springs, FL 33166
(305) 887-5329 DM, TD, PL, PR, PK
MAGNETIX CORPORATION
770 West Bay St
Winter Garden, FL 32787
(305) 656-4494 TD, PK
MIAMI TAPE INC
8180 N.W. 103 St
Hialeah Gardens, FL 33016
(305) 558-9211 TD, DM, PR, PL, PK
MUSIC PEOPLE STUDIOS
932 Woodlawn Rd
Charlotte, NC 28209
(704) 527-7359 TD, PK
NATIONAL CASSETTE SERVICES
613 N Commerce Ave / P O Box 99
Front Royal, VA 22630
(703) 635-4181 TD, PK
PROGRESSIVE MUSIC STUDIOS
2116 Southview Ave
Tampa, FL 33606
(813) 251-8093 TD, PK
SMITH & SMITH SOUND STUDIOS
214 Doverwood Rd
Fern Park, FL 32730
(305) 331-6380 TD, PK
South Central:
A&R RECORD & TAPE MANUFACTURING
902 N. Industrial Blvd
Dallas, TX 75207
(214) 741-2027 DM, TD, PL, PR, PK
ARDENT MASTERING, INC
2000 Madison Ave
Memphis, TN 38104
(901) 725-0855 DM
CASSETTE CONNECTION
41 Music Square East
Nashville, TN 37203
(615) 248-3131 TD
CREATIVE SOUND PRODUCTIONS
9000 Southwest Freeway, Suite 320
Houston, TX 77074
(713) 777-9975 TD
FINALLY... C-Zeros You Can Trust!
Our shells are engineered to give you:
★ Perfect azimuth control
★ Smooth and uniform loading at ultra high speeds
★ Easy, direct on-cassette printing or labelling
★ The most competitive pricing
Also available are pancake and bulk loaded premium cassettes
Call Us For Free Samples!
Jordax California Inc.
1513 Sixth St., Suite 204
Santa Monica, CA 90401
Phone: (213) 393-1572
DON'T READ THIS!!!
UNLESS YOU WANT THE BEST ALBUMS AND TAPES AT THE BEST STUDIO PRICES AVAILABLE. COMPLETE ALBUM AND TAPE PACKAGE.
—1 to 4 color U.V. Coated Jackets—
—Stock Jackets—
—7" Records and printed sleeves—
—Cassettes and 1 to 4 color inserts—
Call STORER PROMOTIONS Collect (513) 621-6389 for FREE information and quotations. "If you want more than good you want the BEST!"
Storer Promotions
2149 W. Clifton Ave., Cinti., OH 45219
P.O. Box 1511, Cinti., OH 45201
(513) 621-6389
THE QUALITY PACKAGE
1,000 pure vinyl records in paper sleeves
One color printed labels
All metal parts and processing
Mastering with Neumann VMS70 lathe & SX74 cutter
7" 45 RPM Record Package $399. (FOB Dallas)
12" Album Package Records and Printed Covers $1372.
(To receive this special price, this ad must accompany order.)
12" 33-1/3 Album Package includes full color stock jackets or custom black and white jackets.
Package includes full processing.
Re-orders available at reduced cost.
We make full 4-color Custom Albums, too!
For full ordering information call 1-800-527-3472
RCA Test Tapes
A Sound Measure
• Cassettes
• Open Reel up to 1"
• Custom Formats
For a catalogue of standard test tapes or further information contact:
RCA TEST TAPES
DEPT R
6550 E. 30th St.
Indianapolis, IN 46219
(317) 542-6427
DISC MASTERING, INC.
30 Music Square West
Nashville TN 37203
(615) 254-8825 DM
DUPLI-TAPES INC.
1545 Bissonnet, Suite 104
Bellaire TX 77401
(713) 432-0435 TD
HIX RECORDING CO., INC.
1611 Herring Ave
Waco TX 76708
(817) 756-5303
MASTERCRAFT RECORDING CORP.
437 N Cleveland
Memphis TN 38104
(901) 274-2100 DM
MASTERFONICS
28 Music Square East
Nashville TN 37203
(615) 327-4533 DM, CD
MUSIC SQUARE MFG. CO.
50 Music Square West, Suite 205
Nashville TN 37203
(615) 242-1427 CD, DM, TD, PR, PL, PK
NASHVILLE RECORD PRODUCTIONS
469 Chestnut St.
Nashville TN 37203
(615) 259-4200 TD, DM, PK, PL, PR
TRUSTY TUNESHOP RECORDING STUDIO
Rt. 1, Box 100
Nebo KY 42441
(502) 249-3194 TD
Midwest:
A&F MUSIC SERVICES
2834 Otego
Pontiac MI 48054
(313) 682-9025 TD
AARD-VARK RECORDING INC.
335 S Jefferson
Springfield MO 65806
(417) 866-4104 TD, PK
ACME RECORDING STUDIOS
3821 N Southport
Chicago IL 60613
(312) 477-7333 TD, PK
ARC ELECTRONIC SERVICES
2557 Knapp N.E.
Grand Rapids MI 49505
(616) 364-0022 TD
AUDIO ACCESSORIES CO
38WS15 Deerpath Rd.
Batavia, IL 60510
(312) 879-5098 TD, PK
AUDIO GRAPHICS
13801 E. 35th St.
Independence MO 64055
(816) 254-0400 TD, PK
BODDIE RECORD MFG. & RECORDING
12202 Union Ave
Cleveland OH 44105
(216) 752-3440 DM, TD, PL, PR
DIGITAL AUDIO DISC
1800 N. Fruitridge
Terre Haute, IN 47804
(812) 466-6821 CD
ELEPHANT RECORDING STUDIOS
21206 Gratiot Ave.
East Detroit, MI 48021
(313) 773-9386 TD
HANF RECORDING STUDIOS, INC
1825 Sylvania Ave
Toledo, OH 43613
(419) 474-5793 TD
INDUSTRIAL AUDIO, INC.
6228 Oakton
Morton Grove, IL 60053
(312) 965-8400 TD
JRC ALBUM PRODUCTIONS
1594 Kinney Ave.
Cincinnati OH 45231
(513) 822-9336 DM, PR, PK
KIDERIAN RECORDS PROD
4926 W. Gunnison
Chicago IL 60630
(312) 399-5535 DM, TD, PL, PR, PK
MAGNETIC STUDIOS, INC
4784 N. High St
Columbus OH 43214
(614) 262-8607 TD
MEDIA INTERNATIONAL, INC
247 E. Ontario
Chicago, IL 60611
(312) 487-5430 TD, PK
MIDWEST CUSTOM RECORD PRESSING CO
P.O. Box 92
Arnold MO 63010
(314) 464-3013 TD, PL, PR, PK
MOSES SOUND ENTERPRISES
270 S. Highway Dr.
Valley Park, MO 63088
(314) 225-5778 TD
MUSICOL, INC.
780 Oakland Park Ave
Columbus, OH 43224
(614) 267-3133 DM, TD, PR, PK
NORWEST COMMUNICATIONS
123 South Hough St.
Barrington, IL 60010
(312) 381-3271 TD
PRECISION RECORD LABS, LTD.
932 West 38 Place
Chicago, IL 60609
(312) 247-3033 DM, TD, PR, PL, PK
Q.C.A. CUSTOM PRESSING
2832 Spring Grove Ave
Cincinnati, OH 45225
(513) 681-8400 DM, TD, PL, PR, PK
RITE RECORD PRODUCTIONS, INC
9745 Mangham Drive
Cincinnati, OH 45215
(513) 733-5533 DM, TD, PL, PR, PK
RON ROSE PRODUCTIONS
29277 Southfield Rd.
Southfield, MI 48076
(313) 424-8400 TD
SOLID SOUND INC.
P.O. Box 7611
Ann Arbor, MI 48107
(313) 662-0669 TD
SONIC SCULPTURES
636 Northland Blvd.
Cincinnati, OH 45240
(513) 851-0055 DM
STANG RECORDS MANAGEMENT
P.O. Box 256577
Chicago, IL 60625
(312) 399-5535 CD, TD, DM, PL, PR, PK
STORER PROMOTIONS
P.O. Box 1511
Cincinnati, OH 45202
(513) 621-6389 DM, TD, PR, PL, PK
STUDIO 91
University Blvd
Berrien Springs, MI 49104
(616) 471-3402 TD
STUDIO PRESSING SERVICE
320 Mill St.
Lockland, OH 45215
(513) 793-4944 TD, DM, PR, PL
SUMA RECORDING STUDIO
5706 Vrooman Rd.
Cleveland, OH 44077
(216) 951-3955 DM, TD, PL, PR, PK
TRIAD PRODUCTIONS
1910 Ingersoll Ave.
Des Moines, IA 50309
(515) 243-2125 TD
Mountain:
BONNEVILLE MEDIA COMMUNICATIONS
130 Social Hall Ave
Salt Lake City, UT 84111
(801) 237-2677 TD
THE FINAL STAGE
DIGITAL AND REAL-TIME CASSETTE DUPLICATION BY GRD
FOR TRUE REALISM AND PURITY PHONE
602-252-0077
P.O. BOX 13054
PHOENIX, ARIZONA 85002
AUDIO CASSETTE DUPLICATORS
5816 Lankershim Blvd. #7
North Hollywood, CA 91601
(818) 762-2232 TD
AUDIO VIDEO CRAFT INC
7000 Santa Monica Blvd.
Los Angeles, CA 90038
(213) 466-6475 TD
AWARD RECORD MFG. INC
5200 W 83rd St
Los Angeles, CA 90045
(213) 645-2281 DM, TD, PL, PR, PK
BAMCO RECORDS
1400 S Citrus Ave
Fullerton, CA 92633
(714) 738-4257 PR
BUZZY'S RECORDING SERVICES
6900 Melrose Ave
Los Angeles, CA 90038
(213) 931-1867 TD
CMS DIGITAL RENTALS INC
453-E Wapello St
Altadena, CA 91001
(818) 797-3046 CD
CAPITOL RECORDS STUDIOS
1750 N Vine St
Hollywood, CA 90028
(213) 462-6252 DM, TD
CASSETTE PRODUCTIONS UNLIMITED
465 Delacey St, Suite 24
Pasadena, CA 91105
(818) 449-0893
CUSTOM DUPLICATING INC
3404 Century Blvd
Inglewood, CA 90303
(213) 670-5375 TD, PK
DYNASTY STUDIO
1614 Cabrillo Ave
Torrance, CA 90501
(213) 328-6836 TD
FILAM NATIONAL PLASTICS INC
13984 S Orange Ave
Paramount, CA 90723
(213) 630-2500 PK
FUTURE DISC SYSTEMS
COMPLETE ANALOGUE & DIGITAL MASTERING SERVICES FOR COMPACT DISC, RECORD & CASSETTE MANUFACTURING
3475 CAHUENGA BLVD WEST
HOLLYWOOD, CA 90068 (213) 876-8733
BERNIE GRUNDMAN MASTERING
6054 Sunset Blvd
Hollywood, CA 90028
(213) 465-6264 CD, DM
HITSVILLE STUDIOS
7317 Romaine St
Los Angeles, CA 90046
(213) 850-1510 DM, CD
JVC CUTTING CENTER
6363 Sunset Blvd. #500
Hollywood, CA 90028
(213) 467-1166 DM, CD
INOVONICS
Replacement Tape Electronics for:
- Ampex and Scully Magnetic Recorders
- High Speed Tape Duplicating
- Playback Only
- Mag Film
EXCLUSIVE DISTRIBUTION BY MARCOM
P.O. BOX 66507
SCOTTS VALLEY, CA 95066 (408) 438-4273
POLYSET Audio Cassettes
PROFESSIONAL QUALITY in 46 Stock Lengths
C-02 thru C-92
In One Minute Per Side Increments
POLYSET DIV.
of Polyline Corp.
1233 Rand Road
Des Plaines, IL 60016
(312) 298-3073
For additional information circle #185
Sony Out-Sonys Sony
That's right, Sony's new CCP-110 audio cassette copier packs more features in a smaller and lighter package than its predecessor, the Sony CCP-100. Sony's electronic wizardry tells you if the recording cassette is too short, but only when there is a signal on the master. If in doubt, the CCP-110 stops right there so you can check the copy. End of audio sensing and track select let you combine and edit masters simply and automatically. And there's more: You can add a two-copy slave (CCP-112) and both models include the exclusive Sony brushless and slotless (BSL) motors and, of course, the record and playback heads carry the EXCLUSIVE SONY TWO-YEAR WARRANTY against head wear.
Call us for the name of the Authorized Sony Dealer near you.
SONY AV PRODUCTS NATIONAL DISTRIBUTOR
educational electronics corporation
P.O. BOX 339 • INGLEWOOD, CA 90306-0339 • (213) 671-2636
For additional information circle #187
We Listened To You...
For years, through our customers and market research, we have been listening in order to find out what you like and what you dislike about your tape duplicating equipment (ours or theirs) and to know what features you would include in the tape duplicator of your dreams.
The result is the 7000 Series by Magnefax
but don’t take our word for it.
Listen To Us
To discover the new Magnefax, send for the whole story and a demo cassette.
Although the first multitrack digital recorders, manufactured by 3M, were being delivered to recording studios during the early months of 1979, they were not used in motion picture post production at Hollywood facilities until 1982. An exception was the digital recording of motion picture scores; the music for such films as *The Black Hole* and *Annie* was recorded with the 3M Digital Mastering System, with conventional 35mm analog mixdowns used in the final film mix.
Probably the first film to utilize a completely digital soundtrack was the rescoring of Walt Disney's *Fantasia* in early 1982 (described by this author in the October 1982 issue of *R-e/p*). The new score, conducted by Irwin Kostal, was recorded and mixed by Shawn Murphy on 32-track 3M Digital Mastering Systems at CBS Studio Center in Studio City, CA, in January of that year, with subsequent post-production at Evergreen Studios and Walt Disney Productions.
The primary release of the rescored *Fantasia* was in 35mm, Dolby-encoded four-track magnetic prints that were sounded from a 35mm four-track printing master. In other words, what was heard in theaters was all-digital plus two analog generations.
The digital re-recording of *Fantasia* was made possible in part by R&D that previously enabled Disney's trio of 3M digital multitracks to be used in the re-recording and music recording of all of the films prepared for the EPCOT Center, Florida. Since 1980, Disney has digitally recorded the scores for almost all of its feature films, including *Splash*, *Country*, and *The Black Cauldron*.
In 1984, three films were released that had used Sony PCM-3324 24-track digital recorders during post-production. Sound for two of these films — *Digital Dream* and *Metropolis* — was completely digital up to the analog print, while *Stop Making Sense* was recorded 24-track analog, the digital multitracks being used for all post production up to the final Dolby Stereo optical negative. (It might interest some readers to note that the Dolby Stereo Lt-Rt printing masters for *Stop Making Sense* and *Metropolis* were Dolby-encoded on the digital multitracks to allow for a 1:1 transfer during the subsequent shooting of optical negatives, without having to encode the tracks being replayed from the digital master. The 35mm Dolby Stereo prints of *Digital Dream*, on the other hand, were sounded from an analog 35mm Lt-Rt mix.)
*Digital Dream* was produced at the new Glen Glenn sound facility in Hollywood, utilizing Sony digital multitracks for all music, Foley and ADR recording, plus re-recording. (Although portable Nagra analog tape machines were used to record production sound, all dialog was later replaced in ADR.) Sound effects were recorded on Sony PCM-F1 processors, and bumped up to PCM-1610 format in order to interlock the resultant ¾-inch U-Matic cassettes with the Glen Glenn PAP (Post Audio Production) system that had been modified for use with the digital machines.
Giorgio Moroder's rescoring of the Fritz Lang classic, *Metropolis*, was recorded and pre-mixed using the three Sony digital multitracks at the producer's Oasis Studio in North Hollywood, CA, with four-track LCRS (left-center-right-surround) pre-mixes being checkerboarded onto eight tracks of the digital edit master. Next, the two pre-mixes were combined into a four-track master by bouncing up onto the digital dubbing master. According to engineer Brian Reeves, although all mixing at Oasis had been done utilizing a discrete center speaker while monitoring through the Dolby Stereo 4-2-4 matrix, it was decided that a film dubbing studio...
When it comes down to it...
A recording studio's reputation rests on two qualities. The first is its ability to help each artist deliver the performance of a lifetime. The second is the ability to help each producer capture those performances in exact detail, and to shape them at will.
The truly great studios are distinguished by their uncanny knack for making this kind of magic happen time and again. They are staffed by people who have mastered their art and know how to bring out the best. Because of their creativity, each of these top studios is successful in its own unique way.
Which makes it all the more remarkable that out of all the possible choices, the world's leading studios have independently selected a common standard of excellence for their mixing consoles and computers—the Solid State Logic SL 4000 E Series.
If you're searching for a first class studio anywhere in the world, we'd like you to have the latest SSL Network Directory. It lists over 200 data-compatible SSL rooms in 72 cities and 24 countries, complete with booking information. And if you're a studio owner wondering why your phone doesn't ring as often as it used to, we'd like to send you complete details on the SL 4000 E. Just give us a call. We're here to help.
Solid State Logic
Oxford • New York • Los Angeles • Hong Kong
Stonesfield • Oxford, England OX7 2PQ • (089) 389-8282
200 West 57th Street • New York, New York 10019 • (212) 315-1111
6255 Sunset Boulevard • Los Angeles, California 90028 • (213) 463-4444
22 Austin Avenue, Suite 301 • Tsimshatsui, Kowloon, Hong Kong • (3) 721-2162
Quantec Room Simulation
Quantec heralds a new era. A revolution in acoustic versatility. Every sound environment is obtainable at the push of a button. Acoustics are no longer bound by the specific configuration of a room, but can be used to emphasize a scene, enhance or improve a sound or enrich a musical composition.
Programs
Reverberation program
- Room size: 1 m³ to 10⁶ m³ in 7 steps
- Decay time: 0.1 sec to 100 sec (up to 400 sec at 40 Hz)
- Decay time at low frequencies: coefficient of 0.1 to 10 in 8 steps, related to selected decay time
- Decay time at high frequencies: coefficient of 0.1 to 2.5 in 8 steps, related to selected decay time
- Reverberation density: more than 10,000 per sec, depending on room size
- Density of resonance: max. 3 per Hz of bandwidth, depending on room size
- Reverb: pre-reverb delay 1 ms to 200 ms in steps of 1 ms; level –30 dB to 0 dB in steps of 1 dB; 'Off' function
- 1st reflection: 1 ms to 200 ms in steps of 1 ms; level –30 dB to 0 dB; 'Off' function

Enhance program
Simulation of rooms without perceptible reverberation

Freeze program
Special loop program with infinite decay time to add any number of acoustical entries
More than just a Reverberator.
The Marshall modified high performance Quantec Room Simulator is again available.
For additional information circle #223
MARSHALL ELECTRONIC Box 438, Brooklandville, MD 21022 (301) 484-2200
Quantec GmbH, Solnerstr. 7a, D-8000 München 71, Tel. 089 / 7 94 40 41, Telex 5 23 793
DIGITAL FILM SOUND
would provide the most accurate simulation of a movie theater, especially in regard to the level of surround information. As a result, the Dolby Stereo Lt-Rt printing master was made at Todd-AO Stage A, Hollywood.
The use of Sony digital multitracks for the Talking Heads' *Stop Making Sense* concert movie was confined to the post-production stage, with the analog 24-track masters being transferred, with timecode offsets, to a digital dubbing master. Minimal pre-mixing was done by recording engineer Joel Moss at this point, it having been decided to leave balancing of the final mix at Warner Hollywood to Moss and dubbing mixer Steve Maslow. The Dolby Stereo Lt-Rt was bounced up on the digital edit master.
Late 1984 saw the first use of the Mitsubishi X-800 digital 32-track at The Burbank Studios (TBS) for the re-recording of the film *Body Double*. The music score by Pino Donaggio was recorded on an X-800, and mixed down to a four-track 35mm editing copy, without timecode. After he had rehearsed the edits on 35mm mag, music editor Richard Stone noted their location in feet/frames from the Academy Picture Start mark and then transposed them into SMPTE timecode numbers. These in/out points were later used to create a digital edit master by copying the digital four-track mix (which had been bounced up on the original multitrack tapes) to another X-800, which in turn had been pre-stripped with SMPTE timecode for each reel.
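The feet/frames-to-timecode transposition music editor Richard Stone performed is straightforward arithmetic, since 35mm film carries 16 frames per foot and runs at 24 frames per second. A minimal sketch of the conversion (the function name and sample footage are illustrative, not part of the TBS workflow):

```python
# A sketch of the feet/frames-to-SMPTE transposition described above.
# Assumes 35mm film (16 frames per foot) at 24 fps, counted from the
# Academy Picture Start mark at 00:00:00:00.

def feet_frames_to_smpte(feet, frames, fps=24, frames_per_foot=16):
    """Convert a 35mm footage count to a 24 fps SMPTE timecode string."""
    total = feet * frames_per_foot + frames
    ff = total % fps
    seconds = total // fps
    return f"{seconds // 3600:02d}:{seconds // 60 % 60:02d}:{seconds % 60:02d}:{ff:02d}"

# 90 feet of 35mm is 1,440 frames -- exactly one minute at 24 fps:
print(feet_frames_to_smpte(90, 0))  # → 00:01:00:00
```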
This digital edit master reel went to the final mix at TBS Stage 5, where the music master mix was recorded on four-track analog 35mm mag, along with four-track dialog and four-track effects "splits." This single mag-film generation was used because there was insufficient room on the X-800 for all 12 tracks of the final mix. Additionally, it was felt prudent to keep the final mix elements (four-track dialog, music and effects) on the same medium, especially in the event that the final mix had to be cut to conform to picture changes.
The X-800 was used, however, to record the Dolby Lt-Rt printing master, combining the three, four-track splits. This mix was then transferred to a Mitsubishi X-80 digital two-track for convenience in transferring to the optical negative.
The Mitsubishi X-800 also saw extensive use in the music recording of Francis Coppola's *The Cotton Club*. Although prescoring for the musical sequences was made to 24-track analog, all of the music heard in the final film was recorded digitally. Approximately half of the vocals were post-synced, with the other half being transferred from the 24-track analog masters to the digital multitrack tapes.
Recording engineer Tom Jung premixed the music onto 24-track analog in preparation for the final mix at Zoetrope Studios in Napa, CA, thus providing sound designer Richard Beggs with an average of 12 tracks per song. These tracks were also "strung off" onto three-track 35mm mag elements for ease in editing, and "slipping" of sync during the final mix.
Digital Recorders in Production
For the past 15 years, almost all films shot in the United States (and most of the world, for that matter) have used Nagra ¼-inch portable tape recorders made by Kudelski S.A. of Switzerland. Although these trusted machines owe their ubiquity partly to high quality (73 dB signal-to-noise ratio at 15 ips), the unit's high reliability perhaps figures more strongly in their universal appeal. (The standard comment is that "you can throw a Nagra off a cliff" and expect it to work.) The QGX2-60 crystal in a Nagra 4.2, the standard mono recorder, is accurate to within 10 parts per million (0.864 frames per hour). Since the magazines of most 16mm and 35mm cameras hold a maximum of 11 minutes of film, the Nagra's crystal is more than accurate enough for even the longest continuous take.
What, then, does digital recording have to offer the world of production film sound? Despite the Nagra's excellent technical specs, sound editors and re-recording mixers must still deal with tracks that are seriously under-recorded (peaking at -20 dB on a Nagra modulometer), with dialog buried in tape hiss. Or recorded too hot, with excessive distortion. In some instances, this is not quite the fault of the production mixer, since the rushed atmosphere on film sets today often precludes time for a rehearsal. Also, soft-speaking child actors present a big problem, especially when they are talking to adults in a scene. All of the situations listed above make gain riding (and, perhaps more importantly, gain anticipation) an essential skill that can be learned only through much experience.
Another problem that exists in spite of the Nagra is tape print-through. The public (and, many times, even the re-recording mixer) is unaware of the careful frame-by-frame handiwork that dialog editors perform on production tracks. A recent informal poll of experienced dialog editors revealed...
PRACTICAL APPLICATIONS OF F1 — continued...
The EIAJ-Format mixdown tape was made to 3/4-inch U-Matic cassette for convenience in post-production. Again, since there was no timecode used during the shoot, the code recorded on track #2 of the U-Matic EIAJ sound master bore no relationship to the timecode on the videotape transfers. "We used the video coming out of the playback deck as sync for the timecode generator," Nichols recalls.
Nichols had used the standard "linear" analog track to record camera slates with the DigiSlate system. Similar to the bloop light system used by documentary sound/camera teams over the years, at the beginning of a camera start the operator swings over to an assistant holding up a small slate. When a button is pressed, a bulb is lit at the same time that the DigiSlate sends a "bloop" tone to the recorder. The resultant sync point is easily found, and many systems include an LED readout for numbering takes. This slate information — the bloops plus the vocal slates, for example "camera two, roll four" — was transferred to analog track #1 of the 3/4-inch EIAJ master.
At Compact Video in Burbank, the 3/4-inch and one-inch videotapes were striped with sound from the 3/4-inch EIAJ sound master by finding the proper timecode offset, using the bloop light at the beginning of each camera slate as synchronization reference. "It was perfect, and never went out of sync throughout the whole concert," says Nichols. "The digital processor was locked to house sync [59.94] during the transfer. Therefore, the only variable in the system was the frequency of my crystal in my DMP-100 when I recorded the concert, versus the frequency of the crystal in the 16mm cameras."
After the off- and on-line edit sessions, the U-Matic EIAJ master sound videocassettes were locked to the one-inch master (again after finding the necessary timecode offset) resulting, effectively, in first-generation analog audio.
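The offset-finding step described above amounts to subtracting two free-running timecode values read at the same bloop event. A sketch of the arithmetic, assuming 30 fps non-drop code and made-up timecode values (not the actual numbers from the Compact Video session):

```python
# Locking a separately-coded sound master to picture: locate the bloop on
# both tapes, take the difference of the two unrelated timecodes, and apply
# that fixed offset for the rest of the reel.

FPS = 30  # non-drop, for simplicity

def tc_to_frames(tc):
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_tc(n):
    ff = n % FPS
    s = n // FPS
    return f"{s // 3600:02d}:{s // 60 % 60:02d}:{s % 60:02d}:{ff:02d}"

# The same bloop appears at these (unrelated) code values on each machine:
picture_bloop = tc_to_frames("01:00:10:15")
sound_bloop = tc_to_frames("00:03:42:00")

offset = picture_bloop - sound_bloop  # constant for the whole reel

# Any sound-master address now maps to a picture address:
print(frames_to_tc(tc_to_frames("00:05:00:00") + offset))  # → 01:01:28:15
```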
In the fall of 1982, just after the introduction of the Sony PCM-F1, Nichols had recorded 28 concerts for John Denver using a pair of PCM-F1s. Since genlock modifications were not readily available for the F1 at this time, Nichols was unable to route the video out of one F1 into the other.
Nichols' solution was to "take the clock out of one F1 and run it into the second, so they were synchronized. I also had a timecode generator which was looking at the F1-generated video, and using it as sync for the timecode that was going on the analog tracks of both Beta VCRs. Because there's no Beta II machine that you can lock up [for frame-accurate editing], I transferred the tape, using the copy mode, to 3/4-inch F1, regenerating the timecode."
The four tracks recorded during these earlier concerts were apportioned slightly differently than for the Russian shoot: one track for the vocal mike, two tracks for guitar, and one for audience. "I used the delay of the audience that bled into the vocal mike to fake a stereo audience track," Nichols explains.
When using a VHS deck to record with a PCM-F1 (or any PCM processor), Nichols advises that the deck should be modified to disable the dropout compensator, "so that the F1 can do all of its own error correction. Beta machines have a switch on the back marked 'PCM.' Many VHS machines don't have it, so their internal dropout compensator is still active [when using an F1]. So if there is a dropout on the tape, it [the VCR] substitutes a whole line, which makes the errors even worse. In most VHS machines you can wire a jumper to disable the dropout compensator."
**Modifying a PCM-F1 Combination for Field Recording**
A production mixer bringing a PCM-F1 on a movie set in 1985 probably attracts the same attention that was caused 20 years ago by the sight of a Nagra III. One difference,
Roger Nichols with his four-channel recording system, shown in close-up above (L to R): Sony eight-input mixer, Nakamichi DMP-100 processor, Magnavox VHS Hi-Fi VCR and ART 01 digital reverb.
such as the Magna-Tech 93; or, (3), modifying the speed of the replayed ¼-inch tape to follow the crystal reference in the dubber. The latter two methods guarantee a proper transfer, while the first technique is virtually foolproof unless the crystals on both the production and transfer Nagras and the mag recorder are too high or too low in reference frequency. Even when the errors add up, a sync error would probably only be detectable in a long take. This is how films stay in sync.
**Synchronizing an EIAJ-Format Processor**
In the “record” mode, when using a standard half-inch consumer videodeck (such as the Sony SL-2000), a U.S.-bought Sony PCM-F1 feeds NTSC 59.94 Hz composite video into the “video input” of the VCR. During playback, the F1 processor locks to incoming video, which is governed by the 59.94 Hz field rate of the crystal in the video playback deck.
Which brings up the most obvious issue with regard to the sync capabilities of an unmodified EIAJ processor: *It Is Not Referenced to 60 Hz.*
We are again back to the issue of the internal crystals fitted to the processor and VCR: Is 59.94 Hz acceptable? The answer from experienced mixers and engineers was a “definite maybe.”
The “definite” stems from the concept that if the F1 VCR combination records and plays back with reference to the 59.94 Hz, even though the camera utilizes a standard 60 Hz crystal, picture and sound will match because they both obey the same “clock on the wall.” One second is one second.
David Smith, chief engineer of Editel, New York, has made extensive tests on the F1, researching its use in video editing. “When you are recording, the F1 is the master; in playback, the playback deck’s *crystal* is the master. If the playback deck and the F1 are both close to each other, it’s pretty good.
“In television post-production, where we use the F1 frequently, we have to pull it up about one frame every seven-to-10 minutes if it isn’t synchronously locked to the master video generator. If you are going to shoot three- or four-minute scenes, or 30- or 20-second clips, it works fine. If you are going to do the sound for a half-hour scene, however, you would have to accept an external timebase.”
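Smith's figure can be turned into a crystal tolerance with a little arithmetic; a quick sketch (illustrative only, not from the article):

```python
def drift_seconds(ppm_error: float, duration_s: float) -> float:
    """Accumulated sync error between two clocks that differ by ppm_error."""
    return duration_s * ppm_error / 1_000_000

# One NTSC frame is 1/29.97 s. Smith's "one frame every seven-to-10
# minutes" therefore implies a relative crystal error of roughly
# 55-80 parts per million between the F1 and the playback deck.
frame = 1 / 29.97
for minutes in (7, 10):
    ppm = frame / (minutes * 60) * 1_000_000
    print(f"1 frame in {minutes} min -> {ppm:.0f} ppm")
```

Errors of that order are well within the tolerance of ordinary consumer crystals, which is why two nominally identical decks can still drift apart over a half-hour scene.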
Richard Topham, Jr., general manager of Audio Services Corp. of North Hollywood, is not only one of the largest Nagra dealers in the world, but also rents the PCM-F1 for film use, and has sold over 1,000 of the Sony processors. Although all of his rental F1s are modified to accept external sync, he recommends using the F1 on its internal 59.94 Hz crystal and has had good results with it. “I know it’s right and I stand behind it,” he says. “There are a lot of guys who think you *have to* resolve to 60 Hz because you are dealing with film. I do not believe that this is true. It doesn’t matter if it’s 59.94, as long as you *resolve* it to 59.94.”
The most obvious way around the problem would seem to be to print a crystal-generated 60-Hz sinewave on the VCR’s longitudinal analog audio track, and then use this signal to drive the mag recorder during transfer. David Smith has utilized such a technique with a Sony SL-2000 Betamax deck, and reports that it works fine. It should be noted, however, that it is possible that the replayed sync signal will contain a significant wow and flutter content. If the phase-locked loop in the synchronization system tracks accurately during transfer, the mag copy might also be full of wow and flutter, just like the VCR. This problem could be avoided altogether by taking up one of the two PCM audio tracks to record the sync signal, but that is a compromise that many mixers would not like to make.
DIGITAL FILM SOUND

The other two methods of achieving 60-Hz sync on location involve either installing a 60-Hz referenced crystal in the F1, or modifying it to accept an external 60-Hz sync source.
An easy way to obtain a 60-Hz F1 is to purchase a PAL/SECAM version. As it turns out, PAL/SECAM F1s come with the necessary peripheral circuitry, and the ability to alter sampling rate for both crystals is integral to the unit. However, the rates are not switch-selectable, and must be modified to be so. Although the PAL F1 runs at 50 fields, the sampling frequency is 44.1 kHz, which is a multiple of 60 Hz (as opposed to 44.056, which relates to 59.94-field NTSC color video).
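The arithmetic behind these rates can be checked directly, assuming the EIAJ format's 735 samples per video field (a figure implied by, though not stated in, the article):

```python
from fractions import Fraction

# EIAJ processors store a fixed number of audio samples per video
# field; 735 per field is the figure implied by the article's rates.
SAMPLES_PER_FIELD = 735

ntsc_field = Fraction(60000, 1001)            # the 59.94... Hz color field rate
print(float(SAMPLES_PER_FIELD * ntsc_field))  # ~44055.94 Hz, i.e. "44.056 kHz"
print(SAMPLES_PER_FIELD * 60)                 # 44100 Hz at a 60 Hz field rate
```

The same 735-samples-per-field constant thus yields both sampling frequencies, depending only on which field rate clocks the processor.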
Among the companies that offer modifications of Sony digital processors is Audio+Design/Calrec, of Bremerton, WA. Although PAL operation is part of the modification they make to the PCM-701ES, the company's Tom Gandy notes that "if you want an F1 to run at 60 instead of 59.94, you should call up Sony Parts, lay down a few bucks for a crystal and pop it in; it's a five-minute job. You need a service manual to see which circuit to pull, but it's clearly labeled."
The second way to achieve 60-Hz crystal sync is to modify the F1 to accept an external sync input. David Smith says that his machines "take 60-cycle sinusoidal sync and convert it to the vertical drive component of video: squaring it, making it the proper level. The width must be adjusted to 10 lines, which is 640 microseconds. I then do an external video lock to this 'phony' video that has been made from the 60-cycle sinewave. You will then get 60 cycles and a sampling frequency of 44.1 kHz."
When working with F1-encoded material that has been recorded at 60 Hz — either referenced to internal crystal, or an external 60 Hz source — the mixer must be sure that the playback deck used during transfer can accept external sync. David Smith explains: "The F1 is going to spit out [during recording] 60 fields, 30 frames at [a sampling frequency of] 44.1 kHz, while the playback deck will be at 59.94/29.97 at 44.056. You will get a time slide — it will play back slightly slow — and a pitch shift.
"We modify the machine that is playing back the F1 material so that it runs at the exact 60 cycles that it was recorded at. In other words, the playback machine's internal crystals are all at the 59.94 field rate; we have to up it to 60 by feeding in an external source of 60 into the machine. Then you don't have to put 60-cycle sinusoidal on the longitudinal track to get a perfect playback."
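The size of that time slide and pitch shift is easy to quantify; a quick sketch (illustrative, not from the article):

```python
import math

REC, PLAY = 44_100, 44_056      # recorded vs. playback sampling rates (Hz)

slide = REC / PLAY              # material plays back ~0.1% slow
extra_s = 30 * 60 * (slide - 1) # added running time over a 30-minute reel
cents = 1200 * math.log2(PLAY / REC)  # downward pitch shift

print(f"time slide over 30 min: {extra_s:.2f} s, pitch shift: {cents:.2f} cents")
```

A couple of cents of pitch shift is inaudible to most listeners, but nearly two seconds of slippage over a reel would be a fatal sync error, which is why the playback deck must be pulled up to the exact rate the material was recorded at.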
Audio Intervisual Design, Los Angeles, offers two levels of modifications for the F1. Stage one allows the unit to lock to incoming composite video, while not affecting normal crystal NTSC operation. The second step enables the F1 to lock not only to composite video (60 or 59.94 Hz), but also to "just about any sync source you might have. This includes vertical drive, squarewave, sinewave, etc.," according to AID's Mike Novitch. A 60-Hz crystal is also installed, in addition to an RCA jack, which provides a sinewave output from the reference frequency. "Whatever sync source the F1 is referenced to is now going out, so if anyone else needs to see it, you have that option."
Two switches allow conversion choice between internal 59.94 Hz and external sync, which has three options: 60-Hz crystal, external 60 Hz, and external composite video.
Despite his high regard for the quality of the F1 processor, Audio Services' Rich Topham recommends using a Nagra as a backup. "The F1 is too new and mixers have to cover themselves," he offers. "I'd like to have a $7,000 machine sitting next to a consumer video deck and PCM processor."
Some of the problems presented by an EIAJ-Format processor can never be "solved" in its present form: for portability, the unit has to be connected to a half-inch consumer VCR, and neither machine is film-ready. One cannot throw an F1/SL-2000 over the shoulder, for example, and run with it during a shoot without fear of accidentally hitting the wrong switch or the power running down. Also, off-tape monitoring is not possible and winding back to listen to a just-recorded take is not as convenient as it is with a Nagra.
All of which leads everyone to the same question: Who will come out with the first "digital Nagra"? There is no official word from Kudelski — or, for that matter, any other major manufacturer — that they will make this happen in the near future.
The industry (and much money) awaits the first company on the block with a portable, one-piece, "bulletproof," sync-ready digital recorder.
RANDOM-ACCESS SOUND EDITING
The potential sonic benefits of digital sound effects and production recordings are minimized — some might even say eliminated — when the digital tracks are transferred to 35mm mag stripe and then re-recorded four times before reaching the final analog print. An "easy" solution would be to utilize a system based on digital 35mm recorders which, presumably, would feature an analog guide track to allow editing on Moviolas, flatbed editing tables, and sync blocks. While this capability would take care of the quality/generation-loss issue, it would do nothing with regard to what is probably the biggest problem faced by sound editors: tight deadlines, abetted by frequent picture changes.
Digital 35mm tracks would still have to be manually synced, edited, and leadered. Dozens of elements would have to be re-cut every time the director or editor decides to make a change. If two feet of picture are added at the 400-foot, three-frame point from the start mark, every sound unit must reflect the update or sync will be lost 400 feet and four frames into the reel.
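The conforming chore described above is, at bottom, footage arithmetic (35mm film runs 16 frames to the foot). A hypothetical sketch of how a computerized system might shift cues past a picture change:

```python
FRAMES_PER_FOOT = 16  # 35mm film runs 16 frames to the foot

def to_frames(feet: int, frames: int) -> int:
    return feet * FRAMES_PER_FOOT + frames

def to_feet_frames(total: int) -> tuple[int, int]:
    return divmod(total, FRAMES_PER_FOOT)

def conform(cues: list[int], change_at: int, delta: int) -> list[int]:
    """Shift every cue at or past the picture change by delta frames."""
    return [c + delta if c >= change_at else c for c in cues]

change = to_frames(400, 3)                      # 2 ft inserted at 400 ft 3 fr
cues = [to_frames(100, 0), to_frames(450, 8)]   # hypothetical cue positions
for c in conform(cues, change, to_frames(2, 0)):
    print(to_feet_frames(c))                    # (100, 0) and (452, 8)
```

The cue before the change is untouched; everything after it slides down by exactly the inserted footage, which is precisely what today's editors must do by hand on every sound unit.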
Along with this recutting process comes the rewriting of the cue sheets, with perhaps a half-dozen assistant editors working overtime coordinating the whole show. (The record world should count its lucky stars that it has no analogy to this problem.) Multiply this headache by the 12 reels in an average film and you have many talented sound editors wasting much of their time just keeping up with changes.
The recutting problem outlined above is one reason why digital random-access sound editing one day will completely replace the venerable Moviolas that have served sound editors for over 55 years. ("Random-access" is partly a misnomer, because digitized sounds would be pulled from Winchester hard disks or optical disks, etc., and not from random-access memory; it will be many years, if not decades, before RAM becomes inexpensive enough to store the large quantities of digitized sound effects needed during an edit/mix session.)
Not only would sound be able to remain in the digital domain, but the labor- and time-intensive manual steps involved in 35mm feature sound effects editing — auditioning ¼-inch tapes, transferring selected sounds to individual mag rolls, editing sounds to fit the picture, leadering and labeling the sounds onto reels, and writing up cue sheets — would be condensed dramatically.
Random-access editing would allow one person to audition, say, all of the explosions in a library, edit the chosen effects to fit the picture (perhaps adding digital processing), and print out a cue sheet in the time it would normally take to audition the sounds (assuming the library was cross-referenced on a computer!).
With a random-access system (as we can best envision it today), there will be little difference between the hardware used in sound editing and mixing. The chosen explosion, for example, will never actually be copied from the original disk auditioned by the sound editor. Instead, only when that reel is pre-mixed will the sounds be re-recorded, placing them in accordance with the timecode-based Edit Decision List (EDL) created by the sound editor. If the editor will not be mixing in house, the required digital tracks could be transferred to a digital multitrack for playback at the dubbing stage.
Since it is presumed that both picture and sound editing will be connected to a central computer database, such mundane but time-consuming chores as revising cue sheets and conforming the audio to match picture changes will present little problem.
Such freedom is more far-reaching and time-saving than might be imagined. Perhaps it can most clearly be stated that all material recorded for the film — production dialog, sound effects, music, Foley and ADR — along with library sound effects, are always readily accessible in any order at any time to anyone. Thus, if the sound editor wants to modify, for example, a car crash pre-mix at the time of the final mix, this can be accommodated readily, even though the picture cut for that reel has changed through three later versions. Today's technology would require a few hours of editing (conforming the original cut tracks to the current version) and mixing time — while trying to remember all of the EQ settings, etc., used during the pre-mix, which may have taken place weeks ago.
It should be made clear that the problems concerning today's analog 35mm-mag-based technology are also present in any digital recording format that would be stored in any serial medium — digital multitrack tape, digital 35mm mag film, digitally-encoded videotape, etc.
**SoundDroid**
One of the most eagerly anticipated events at the 1985 NAB Convention in Las Vegas was the introduction of the Lucasfilm/Convergence SoundDroid sound editing and mixing system. Since the last *Reel* update on digital sound research at Lucasfilm, project leader Andy Moorer and his staff of six have concentrated on the "front-end user interface." The system is currently using a touch screen laid over a high-resolution graphics VDU, which is a slight change from the trackball configuration used in the companion Lucasfilm Convergence EditDroid picture editing system.
The touchscreen "gives the user a more direct way of interacting with what's going on on the screen," Moorer offers. "Rather than reaching off on the side, you point directly at it. But you can use a trackball or mouse with the system."
The 1024- by 800-pixel bit-map screen has three basic formats. The Electronic Cue Sheet not only lists footages, but also displays action description and dialog. The Meter Screen allows for patching of three-band digital equalization, reverb, and panpotting, while indicating the signal flow. A recent improvement to the Meter Screen is the concept of "pages," allowing the user to immediately jump to eight other channels, as opposed to the scrolling feature used earlier. Moorer notes that "people were asking for faster console response, and we were willing to give up a little flexibility. The paging feature allows us to bring up eight more faders in a flash." Finally, the Library Screen accesses the database of sound effects and production recordings.
The current SoundDroid hardware configuration consists of two Motorola 68010 microprocessors — including one for the control computer — two Mbyte of main memory, the hi-res graphic display, and a fixed Control Data 825-Mbyte Winchester hard disk that holds up to two hours of 16-bit, 48 kHz mono sound. The second 68010 is located in the ASP (Audio Signal Processor), which controls up to 16 DSPs (Digital Signal Processors) and includes the control panel console.
While the basic ASP contains one DSP, allowing 16 channels of digital audio to be processed in real time, an ASP can handle up to 256 channels simultaneously when connected to the full configuration of 16 DSPs. Each DSP, in turn, can control up to 16 825-Mbyte disks, thus allowing instant access to a staggering 512 hours of mono digital audio.
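The capacity claims multiply out as stated; a quick check using the article's figures:

```python
CHANNELS_PER_DSP = 16
DSPS_PER_ASP = 16
DISKS_PER_DSP = 16
HOURS_PER_DISK = 2   # 825 Mbytes of 16-bit, 48 kHz mono, per the article

print(CHANNELS_PER_DSP * DSPS_PER_ASP)                 # 256 simultaneous channels
print(DSPS_PER_ASP * DISKS_PER_DSP * HOURS_PER_DISK)   # 512 hours on-line
```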
Since having access to 256 tracks might be considered overkill for even the most elaborate Lucasfilm soundtrack, one might wonder why such capabilities were built into the system. The answer lies in the concept of a facility that would share the disk drives among many SoundDroid editing and mixing stations.
At the present time, the on-line working store utilizes the 825-Mbyte CDC hard disks, which are cheaper, faster and more reliable than the 300-Mbyte drives "equipped with interchangeable disk packs" that the SoundDroid team has been using since the beginning of the system's development. The fixed disks are "dual-ported," and would be connected both to a SoundDroid station and to a transfer-room robot arm changer/archive machine. Thus, while the mixing theater is using the "A" disks, the "B" disks are assigned to the transfer machine and can be loaded for the next session or the next reel, or used for archiving yesterday's work.
The DSPs and the disk drives that they control can only be used with their assigned SoundDroid; the dubbing stage can't "borrow" a few disk drives during a busy reel unless the lending station is not in use.
Later on, during final mixing, with most of the editing stations not in operation, the SoundDroid mixing station might commandeer half of the DSPs in a facility. The other half might be used by an editing station to pre-mix and to conform tracks to picture changes.
Hand-in-hand with the issue of how many channels can be processed in real-time is the question of how many tracks are on-line, and how the off-line material will be archived — a problem that has to be addressed by all random-access editing/mixing systems.
Moorer feels that the solution to the long-term archival storage problem is 10-inch, write-once optical disks (a product also made by Control Data), which hold an hour of stereo information on each of the platter's two sides. "The price compares favorably with digital master machines. We can buy, off the shelf, a robot arm changer [made by FileNet] that will hold 64 platters, and can insert the disks into any of four players," Moorer says. The changer, which functions in much the same way as a jukebox, can retrieve and load an optical disk, and hit "play" within 15 seconds of receiving a command.
Because the response time of the optical disks is slow (they can record and play back in real time only), and because they are only capable of ste[...]analog facility. (Moorer relates that in the new Lucasfilm Sprocket Systems building on the Skywalker Ranch in Marin County, CA, there will be only two 35mm mag machines!)
It is presumed that such an all-digital facility would also include EditDroid picture editing stations. SoundDroid was designed from the start to interface with the EditDroid "in terms of message formats, communications standards, and the type of computers that we would be using," says Moorer. "EditDroid has the same basic computer system, operating system [Unix], and is programmed in the same language [C] using the same database management system and Ethernet protocol. We can pick up an [Edit Decision] list from EditDroid and edit directly to it: location of scene changes, whether they are cuts or dissolves. And, when picture is re-cut, conforming of all sound material is semi-automatic."
Moorer notes that there are three levels of software in the SoundDroid: "There's the front-end user interface; then there's what we call the 'real-time monitor' — which actually runs in a separate computer — that handles things like disk scheduling and the loading and unloading of microcode. The third piece is console response and automation. Now we are rewriting part of the user interface, and starting to rewrite the automation. We got a lot of good ideas and comments at NAB."
The SoundDroid staff is currently in the process of developing user interfaces in the mixing domain, including patching, mixing and monitoring. "We have set up standard patches for most of the mixing desk functions, like equalization, panpots, reverberation," Moorer explains. "We've also got graphical front ends for the loop and doppler-shift programs."
The SoundDroid sound-editing program, now called SD (originally "FMX" and "EdiSon"), allows eight audio tracks to be manipulated simultaneously. The tracks can not only be viewed on the high-resolution display, but also mixed on the GML/Penny and Giles motorized fader system.
Moorer and his staff have experimented with a set of touch-sensitive, six-inch Farenstat control strips made by Tasa Electronics. "It's entirely capacitive — touch sensitive — so that nothing actually moves. The good point is that there is no nulling problem; you just pick your finger up and put it somewhere else. The bad points are that the resolution is not as fine as you get with regular sliders. Also, to get the same resolution on a Farenstat that you get with a normal fader — from 'silence' to 'wide open' — the distance would have to be longer than six inches, which would mean that you would have to push it down with three strokes [to achieve the same control range]. Also, since there is no mechanical feedback, you are relying entirely on your ears. The guys at Sprocket Systems have built into their fingers what a 3-dB movement feels like; it's absolutely automatic and I'm a little hesitant to break that training."
SoundDroid saw its first use in the processing of certain effects for *Indiana Jones and the Temple of Doom*. *Amadeus* utilized the SoundDroid to help clean up about three minutes where Salieri is talking over quiet music passages. The noise on the production track interfered with the clarity of the music. Moorer remembers that he calibrated the system "using samples of the noises [between words] to let it know what the noise energy of each band was. It would then automatically set the gate thresholds."
Alpha testing of SoundDroid for the sound editing of a feature will begin this Fall. The SoundDroid will be used for all standard sound editing: the picture editor's work track will be loaded into the system after copying onto optical disks. Dialog clean-up and splitting of tracks, along with standard sound effects editing, will be done totally on the digital processor.
Because the current channel capacity of the prototype system is limited to 16, the final mix will be performed on Lucasfilm's Neve 8128 analog console, playing back from analog copies of the cut and edited digital tracks.
SoundDroid will be shipped to beta test sites in December. Mary Sauer, director of marketing for the Droid Works, notes that "there is more interest for the beta units than we can meet." The first production models will be shipped toward the end of the first quarter of 1986, she says.
Both the EditDroid and the SoundDroid are manufactured by The Droid Works, a joint venture of Lucasfilm Ltd. and Convergence Corporation that will handle their marketing, sales, and maintenance. The systems have built-in diagnostic routines that can be run over modems for remote troubleshooting, bringing a standard practice found in the world of mainframe computers to the film sound industry.
**CompuSonics DSP-2000**
CompuSonics was founded by David Schwartz in 1982 with the intent to design and manufacture a floppy disk-based digital record/playback unit for the consumer market. Armed with a prototype of the DSP-1000 consumer unit, $750,000 was raised in a public stock offering, with another $1.5 million added in April 1985.
When development systems for the floppy-disk consumer unit "started looking more and more like very nice professional systems," says John Stautner, CompuSonics vice president, "we decided to market those as well, in addition to developing software for professional hard-disk applications." In May 1984, CompuSonics formally introduced both the consumer DSP-1000 and the professional DSP-2000 Series of random-access digital recorders/editors/mixers.
Vitello & Associates, a film and TV sound editorial company located in North Hollywood, CA, received the first CompuSonics DSP-2002 two-channel editing/recording system in November 1984. The basic $35,000 unit contains a CPU module with Motorola MC68000 microprocessor and TI TMS320 digital signal processors. The 16-bit system operates at a sampling frequency of 50 kHz, other standard rates being optional. Sounds stored on the unit's 140-Mbyte hard disk are accessed via a keyboard and 12-inch monochrome monitor displaying menu-driven software. The system is modular and can be upgraded, with the addition of plug-in modules, from the two-output DSP-2002 to a four-channel DSP-2004 system with trackball mixing console and 19-inch color graphics monitor.
One of the first assignments Paul Vitello gave his new system was the creation of sound effects for 125 episodes of the animated stereo TV series, *Voltron*. Although in early episodes the sound effects were layered in "on the fly," CompuSonics introduced a SMPTE timecode interface to enable sound effects to be triggered at specific timecode locations. Another recent software development allows more than one stereo pair of tracks to be built for a given timecode location, thus creating a "playlist" that allows an editor to simultaneously lay multiple stereo pairs separately onto mag film or a multitrack. This facility is in contrast to the DSP-2002's original design, which allowed only two tracks to be created and replayed at one time. Stautner says that the software is "continually under development, and we add new features and programs to it all the time and send out upgrades."
After the CES Convention in June 1984, the staff at CompuSonics became aware of interest in the system being generated by broadcasters, as well as for film and video post-production. The 3.3-Mbyte "SuperFloppy" disk drives currently used in the DSP-2000 units will soon be expanded to 6.6 Mbytes for the professional broadcast version of the DSP-1000 scheduled for introduction this winter. The 6.6-Mbyte floppy disk is capable of storing and replaying up to one minute of mono sound, which is considered long enough for station IDs and commercials; with CompuSonics' data reduction techniques (described below) entire singles could be stored on a 6.6-Mbyte disk.
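The floppy capacities quoted here can be sanity-checked; a sketch assuming the 16-bit, 50 kHz format of the DSP-2000 Series (the DSP-1000's exact rate is not stated in the article):

```python
RATE, BYTES_PER_SAMPLE = 50_000, 2   # assumed 16-bit/50 kHz, as in the DSP-2000

def floppy_minutes(mb: float, channels: int = 1, reduction: float = 1.0) -> float:
    """Recording time on a floppy of `mb` megabytes at a given
    CSX-style data-reduction factor (1.0 = no reduction)."""
    return mb * 1_000_000 * reduction / (RATE * BYTES_PER_SAMPLE * channels) / 60

print(f"{floppy_minutes(6.6):.1f} min mono, uncompressed")
print(f"{floppy_minutes(6.6, channels=2, reduction=8.0):.1f} min stereo at 8:1")
```

Uncompressed, the 6.6-Mbyte disk indeed holds just over a minute of mono; fitting an entire stereo single requires something close to the maximum 8:1 reduction described below.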
When the consumer DSP-1000 is finally released, it is expected that the capacity of the SuperFloppy will be 25 Mbytes, to accommodate the longer recording times of albums. Both versions of the DSP-1000 can be connected via an RS-232 serial port to a personal computer, allowing editing and even noise clean-up from the keyboard. CompuSonics plans to furnish software written for the IBM PC and its compatibles.
The Music Workspace module utilized in a basic DSP-2002 contains a SuperFloppy 3.3-Mbyte disk drive and one 140-Mbyte hard disk, with room to add three more drives. (The SuperFloppy is primarily used in the DSP-2000 Series for mastering to the DSP-1000 format, and to load in applications software.) Up to seven additional Music Workspace expansion modules can be added, each containing a minimum of two and a maximum of four 140-Mbyte hard disks.
Using, as a rule of thumb, the fact that 10 seconds of mono audio can be stored per megabyte (assuming a 50 kHz sampling rate and normal error-detection overheads), each formatted 140-Mbyte hard disk holds up to 10 minutes of stereo sound, yielding 40 minutes of storage time per Music Workspace module, and up to a maximum of 5.3 hours of stereo sound online simultaneously with a fully loaded system.
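These totals multiply out; a quick check (illustrative), where the article's rounded 10-minutes-per-disk figure evidently allows for formatting overhead:

```python
def mono_seconds(megabytes: float, rate_hz: int = 50_000, sample_bytes: int = 2) -> float:
    """~10 s of mono audio per megabyte at 16-bit/50 kHz."""
    return megabytes * 1_000_000 / (rate_hz * sample_bytes)

raw_stereo_min = mono_seconds(140) / 2 / 60   # ~11.7 min before overhead
module_min = 4 * 10                           # 4 disks x 10 min (article's figure)
system_hours = (1 + 7) * 4 * 10 / 60          # base + 7 expansion modules
print(f"{raw_stereo_min:.1f} min raw/disk, {module_min} min/module, "
      f"{system_hours:.1f} h total")
```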
However, these figures do not assume the use of the patented CompuSonics CSX™ data-reduction scheme, which allows the SuperFloppy to serve as a viable recording medium. CSX analyzes the time, frequency and amplitude content of the incoming audio signal after it has been broken into a maximum of 128 bands. A short-term "model" of the signal is built using the filters, and the parameters are stored by the system. The model adapts itself to the changing audio content, and is updated every 10 milliseconds.
Data reduction can be introduced either during or after a recording, and Stautner notes that "you can reduce up to a factor of two without any loss of data at all; after expansion, all of the bits are still there. If you compared the reconstructed signal with what you originally recorded, there would be no difference. This technique is called the 'lossless' data reduction algorithm, and it is important to note that the amount of reduction is program dependent; in some cases we have seen it go up past a factor of three, and in others it is less than two."
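That program dependence holds for any lossless coder; the sketch below (using zlib as a stand-in, emphatically not the CSX algorithm) compresses one second of a steady tone and one second of wideband noise:

```python
import math, random, struct, zlib

def pcm16(samples):
    """Pack samples as little-endian 16-bit PCM."""
    return struct.pack(f"<{len(samples)}h", *samples)

rate = 50_000
# A periodic tone is highly redundant; wideband noise is nearly incompressible.
tone = pcm16([int(8000 * math.sin(2 * math.pi * 440 * n / rate))
              for n in range(rate)])
random.seed(1)
noise = pcm16([random.randint(-32000, 32000) for _ in range(rate)])

tone_ratio = len(tone) / len(zlib.compress(tone, 9))
noise_ratio = len(noise) / len(zlib.compress(noise, 9))
print(f"tone {tone_ratio:.2f}:1, noise {noise_ratio:.2f}:1")
```

The tone compresses by a large factor while the noise stays at roughly 1:1, mirroring Stautner's observation that the achievable lossless factor swings with the program material.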
Further data reduction — up to a factor of eight — does involve the loss of data bits, although Stautner emphasizes that the amount of data reduction, or its use in the first place, is at the discretion of the engineer at all times. "It's just like a tape-speed knob; if you put it on 15 ips, you get high-quality recording, but you get less time. On 7.5, you get less quality, but more time."
The current method of archival storage in a DSP-2000 system is by means of 500-Mbyte streaming tape drives manufactured by MegaTape and costing $9,500. The unit can run at two speeds, the slower speed providing backup for four channels of 16-bit, 50 kHz audio in real time, or two channels at twice real time. The company encourages the slower speed because "it's easier on the tape."
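The tape bandwidth those specifications imply is a straightforward check (illustrative, ignoring tape formatting overhead):

```python
CHANNELS, RATE, SAMPLE_BYTES = 4, 50_000, 2
stream_bytes_s = CHANNELS * RATE * SAMPLE_BYTES   # bytes/s at real time
tape_minutes = 500 * 1_000_000 / stream_bytes_s / 60
print(f"{stream_bytes_s // 1000} KB/s; one 500-Mbyte tape ~ "
      f"{tape_minutes:.0f} min of 4-channel audio")
```

At 400 KB/s, a single 500-Mbyte cartridge holds on the order of 20 minutes of four-channel material, so backing up a fully loaded system is a multi-tape affair.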
By October of this year CompuSonics says that 10 DSP-2000 systems will have been delivered, all but three of which are the two-channel DSP-2002 editor/recorder. The firm reports receiving several orders for larger consoles: one each for the 4×4 DSP-2004, 8×4 2008 and for the 16×4 2016. Stautner reports that the company is "concentrating on the two-channel systems, because they have a shorter delivery/lead time."
The DSP-2004 trackball mixing array is equipped with five rows and six columns of trackball controls, all capable of being assigned in various configurations according to the application program. The color screen translates trackball movements into the plan view of a familiar-looking "console," with slider faders, pan-pots, EQ knobs, VU meters, etc.
In larger models (DSP-2008, etc.), notes Stautner, the user can "scroll" the control panel on what he calls a "virtual console": "In other words, if you have only four channels on the screen, you can 'move' down the console according to a legend at the bottom of the screen. Instead of getting up and reaching over, I 'roll' over to it."
The trackball mixers range from the 4×4 DSP-2004 at $50,000, to the 64×16 2064, which costs $400,000. All units come with one hard disk, and each 560-Mbyte Music Expansion Module costs $25,000. The software included in all DSP-2000 Series units comes with one-year free upgrades.
**Wordfit ADR System**
Production dialog for motion pictures produced in the U.S. is replaced almost exclusively using the ADR (Automated Dialog Replacement) system. The actor watches a projected image of the scene and hears in a headset the "guide track" of the original production recording that will be replaced, with three "pips" counting down to the beginning of the line. In some facilities, video is used for playback and audio is recorded on a multitrack; most studios, however, still use 35mm picture playback and three- or four-track 35mm mag recorders. In both cases a sync-pulsed ¼-inch tape is always running as backup.
Regardless of what medium is used to record the looped lines during an ADR session, copies of individual tracks are "strung off" onto 35mm stripe for fine-tuning of lip sync. It can be safely stated that some amount of "massaging" of sync — a frame here, a sprocket there — is always needed, the precise amount being dependent upon the skill of the actor and the amount of time the ADR editors have to prepare the tracks for dubbing. Thus, a small army of ADR editors will often be required during post production of a film that has a large percentage of the production track replaced in ADR.
Probably the first application of digital random-access techniques for the ADR process is the Wordfit system, designed by Dr. Jeffrey Bloom and Nick Rose, of Digital Audio Research Ltd. of London. (The system's co-inventor, Garth Marshall, is no longer with the company.) In a nutshell, the Wordfit system compares the guide track to the looped line being recorded, and tries to edit the replaced dialog to match the original recording. The idea is that after the ADR session, little correction will have to be made in terms of sync. In addition, the onus of achieving sync is partly removed from the actor, hopefully allowing him or her to concentrate on performance rather than timing.
When the actor says a line during an ADR session, it is recorded digitally (16-bit, 32 kHz sampling frequency) onto a Winchester disk, in addition to the studio's standard multitrack or 35mm mag recorder. As the actor is speaking, the Wordfit processor performs spectral analysis both on the guide track and on the track being recorded. Analysis — and recording of the looped track onto the Winchester disk — begins when the mag record mode is activated.
Results of the spectral analysis are compared in one of the system's two computers, which tries thousands of different trial alignments of the "local spectra." The program will try to align one pattern to another to a tolerance of ±10 milliseconds, or the rough equivalent of a sprocket hole in 35mm mag (96 sprockets/second). The length of the track is reduced or extended by making an average of 10 edits per second on the "unfitted" (normal) recording, which is saved on the hard disk. No pitch-shifting of elements is employed; rather than storing a processed copy, the Wordfit system stores the list of edit points necessary to create the matched version. (The "fitted" version is not recorded on the hard disk, only the calculated edit points.)
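Wordfit's actual alignment algorithm is proprietary, but the behavior described in the text (trying many trial alignments of local spectra and deriving an edit list) is in the family of dynamic time warping. A minimal sketch, with made-up one-number-per-frame "spectra" standing in for the real analysis:

```python
# Hypothetical sketch of the time-alignment idea behind Wordfit:
# dynamic-programming alignment of short-time "local spectra",
# yielding a warp path from which edit points can be derived.
# The frame features and cost function here are stand-ins;
# Wordfit's real analysis is not public.

def align_frames(guide, loop):
    """Return a minimum-cost alignment path between two feature
    sequences (one feature value per ~10 ms frame)."""
    n, m = len(guide), len(loop)
    INF = float("inf")
    # cost[i][j] = cheapest alignment of guide[:i+1] with loop[:j+1]
    cost = [[INF] * m for _ in range(n)]
    cost[0][0] = abs(guide[0] - loop[0])
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            best = min(
                cost[i - 1][j] if i else INF,            # guide advances
                cost[i][j - 1] if j else INF,            # loop advances
                cost[i - 1][j - 1] if i and j else INF,  # both advance
            )
            cost[i][j] = abs(guide[i] - loop[j]) + best
    # Backtrack to recover the warp path (pairs of frame indices).
    path, i, j = [(n - 1, m - 1)], n - 1, m - 1
    while i or j:
        moves = []
        if i and j:
            moves.append((cost[i - 1][j - 1], i - 1, j - 1))
        if i:
            moves.append((cost[i - 1][j], i - 1, j))
        if j:
            moves.append((cost[i][j - 1], i, j - 1))
        _, i, j = min(moves)
        path.append((i, j))
    return path[::-1]

# Frames where the looped line lags the guide by two frames (~20 ms):
guide = [0, 1, 5, 9, 5, 1, 0, 0, 0]
loop  = [0, 0, 0, 1, 5, 9, 5, 1, 0]
path = align_frames(guide, loop)
# Edit points fall where the path moves horizontally or vertically,
# i.e. where audio must be repeated or dropped to restore sync.
```

With ~100 frames per second of audio, picking roughly 10 such edit points per second matches the editing rate the article quotes.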
When the spectral analysis and editing are completed — after the actor finishes delivering the line but usually before the picture and track have been rewound for playback — the ADR editor has the choice of listening to the unfitted track (normal), or to the fitted version, which the system constructs from the edit list. While the fitted version is auditioned, the editor has the option of recording it on the mag recorder. Incidentally, the fitted version cannot be recorded on mag at the same time as the actor is speaking because the processing must allow for instances in which the actor may be late in reading a line. (It cannot process what it doesn't have!)
The 168-Mbyte Winchester hard disks available in the just-released production models of the Wordfit system hold the software plus approximately 35 minutes of 32 kHz-sampled mono audio. Up to nine takes of a line can be held on the disk, with the control screen indicating if the fitted or unfitted version has been recorded on the magnetic recorder, and what takes have been recorded to hard disk and mag.
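The quoted capacity is easy to sanity-check. Assuming the vendor's "Mbyte" means 10^6 bytes (an assumption on my part), 35 minutes of 16-bit, 32 kHz mono audio accounts for most of the 168-Mbyte disk, leaving a plausible remainder for the software:

```python
# Back-of-envelope check of the quoted capacity: 16-bit mono
# audio sampled at 32 kHz on a 168-Mbyte Winchester disk.
bytes_per_second = 32_000 * 2            # 64 kB/s of audio data
disk_bytes       = 168 * 1_000_000       # "Mbyte" taken as 10^6 (assumed)
audio_seconds    = 35 * 60               # the quoted 35 minutes
audio_bytes      = audio_seconds * bytes_per_second

print(audio_bytes / 1e6)                 # 134.4 Mbyte for audio
print((disk_bytes - audio_bytes) / 1e6)  # 33.6 Mbyte left for software
```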
The Wordfit system can analyze guide tracks that contain a large amount of background noise, which is an important consideration since background noise is often the main reason for a scene being looped in the first place.
As the saying goes, there is no free lunch, and Wordfit comes with certain limitations and caveats. To cope with the range of conditions encountered in looping, the system offers four types of processing "warps"; the appropriate setting depends upon both the background noise and the number of people speaking.
Warp 1 is intended for guide tracks containing little background noise and little overlapping dialog, and is the most flexible: it can correct lines that are between -0.5 and +1.0 seconds out of sync. Warp 2 is less flexible, and places slightly more responsibility on the actor for sync correction. Warps 3 and 4 are more "stiff," and therefore will find more use in high-noise situations or with multiple speakers.
Since, according to Wordfit literature, Warp 4 "should be reserved for situations in which it is not desirable to alter the timing of the replacement dialog too much," this configuration should prove useful for revoicing a part with a different actor or, as is the case during foreign dubbing, with a different language. (Although this technique has only been tried experimentally, Bloom notes that "in practice it has been found that Warp 4 is possibly the most consistently useful because it copes with a wide degree of acoustic conditions in the guide track. In practice, Warp 4 has been left on for the majority of the looping sessions.")
Bloom notes that "there shouldn't have to be a selection of the warp modes, but we have found that with the state of our understanding of how to do time alignment, it is more useful to let the operator do an initial assessment of the guide track for background level and number of people speaking."
Among the situations where Digital Audio Research advises caution is when a deliberate pace change in reading might be desired. Since the processor "looks" to the guide track for timing, the system should not be used in these instances.
Wordfit has been in routine operation in London since early 1984, primarily at Mayflower Film Recording, Ltd., where it was used in the looping of such films as *Dune*, *A Passage to India*, and *The Killing Fields*. Its first use in the U.S. was in March/April 1985 at Directors Sound, Burbank, for *The Goonies*. Universal Studios took delivery of the first production unit in September of this year. Cost of the installed system is currently $91,000.
Bloom reports that Digital Audio Research is currently developing a "low-cost, interactive digital sound editing system, incorporating Wordfit's editing technology." Demonstration models should be available by the first quarter of 1986.
Other Random-Access Systems
- The forerunner of all random-access sound editing tools is the ACCESS (Automated Computer Controlled Editing Sound System) system, developed for Neiman-Tillar Associates (now TAV Sound), Los Angeles, in January 1977. The hardware was designed by Bill Dietrick of Mini-Micro Systems, Anaheim, CA, with software written by Jim McCann. In 1981 a second system was installed at the Sound Shop in New York, and both systems have utilized 12-bit resolution and 50 kHz sampling frequency.
New updates to the system include 16-bit resolution and stereo capability, in addition to storage capability on optical disk. Current systems utilize disk drives with removable 200-Mbyte disk packs. Dietrick estimates that an ACCESS system sold today incorporating the above updates would cost approximately $350,000. [See the October 1982 issue of *R-e/p* for more details on the ACCESS system — Editor.]
- Although Soundstream, Inc. is no longer active (in early 1985 it was purchased, along with its parent company, Digital Recording Corp., by a Canadian corporation), the firm's digital recorders and random-access Digital Editing System are currently alive and well at RCA Studios in New York.
The Soundstream Instant Access system can edit up to eight audio tracks stored on a 300-Mbyte disk pack. Direct digital transfer is available for Sony PCM-1610, JVC VP-900, Mitsubishi X-80 two-track, 3M M81 DMS four-track and, of course, the Soundstream two-, four-, and eight-track recorder. Only three of the Soundstream Instant Access systems were built; aside from the RCA system, one currently resides with the Canadian company that bought Soundstream, and the third at Sonapress in Germany.
During the company's final days in Los Angeles at Paramount Studios, Soundstream completed development of software to provide SMPTE timecode lockup. Although the software has never been used in production, there is the possibility that the capability may be revived at RCA Studios. "The editing system, because it is limited to eight tracks, is most often used for classical music," says Tom MacCluskey, RCA staff engineer who was previously Soundstream's general manager; he has worked with the editor since its inception. "A number of classical producers have thought about recording eight-track on a digital multitrack (Sony PCM-3324 or Mitsubishi X-800), making three or four passes on the tape. We would transfer directly off the multitrack digital tape, eight tracks at a time, into the computer and then edit it."
The edited tracks can either be transferred back to the digital multitrack, or mixed directly to two-track from the Soundstream editor at RCA.
- The SYSTEX System currently being marketed by Gotham Audio Corporation, New York, utilizes 330-Mbyte disk drives and a Motorola 68000 microprocessor. The system, which is based on the Digiphon 450 multitrack, random-access recorder developed by EMT-Franz of West Germany, utilizes 16-bit linear resolution and a 48-kHz sampling frequency, and can store up to 60 minutes of mono or 30 minutes of stereo per disk pack.
In its current configuration, Gotham is aiming SYSTEX at the broadcast market, as it is with the EMT-Franz Model 448 Digital Storage System, which stores effects on a 5½-inch floppy disk cartridge holding up to 50 seconds of mono audio.
- The Advanced Music Systems AudioFile digital storage system (not to be confused with the proposed Soundstream AudioFile playback card) utilizes a Winchester disk drive that holds up to an hour of 16-bit/48 kHz audio. The system is currently configured for eight outputs, and, with a built-in SMPTE timecode interlock capability, can be used for such film sound tasks as ADR/Foley recording and sound-effects creation. The unit is expected to go on sale this Fall.
---
**TIMECODE APPLICATIONS**
Larry Blake is currently preparing, for possible inclusion in the February 1986 issue of *R-e/p*, a comprehensive overview of the use of SMPTE timecode in audio/video/film production. He would like to hear from anyone who could help him shed light on this important and often misunderstood topic. Contact him c/o the *R-e/p* office, whose address is included on the Contents page —Editor.
April 3rd, 1985, marked something of an engineering "Superevent" at the King Street Studios, San Francisco, as Bogue-Reber Productions, in association with One Pass Productions, taped an audio/video concert special, *Mister Drums: Buddy Rich and His Band Live on King Street*. The soundtrack for the special, featuring the SQ/Tate surround system, was recorded and mixed live to six different tape transports, including two digital formats and four of the best recorders that analog technology currently offers. More than just an impressive array of technical hardware, the *Mister Drums* special was an audio event with a level of direct manufacturers' support and participation rarely seen outside of an AES Seminar; "state-of-the-art" high-tech was definitely the standard for this audio/video production.
Few productions are planned and presold as completely and diversely as the *Mister Drums* special: the concert video is licensed to Pioneer Artists for LaserDisc release (with digital audio via the new Pioneer 900 series players, and other units to come); Sony Corporation for worldwide Super-Beta Hi-Fi, VHS Hi-Fi videocassettes, and Video-8 with digital sound; and to both the Bravo Entertainment and Discovery cable networks as a pay-TV special. An agreement also has been reached with the People's Republic of China National TV network for airing the special later this year — a first for an American Jazz concert video. In addition, Mobile Fidelity's new label, Cafe Records, will be issuing two separate releases from the concert audio in both digital Compact Disc and analog vinyl versions.
The special began life as the brainchild of producer Gary Reber, whose credits include SQ/Tate soundtrack production of David Bowie's *Serious Moonlight* concert video, and the *Dolly Parton in London* HBO special. Reber, a jazz enthusiast of the highest order, approached Steve Michelson and Scott Ross at One Pass Productions, one of San Francisco's leading
video operations, to produce the Buddy Rich special on the facility's newly completed 30,000-square-foot King Street Soundstage Complex. The plan was to make *Mister Drums* the first special in what will hopefully be a long series of state-of-the-art jazz and classic pop concert videos.
It's hard to imagine a better or more prestigious choice for the first *Live on King Street* production than a jazz artist of Buddy Rich's caliber. Even with 40 years' experience as a big band leader, Rich fronts a crew as fresh and exciting as any going in the jazz world today. A 15-piece outfit — four trumpets, five saxes, three trombones, electric bass, acoustic piano and drums — the band features Steve Marcus on tenor and soprano sax. For the *King Street* special, the band played through two, 50-minute sets — the "Channel One" and "West Side Story" sets — before a live studio audience. Each of the sets was centered around its respective title tune, with some great solos by Steve Marcus and, of course, Buddy Rich himself. The sheer energy and excitement generated by the horn section alone seemed to project an upbeat feel to the entire production crew.
**Production Philosophy**
Planning for the special began nearly a year ago and, from the start, Ken Rasek had been selected as the project's mixing engineer. Based out of Chicago, Rasek has an extensive background in mixing both electric and acoustic music live to stereo. He also understands the nuances, as well as the potential pitfalls, of mixing for a stereo surround-sound format such as SQ/Tate. Rasek had worked with Gary Reber in 1982 on the first live SQ/Tate broadcast featuring Devo at the Beverly Theatre, Los Angeles. The *Mister Drums* project proved to be the perfect opportunity for the pair of them to continue their work together.
Instead of bringing in an existing mobile audio truck, Reber and Rasek decided to go with their idea of creating a "living-room-type" control room. When putting together a mixing studio from the ground up, there is a greater freedom to carefully pick and choose each piece of equipment in the recording chain. Reber began approaching the manufacturers he considered to be the best for his needs, with the idea of being involved in a production that is dedicated to showcasing the finest hardware that modern audio has to offer.
Judging from the turnout at the shoot itself, the equipment manufacturers liked this idea. In the tape machine field alone, Sony, JVC, Studer, Nagra and Ultramaster were all represented; Barcus-Berry, Monster Cable, Lexicon, Yamaha, JBL, Electron Kinetics, Lenco and Stax Professional also participated. Sound Genesis of San Francisco, and Leo's Professional Audio of Oakland, California, helped Bogue-Reber Productions with the sheer logistics of coordinating the various "pieces" of the control-room puzzle. A real sense of camaraderie existed among all of the participants — it was great to see the manufacturers themselves supporting the project, and in the recording of this state-of-the-art soundtrack.
Gary Reber is anxious to discuss the "philosophy" behind the choice of audio equipment. His philosophy
---
A sound control room was custom built at the King Street studios for the Buddy Rich shoot. From left-to-right: Nick Latimer and Joe Van Whitsen of Discovery Network, and John Caden, head of CMS Digital.
SQ/TATE MATRIX ENCODED SURROUND-SOUND SYSTEM
We asked producer Gary Reber the obvious question: Why such a complete commitment to working in the SQ/Tate format?
"Because the technology works," he replied. "The SQ/Tate technology is the one system that truly delivers the promise of a live playback experience. It has the natural depth and dimension that our ears hear. Conventional stereo just can't recreate the same sense of 3-D."
What makes the SQ Tate system different from the various "quad" formats of the mid-Seventies?
"Several key factors. The various competing 'quad' formats were rushed to the market before the encode/decode process was completely worked out. Front-to-back separation was as low as 3 dB in some cases, while the Tate system decoding of SQ provides better than 35 dB of interchannel separation.
"Also, stereo/mono compatibility is not a problem with the Tate system: the encoded signal behaves exactly like mono or stereo when heard on home systems, yet the same signal becomes surround-enhanced when a Tate system decoder is added in the playback chain of a four channel audio system. There is no 'planned obsolescence' inherent in the system's consumer applications.
"I've been producing exclusively in the SQ/Tate format since 1980, and co-produced the very first live SQ Tate radio broadcast in 1982. I had been eagerly following the 'quad' developments in the early Seventies, while still working as an Economic Development Planning Consultant and teaching in the graduate program at UC Berkeley. I was glad to see 'quad' sound expand and flourish with the success of surround sound concepts in the film industry during the late Seventies. By the time I began producing I knew I wanted to record in surround stereo. I also knew that SQ Tate was the surround format."
What was your SQ Concept for the Buddy Rich Project?
"We went for a natural recreation of the live experience. We kept the normal spatial relationships of the sound on the King Street stage: the band up-front left to right, the environment of the 'club' wrapped in the rear channels, and the audience mixed in perspective.
"Like all production decisions, use of the system has to be appropriate to the spirit of the music being produced. SQ Tate allows you to position any given signal in a 360-degree spherical soundfield. The Buddy Rich soundtrack is all depth, dimension and separation, but there are no solos being panned around the room. Now, if I was doing a Pink Floyd recording, a much more aggressive use of the system would be appropriate!
"One of the potential pitfalls in mixing to the SQ/Tate format is that you can't pan anything in the exact center-back position, or you will lose it completely due to phase cancellation when you go to mono.
"Regarding additional equipment costs for an SQ Tate production: the only special equipment you need is the actual position encoder console, which is currently a rental item. The professional model will be available for sale to the industry in 1986. Additionally, you would need speakers and amplification for rear-channel monitoring. Storage of the two-channel encoded program can be either digital or analog.
"If you have the kind of project which involves a multitrack mixdown, of course it would take additional studio time to make placement choices, particularly with the rear channels. Obviously you now have a whole new world of dimensional choices to make that just don't exist with conventional stereo or mono."
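Reber's caveat about the center-back position can be illustrated with a simplified phasor model. This is an illustration of the principle only, not the actual CBS SQ coefficients: a source encoded into the two transmission channels with opposite 90-degree phase shifts is intact in stereo but vanishes when the channels are summed to mono.

```python
import math

# Simplified phasor illustration (not the exact CBS SQ matrix):
# a center-back source reaches the two transmission channels with
# +90 and -90 degree phase shifts, so the mono fold-down cancels.
N = 1000
w = 2 * math.pi * 5 / N                                  # 5-cycle test tone

lt = [math.sin(w * n + math.pi / 2) for n in range(N)]   # source shifted +90
rt = [math.sin(w * n - math.pi / 2) for n in range(N)]   # source shifted -90

mono = [l + r for l, r in zip(lt, rt)]
stereo_power = sum(l * l + r * r for l, r in zip(lt, rt)) / N
mono_power   = sum(m * m for m in mono) / N

print(stereo_power)   # signal fully present in stereo
print(mono_power)     # cancels completely in mono
```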
Seen here to the right of the main Yamaha mixing console is the rack-mounted, 16-channel SQ Tate Position Encoder Console, used to pan sounds into the four-channel master, which is then used to produce the two-track surround-sound mix.
BUDDY RICH LIVE ON KING STREET
stems from a total commitment to digital audio and, even more importantly, from Reber's long-standing involvement with the SQ/Tate stereo surround-sound system. The Tate system is a 4-2-4 matrix licensed through CBS, Inc., as the companion to the company's original SQ encoding design. (As discussed in an accompanying sidebar, the SQ matrix is an encoding/decoding process that allows pan positioning within a spherical, 360-degree symmetrical soundfield. The four-channel mix of left-front, right-front, left-rear and right-rear is matrix-encoded to produce a pair of "transmission channels" for album or videocassette release, and then decoded in the home to provide four surround-sound replay signals.)
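The limited separation of a passive 4-2-4 matrix, and the roughly 3 dB front-to-back figure quoted in the sidebar for early "quad" decoders, can be seen with a toy amplitude-only matrix. This is a generic illustration, not the real SQ coefficients (which use 90-degree phase shifts); the Tate decoder adds active, signal-dependent enhancement on top of the matrix to reach its quoted 35 dB.

```python
import math

# A deliberately generic 4-2-4 amplitude matrix to show why a
# passive decode has limited separation: folding four channels
# into two and back leaks each source into a neighbouring output.
k = math.sqrt(0.5)

# Encode rows: contribution of (LF, RF, LB, RB) to LT and RT.
ENC = [[1, 0, k, 0],
       [0, 1, 0, k]]
# Naive passive decode: transpose of the encode matrix.
DEC = [[ENC[r][c] for r in range(2)] for c in range(4)]

def encode(quad):                        # 4 channels -> 2
    return [sum(e * s for e, s in zip(row, quad)) for row in ENC]

def decode(lt_rt):                       # 2 channels -> 4
    return [sum(d * t for d, t in zip(row, lt_rt)) for row in DEC]

out = decode(encode([1.0, 0.0, 0.0, 0.0]))   # a source only in LF
# LF is recovered strongest, but LB receives a scaled copy: the
# front-to-back separation of this naive matrix is only ~3 dB.
sep_db = 20 * math.log10(out[0] / out[2])
print(round(sep_db, 1))
```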
The objective of Reber's approach, as he states it, "is to put the listener there. I want to give the consumer a true live concert with all of the depth and dimension that goes with the live experience. Our equipment choices — particularly with microphones — were all made while keeping in mind the need for extreme accuracy and transparency in order to maximize the benefits of recording digitally and in the Tate surround-sound format."
Microphone Selection
From the very start, it became readily apparent that Reber's approach to
AUDIO PRODUCTION EQUIPMENT LIST
Yamaha 2000 Console
Electron Kinetics Eagle 7A power amps
JBL 4435 studio monitors
STAX SR Lambda Pro Electrostatic Earspeakers
Lexicon Model 200 digital reverb
Barcus-Berry BBE/202 Signal Processors
TATE/SQ Position Encoder Console
Fosgate Research SQ/Tate Model 101A decoder
Sony PCM-1610 digital processor
JVC DAS-900 digital processor
Sony BVU-800 U-Matic video recorders
Studer A810 two-track recorder
Nagra IV-S TC two-track recorder
Nagra T-AUDIO TC two-track recorder
Ultramaster half-inch two-track recorder
Sony 701-ES digital processor
Nakamichi DMP-100 digital processors
Sony BETA HI-FI VCR
JVC VHS HI-FI
Crown PZM-31S microphone elements
Crown PZM-6S microphone elements
Countryman Isomax II condenser mikes
AKG The Tube microphones
Monster Cable Prolink audio cables
Monster Cable Hi Performance microphone cables
Nakamichi The Dragon cassette recorder
Monster Cable Soundex acoustical panels
the project, particularly regarding the choice and application of microphones, was far from conventional. R-e/p was invited to the King Street soundstage the day before the shoot, while the crew readied the various technical elements. In the center of the floor was a large, geometrically-shaped plexiglass construction. A closer look revealed a specially designed housing for a pair of Crown PZM-31S microphone elements — definitely a contender for the Guinness Book as the world's largest stereo microphone! This plexiglass array, along with several others, was designed by Reber and Vince Motel, the project's PZM applications engineer. Ken Warrenbrock, the acknowledged "father of the Pressure Zone Microphone™" concept, also lent his support and advice to the array designs.
The main stereo PZM array consists of two, three-sided "V-shaped" units braced together. All of the plexiglass was ¼-inch thick, and treated with a silicon compound to make the surface as hard and as reflective as possible. The four largest sheets were four-foot-square sections mounted at right angles, and had triangular pieces of plexiglass mounted on the top only. The PZM elements were mounted in the upper corners, where the bottom of the top section meets the right angle. The completed array measured 12 feet across when fully assembled. Once positioned, the entire unit was flown approximately 10 feet above the floor and centered in front of the band's horn section.
In addition to the stereo array, four single-sheet PZM arrays were positioned above the floor, and at the four outside corners of the audience area. These single plexiglass sheets are also four-foot square with PZM-31S elements mounted in the center of each sheet. The outputs from these additional microphones were assigned via the SQ/Tate matrix to left-side, right-side, left-back and right-back, respectively.
Miking for the drum kit was also handled by a combination of plexiglass and PZM elements. Opting for a three-mike setup, Reber and Motel this time used a smaller V-shaped array placed directly in front of the kick drum. Mounted within the V-angle was another Crown PZM-31S element. The overall drum sound was handled by a pair of parabolic dishes each holding a PZM-6S element, and positioned in a standard drum-overhead configuration using two AKG baby-boom stands.
The Yamaha C7 acoustic grand piano was miked using a pair of PZM-31S units mounted at the high and low position of the piano's harp, while the electric bass was taken direct. Four AKG The Tube microphones were placed within the band itself, in order to capture an intimate feel for
the various trumpet, trombone and sax soloists. Countryman Isomax II condenser microphones were placed on the flutes used by three of the sax players for occasional parts.
**Mixing Environment**
One Pass Video's 45-foot Mobile One served as command center for the seven-camera Ikegami shoot, while the audio crew set themselves up in a control room adjacent to the soundstage itself. The control room was acoustically transformed with Monster Cable Soundex Acoustical Panels to approximate the size and sound quality of an average consumer's living room. Next to the control room was a production "theater" setup that housed the director's on-line video and a full, decoded SQ/Tate surround playback system. (The "theater" presentation was arranged to help keep the control room proper free of extra bodies as much as anything else.)
The main mixing console was a Yamaha Model PM-2000 with 24 inputs, eight subgroups, and four mains. Linked to the Yamaha was a special 16-channel SQ/Tate system "position encoder" console, which is necessary to create the final "live-to-two-channel" encoded surround-sound mix. The encoder console is equipped with 360-degree, fixed-position panpots that allow the engineer to assign individual input channels within the surround-sound environment. (The SQ/Tate two-channel encoded mix is completely compatible with conventional stereo and mono playback systems.) For the
The main stereo array suspended in front of the Buddy Rich Band was fabricated from four, four-foot by four-foot sheets of plexiglass, with two Crown PZM-31 elements mounted in the top apex corners.
*Mister Drums* mix, only the outputs from the two single-sheet PZM arrays located at the back of the audience area, plus a slight amount of digital reverb from a Lexicon Model 200, was encoded to the rear channels of the surround matrix.
Due to the nature of mixing live-to-tape, Ken Rasek was forced to work quickly and efficiently. One of the most interesting features of Rasek's mixing approach was that he used little or no equalization or signal processing. In addition to the Lexicon Model 200 digital reverb, the only other piece of processing gear used on the session was a Barcus-Berry Model 202 connected to those input channels covering the Buddy Rich rhythm section. Applied just ahead of the SQ/Tate encoder, the Model 202 was used to enhance signal transparency and transient definition via patented circuitry which, essentially, provides automatic EQ and phase correction.
The SQ/Tate mix was monitored through a Tate Professional Decoder that then fed the left-front, right-front, left-back and right-back signals through Electron Kinetics Eagle 7A stereo amplifiers powering four JBL 4435 monitor loudspeakers. Monster Cable covered the entire audio wiring requirements for the soundtrack production. It's interesting to note that Ken Rasek himself used Stax Earspeakers exclusively to monitor the mix in progress.
Reber selected a Lenco Model 600 distribution amplifier system to handle feeding of the final mix to the six main tape machines, plus the myriad of Nakamichi digital processors and analog cassette recorders used for the producer's reference copies. CMS Digital, of Altadena, Calif., provided Sony PCM-1610 and JVC DAS-900
digital audio processors coupled to a pair of Sony BVU-800 U-Matic video recorders. On the analog side, Studer provided an A810, while Nagra supplied both a IV-STC and T-Audio TC model two-tracks. Of particular interest was the Ultramaster two-track analog machine, a unique hybrid that features half-inch, 30 ips recording only. John Curl and Dave Wilson designed the machine's custom electronics, which they have mounted on a Studer A80 chassis. 3M Scotch tape was used for all systems: 250 for analog, and Color Plus Super High Grade cassettes for the various video-based digital formats.
The Show
It might be easy to get the wrong impression about production for the *Mister Drums* special, which was much more than a technical exercise. When it came time for taping, it was all Buddy Rich's show. He is big band, swing and be-bop all at once, and that's the impression one gets before he even plays a note. The man Sinatra calls "the world's greatest drummer" has often been accused of being as conceited as he is talented. If so, that conceit was only represented by Buddy's complete commitment to the quality of the *Mister Drums* production. He was patient and relaxed throughout any technical detours or delays, seemingly as much a fan of the crew as they were of his music. More than anything, there was an underlying feeling of fun when the band set itself up on stage. Buddy displayed a keen sense of humor, directing a steady stream of one-liners around the bandstand and throughout the stage.
The sound stage had been set up in an intimate Jazz-club fashion, with 20 or more round tables seating a well-dressed studio audience. Bright neon outlined the roomy bandstand with a "BR" neon logo positioned squarely above the drum riser, which consisted of translucent glass bricks, a clear plexiglass top and white lights underneath — a design that, at times, gave the illusion of Buddy almost floating on a bed of light.
Throughout the night's taping Buddy Rich propelled his band through an exciting ensemble sound, and punctuated his way around some very hot horn soloing. But, of course, the real treat was when Buddy himself would take a solo. By the end of the first set Buddy sweated his way into a sustained snare roll that should definitely keep his legend alive for another 40 years.
During a subsequent visit to One Pass Video's post-production facilities while editing was under way, the audio quality of the session sounded superb. Even in the standard stereo mode, the sense of dimension provided by the SQ/Tate System, to this writer at least, was impressive. Channel separation is extremely wide, which provides the listener with the clarity to enjoy the production crew's excellent use of ambient miking. You cannot help but admire a producer like Gary Reber, who has a complete vision of what he wants, and is willing to go through the sometimes difficult process of trial and error. He fully embraces the technology available today, and anxiously awaits the best new technical developments of the future. Reber is not interested in breaking any rules; he just wants to set new standards.
NOW ALL THE WORLD'S A SOUND STAGE.
No matter what you're recording in the field, from Shakespeare-in-the-Park to "Dancing in the Dark," you'll find a Sony portable mixer that brings the creative control and flawless sonic performance of the studio to wherever you happen to be.
12 FOR THE ROAD
The big difference between the Sony MX-P61 and other studio-quality 12-channel mixers is that the Sony can be tucked neatly into a small case and carried to any location—thanks to its switching power supply, transformerless design and, of course, the fact that it's made by the company that's best at making big things small.
Its myriad professional features include transformerless, electronically-balanced inputs and outputs, complete equalization for comprehensive signal control and modular construction for reliability and easy maintenance. Along with the phenomenal sonic performance with which the name "Sony" has been synonymous for decades.
THE 4-CHANNEL MIXER
FOR EVERY CORNER OF THE GLOBE
The incredibly small and light MX-P42 lessens not only your burden, but the complexities of field recording as well. That's because each input incorporates a fast-acting compressor/expander with gain make-up control. So input levels can be preset separately, then maintained automatically during recording.
HIGH QUALITY FOR LOW BUDGETS
The family resemblance between the 8-channel MX-P21 and Sony's more expensive portable mixers is readily apparent:
The MX-P21 is portable, durable, and has an incredible array of features for its size—including phono EQ, fader-start and cascade interface.
All of which makes the choice between Sony and any other portable mixer a simple one.
Just decide whether you want all your location recordings to be as good as studio recordings.
Or not to be.
For a demonstration or more information, call Sony in the North at (201) 368-5185; in the South (615) 883-8440; Central (312) 773-6000; West (213) 639-5370. Or write Sony Professional Audio Products, Sony Dr., Park Ridge, New Jersey 07656.
©1985 Sony Corp. of America. Sony is a registered trademark of Sony Corp. Sony Communications Products Company, Sony Drive, Park Ridge, New Jersey 07656.
EQ characteristics of the MX-P61.
The Sony MX-P42 weighs in at a scant 8 lbs. 10 oz.
For additional information circle #239
MIDI Recorders: Myth or Reality?
An R-e/p Guide to MIDI Data Recorders and Sequencer Software Designed to Run on Personal Computers
by Stephen St. Croix
Many of us are living today, sort of artificially, thanks to the present state of technological advancement of medicine; these thousands of people would not be alive if it were not for artificial means of some sort. Many of us are also playing music today, sort of artificially, thanks to the present state of technological advancement in computer hardware and software; these people could not be creating as high a quality commercial product if it were not for artificial means of some sort.
I am one of those persons who composes and plays great music, just as long as massive amounts of computer and multitrack technology are there to help. I have had simple data transfer and sync programs operating with my equipment for years, as have several other people, but it has never been enough. Then came MIDI.
With the advent of a Musical Instrument Digital Interface we, as engineers and artists, were finally faced with what the musical-instrument industry has been threatening us with for years: data standardization; or for players: equipment from different manufacturers that can talk to each other. As limited as MIDI is (compared to the speed and flexibility of some of the older computer-interface standards), the fact that the interface was designed to do exactly what we want done in the musical environment makes it a powerful tool that can greatly improve productivity.
Linking together synthesizers with simple Note-On and Note-Off data came first, then the locking of drum machines and additional synthesizers. Simple synchronizing to and from tape, and the transfer of velocity, bend, pressure and other variables followed shortly afterwards. Finally, there appeared serious data transfer, such as bulk store and System Exclusive commands, which allowed great stuff like playing your drum machines from your favorite MIDI-equipped keyboard, assigning any drum sound to any key, and having real-time velocity control. Then came enhancements such as saving voices to floppy disk, rather than those mystery cartridges. The MIDI evolution became graphically obvious when voice editing appeared for the ubiquitous Yamaha DX-7, in the form of the DX-PRO package from Kevin Laubach.
But artists who felt that computers could aid them in composing were still divided into three groups: those who made use of the limited dedicated sequencers available; those who developed their own MIDI interfaces and recording/editing software; and those who gave up in frustration and sat down to wait.
Technology moves very rapidly once a market for it develops, and MIDI recorders are no exception. We now find ourselves in an interesting position: in just a few months we have moved from waiting for the first MIDI data recorder to appear in non-crash dress, to suddenly having so many available that it has become almost impossible to compare them.
I began looking into the new MIDI data recorders for the third time many months ago, and found it almost impossible to collect all of them together for comparison and evaluation. Although a very large and complete music store near Washington, DC, had several of the MIDI software packages on its shelves — plus the necessary hardware interfaces for linking the controlling personal computers to the MIDI-equipped keyboards — it found itself in a similar position: there was simply too much to wade through. Since many studio engineers and producers might also be facing the same problem, R-e/p has decided that it might be timely to publish this in-use field comparison and overview of some of the more innovative MIDI data recorder software.
When I began comparing the various packages available for the Apple II, Macintosh and IBM PC (plus PC-compatibles), I thought that I would simply sit down for a few days and write a tune on each one, thereby learning all that I needed to know about each system. No. Instead, I discovered that even though I had been hearing about these packages for some time now, less than one-quarter of them actually existed in a usable form; of these, only a few actually implemented all the advertised features. The entire field seemed to be just coming into existence. Further, and sadly so, of the packages that I actually did manage to try out, most were under-developed, under-researched, or under-powered. It seemed that it was just too soon to dispose of the 24-track, much less write this article.
Despite such reservations, I went ahead with the evaluation project, and was assembling a very negative overview of the whole endeavor when I called one of the publishers of a MIDI recorder in order to ask why the system seemed so immature. The answer that I received was interesting. It seems that the program was, in fact, very immature, and that a new version was coming "real soon." I explained my situation, and asked if my "real soon" could be now. The new version arrived the next day, and the difference was so remarkable that I thought it only fair to call all of the companies involved, and ask for their newest versions. Almost all of them did have newer versions, either instantly available or in beta stage (under development).
As these finished and experimental versions began arriving, a totally new picture developed. The MIDI data recorder programs had matured so much over the three-month period I had been examining the older ones, that one thing became very clear: the entire comparison had to be done over. Some of these companies must really have listened to their first customers, because the new programs were great.
There is an impressively wide selection of features currently offered, and the systems represent a spread from simple multitrack sequencers to powerful multitrack MIDI data recorders with real-time and step editing (features that allow creative minds with out-of-control fingers to create controlled music). Some systems allow you to treat them almost as if they were real mechanical multitrack tape recorders with additional features, while others are "modular" or phrase oriented, building songs from short basic phrases much like drum machines. The more elaborate systems even allow you to clean up horrible timing and other errors, and then go through and "rehumanize" (reintroduce smaller errors) to each note by hand.
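The "clean up, then rehumanize" workflow described above can be sketched in a few lines of code. Everything here is a hypothetical illustration (function names, the sixteenth-note grid, the amount of reintroduced "slop"), not the implementation of any package reviewed in this article:

```python
import random

PPQN = 24                 # MIDI clock pulses per quarter note
SIXTEENTH = PPQN // 4     # auto-correct grid: 6 pulses per sixteenth note

def auto_correct(times, grid=SIXTEENTH):
    """Snap each note-on time (in clock pulses) to the nearest grid point."""
    return [round(t / grid) * grid for t in times]

def rehumanize(times, max_slop=2, seed=0):
    """Reintroduce small random timing errors, in pulses, after correction."""
    rng = random.Random(seed)
    return [max(0, t + rng.randint(-max_slop, max_slop)) for t in times]

sloppy = [0, 7, 11, 19, 25]       # a loosely played sixteenth-note run
tight = auto_correct(sloppy)      # [0, 6, 12, 18, 24] — machine-perfect
loose_again = rehumanize(tight)   # each note nudged by up to 2 pulses
```

The more elaborate systems described above let you do the second step by hand, note by note, rather than randomly as sketched here.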
Each approach clearly has its own merits and power. I finally chose more than one system so that I could transfer between them, and use my favorite features of each (remember, there is no actual audio recorded, so there is no signal-quality degradation).
There seem to be five types of potential user for MIDI recorder software or dedicated hardware packages:
- The person who just wants a good sequencer.
- The player who wants to use a computer as a tape recorder, to jam into, record tunes and be able to overdub and punch.
- The person who might not be the most flawless player, but who wants a system so he can play a little loose, even miss a few notes, and then go in and time correct and fix the bad bits.
- The guy who wants to build songs from sections, like he would on a drum machine.
- The pro who needs to do ultra-precise production work within exact time constraints, where a need for repeat phrasing may exist but the ability to edit each note independently afterwards must be provided.
A few of the currently available systems deserve highlighting here, since they were clearly exceptional. No doubt there are other very good programs out there, but I found the four systems described in greater detail in the remainder of this article to be the most impressive of the ones that actually showed up in time to be included in the first part of this overview (several promised packages never showed). A companion sidebar contains detailed descriptions of these and other software packages designed to run on various personal computers. This writer would be interested in hearing from other companies that are developing MIDI software, for possible inclusion in part two of this article, to be published in a subsequent issue of R-e/p; write me c/o the magazine.
Syntech Music Digital Studio II
Tape recorder emulation systems are the most natural for a musician to use, and initially the most rewarding. Such programs generally allow you to pick one of several tracks on which to record polyphonically, with little or no complicated setup. You may then overdub on individual tracks and bounce between them all you want. MDS II offers just such direct simplicity, and more.
So far, this program is definitely the one for the Apple II; it is very fast and the features work. Only eight tracks are provided (though this is the most I have seen on the Apple II), and you can set up a total of 16 sequences. Each track is channel assignable. MDS offers solo and mute for each
— the Author —
Stephen St. Croix is a real neat guy who studied to be a welder, but at the last minute decided to be a rock star. The transition is not yet complete.
State of the art shouldn’t create a state of confusion.
Octave Plateau’s new SEQUENCER PLUS makes music, not misery.
If you’re serious about your music, you’ve probably been looking at MIDI sequencers. And if you’ve really looked, you probably found that dedicated sequencers just don’t do enough to justify all of the trouble and expense. Now comes the breakthrough you’ve been waiting for, as Octave Plateau introduces the studio-on-a-disk: SEQUENCER PLUS. By harnessing the power of IBM-compatible PC’s, SEQUENCER PLUS offers more features and ease of use than any dedicated sequencer ever could.
64 tracks. 60,000 notes. Punch-in/punch-out. Complete editing. And we mean complete. Track by track, and note by note. All of the flexibility you’ve wanted from a sequencer, but never thought you’d get. If you’d like to know more about SEQUENCER PLUS, or any of our other American-made products, like the Voyetra Eight synthesizer, drop us a line. We’ll be glad to send you one of our informative brochures and a list of dealers in your area. And if you’re still not convinced, go ahead, check out the competition. When your options are state of the art or a state of confusion, the choice is simple.
Octave Plateau
The sound approach to technology
51 Main Street
Yonkers, NY 10701
914 • 964 • 0225
For additional information circle #240
track, along with track bounce, track shift, program change editing, and controller filters. (Time shift enables sliding of an entire track backwards or forwards by very small amounts, for special effects or to compensate for a synthesizer's MIDI lag.) The program allows you to store sequences and whole songs to disk; sequences can be loaded and appended to build larger, complete songs. This means that you can keep a library of standard drum parts, for example, to use in many versions of a song.
Merely listing the features of these programs would take a lot of space, and doesn't mean too much. What really matters is how each one does its job, and Music Digital Studio II is easy to use and to learn — it works. The auto correct works well, which is rare. You can punch on-the-fly or in step mode. You can step enter notes; you can even step by clock pulses to tear apart a chord and fix a bad note. (Remember that MIDI is a serial interface, so even if you manage to slam down five keys in a chord at exactly the same time, the data is sent and recorded as a series of note-on/off pulses one after the other.)
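The serial nature of MIDI mentioned in the parenthetical above can be made concrete: a chord struck "at once" still goes down the wire as a sequence of three-byte Note-On messages, one after the other. This sketch, with assumed note numbers and velocity, builds that byte stream:

```python
NOTE_ON = 0x90  # Note-On status byte, channel 1 (0x90 | channel number)

def chord_bytes(notes, velocity=100, channel=0):
    """Serialize a 'simultaneous' chord into its actual MIDI byte stream."""
    stream = []
    for note in notes:
        # each note becomes its own three-byte message: status, note, velocity
        stream += [NOTE_ON | channel, note, velocity]
    return bytes(stream)

c_major = chord_bytes([60, 64, 67])   # C4, E4, G4
print(c_major.hex(' '))               # 90 3c 64 90 40 64 90 43 64
```

At MIDI's 31.25-kbaud rate (roughly 320 microseconds per byte), each three-byte message takes about a millisecond to transmit, which is why even a perfectly struck five-note chord is recorded as a short staggered series of events that step editing can then tear apart.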
There are a few points that I didn't like about the software, but I understand that they are related to limitations in current hardware standards. This system supports Passport-type hardware and, in fact, a much nicer MIDI interface card sold by Syntech was included with the package. However, the software does not support the Roland MPU-401 interface card; as a result, even though a visual metronome is featured, no audio metronome is available. When confronted with this omission, the company stated that it expects people to have a drum machine synchronized to the interface card, a configuration that would provide a perfect metronome. (Which seems like a reasonable answer.) Further, unless you are actually recording, there is no MIDI-Thru capability. The system will not make use of additional memory cards fitted to the Apple II, so use those filters.
The owner's manual is very good, and includes a command listing with such nice instructions as "Z" for "Zero all Channels Pitch and Mod Wheels." (A few days spent with most other MIDI software programs leaves you
The Linn 9000 is conceived for every artist, every songwriter whose creativity demands the finest in technology.
Designed for musicians by musicians, the Linn 9000 incorporates the world's most sophisticated touch sensitive digital drum machine with the most advanced 32 track MIDI sequencer. There is virtually no songwriting style that it cannot accommodate, instantly. There is no manner of performance or personal expression that it cannot precisely duplicate.
A glance at the control panel tells you that when inspiration strikes, the 9000 makes it effortless to capture, arrange and edit your music. What you can't see are its unique sound sampling capabilities and the extensive Linn Library of professional quality sounds.
Isn't it about time you visited your Linn dealer and experienced the Linn 9000 for yourself?
Imagine the possibilities.
The inventors of the digital drum machine now offer you the most sophisticated compositional tool ever created. The Linn 9000.
Linn Electronics, Inc.
18270 Oxnard Street, Tarzana, CA 91356
(818) 708-8131 TELEX #298949 LINN UR
For additional information circle #242
looking for just such a command.) Those of you with one disk drive will like the fact that, once you have booted the system diskette, you can remove it for the duration. I will be covering the company's other software packages in part two of this article, including what seems to be a very nice version of the above-mentioned program designed to run on the Commodore 64.
Octave Plateau Sequencer Plus; REV 2.0
Frankly, I wasn't expecting too much from a program that calls itself a "sequencer," while all the others are screaming that they are "multitrack recorders." The first version of Sequencer Plus that I received was actually kind of nice; it was better than I had expected, but nothing amazing. This program runs on an IBM PC or compatible, with control of screen attributes for each particular computer. While the program worked, it had some limitations that were a bit frustrating, and some quirks that were a little annoying. The second version, however, is incredible — the limitations and quirks are gone. You get no less than 64 tracks to play with; and editing that is well thought out and powerful... very powerful.
The first revision also had 64 tracks, which brings up a point that I feel strongly about. Most of the other companies that publish MIDI programs with only a few tracks talk about that fact being unimportant, because you can bounce down. But, since you can't take the tracks apart again after you bounce, global editing on one of the bounced overdubs becomes impossible. If, for example, you decide at a later time that you want the arpeggios before the chords to be on a different MIDI channel, you can only make the change if they were residing on their own track(s). Because of this restriction, I prefer to have lots of available tracks (64 is lots) with solo and mute.
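A toy model makes the bounce objection concrete. While the arpeggios occupy their own track, changing their MIDI channel is a single global edit; after a bounce, the events are interleaved with the chords and that per-part identity is gone. All structures and names here are invented for illustration, not taken from Sequencer Plus:

```python
# a "track" here is just a channel number plus (time, note) events
arps   = {"channel": 1, "events": [(0, 72), (6, 76), (12, 79)]}
chords = {"channel": 1, "events": [(0, 48), (24, 53)]}

def set_channel(track, channel):
    """Global edit: move every event on this track to a new MIDI channel."""
    track["channel"] = channel

def bounce(*tracks):
    """Merge tracks into one; per-part identity is lost, as the text warns."""
    merged = sorted(ev for t in tracks for ev in t["events"])
    return {"channel": tracks[0]["channel"], "events": merged}

set_channel(arps, 5)          # trivial while the arpeggios are separate
mixed = bounce(arps, chords)  # after this, no per-part channel edits exist
```

With 64 tracks plus solo and mute, there is rarely a reason to bounce at all, which is the point being made above.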
You can see as much as 72 bars of 22 tracks at one time; full cut-and-paste editing is now supported; pop-up windows get you directly to functions from anywhere in the program; individual tracks or whole songs can be saved; external sync and "locating" MIDI song position are also supported. Lots of nice punch commands exist, along with shuttle ("play range").
When the program is playing back, the screen moves across the music so that you can always see what is going on, and where you are. This feature is not a gimmick: the ability to see where that klunker is when you hear it is valuable!
The owner's manual is in a nicely executed three-ring binder, and is well written, but doesn't really say much when you consider the power of this program. And it doesn't have to, a fact you learn surprisingly fast. The IBM PC function keys are very well used, and all the help that you could want, both general and specific to your current command, pops up on screen with one keystroke. The program will install on a hard disk attached to an IBM PC-Series, which makes life a lot easier.
Sequencer Plus does not offer music notation, but the people at Octave Plateau are working with other companies to establish a standardization
When the boys from the engineering department walked in with their newest creation, we said: "Nice looking box. What is it?"
"This," they said proudly, "is our new MSP-126 Multi-Tap Stereo Processor. It's a stereo-tapped digital delay line with a 20kHz bandwidth, eight pre-programmed processing modes, and . . ."
"Hold the engineering jargon," we said. "Just tell us what this gizmo does."
"Oh, no problem," they said. "Basically, the MSP-126 is a signal processor that creates a whole range of interesting effects. To begin with, it produces really great balanced stereo with flat response from any kind of program material. And it also creates other kinds of effects—some of which are subtle, dramatic, or even bizarre. It's easy to fine-tune the effects you get, too. For each of the eight effects modes, there are 16 delay parameter set-ups and 16 amplitude variations. Okay?"
We tried to look enthusiastic. "Well, maybe it would help if you could just give us a few examples of these effects," we said.
"Good idea," they said. "One of the neat things the unit does is produce forward and backward discrete repetitions. Then there's a traditional 'comb filter' stereo synthesis. And delay-based panning. And binaural image processing for Walkman applications. And delay clusters. And concert hall early reflections."
"That's better," we said. "We've probably got enough to do a pretty good ad for you. Before we go, though, you probably ought to run us through a quick demo. That might help if we get stuck for the right word to describe what the effects sound like."
"Sure," they said. "Hope you like what you hear."
So we listened. Then we walked over to the typewriter, rolled in a blank sheet of paper, and typed a headline that seemed to say it all:
"WOW!"
If you'd like to see why we're so excited about the MSP-126, ask your nearest Ursa Major dealer for a hands-on demonstration. It's an astonishing experience.
MSP-126 STEREO PROCESSOR
URSA MAJOR, Inc.
Box 28, Boston, MA 02258 USA • Telephone (617) 924-7697
Telex: 921405 URSAMAJOR BELM
October 1985 □ R-e/p 155
For additional information circle #243
The original Doctor Click has been used to create countless innovative hit records, TV and film scores, and special effects, and has spawned a whole new generation of imitators (the sincerest form of flattery, you know).
Now, you can get more of the good Doctor's medicine for a lot less.
DOCTOR CLICK 2 — More of the Doctor, for half the price.
DOCTOR CLICK 2—This new Studio Quality synchronization system from Garfield Electronics has the features and performance that made the original Doctor Click the standard of the industry, and more. DOCTOR CLICK 2 synchronizes to click tracks, live tracks, the 5 sync-to-tape codes, both DIN Sync formats, and MIDI clocks for unsurpassed adaptability. From any of these sources, DOCTOR CLICK 2 generates, simultaneously, 6 time-base clocks, the 5 sync-to-tape codes, the DIN Syncs, MIDI clocks, click and trigger outputs. Additionally, a variable clock channel provides 22 triggering rates including syncopated rhythms for step sequencers and arpeggiators.
Best of all, DOCTOR CLICK 2 incorporates new technology that enables us to give you the DOCTOR CLICK 2 for half the price of the original. Twice the machine at half the price!
MASTER BEAT — The Ultimate Studio Interface.
At the request of many studios and top musicians, Garfield Electronics has developed the only true open-ended code interface system. MASTER BEAT, a studio system so versatile that the only limit is your imagination. This features list gives a glimpse of the performance control horizons possible with MASTER BEAT.
The Only SMPTE Synchronizer With All These Features:
- Sync generation in beats per minute, 24, 25, or 30 FPS film/video calibrated tempos from ALL SMPTE/EBU formats: 24 and 25 frame, 30DF and NDF. Produces all 4 codes as well.
- Simultaneous production of all sync formats: 6 fixed clocks, 1 variable arpeggiator/step sequencer clock, the 5 sync-to-tape codes, 2 DIN sync formats, and MIDI. Click and trigger outputs also.
- A "Doctor Click facility" enabling sync to click tracks, live tracks, MIDI, and all tape sync codes for total compatibility.
- High resolution programmable "timing map" allowing any beat interval sequence to be stored.
For a demonstration of the amazing capabilities of DOCTOR CLICK 2 and MASTER BEAT, call or write for the location of your nearest dealer.
Garfield Electronics
The Home of DOCTOR CLICK
P.O. Box 1941, Burbank, CA 91507 (818) 840-8939
Our Only Business Is Getting Your Act Together.
of MIDI song-file formats. If this happens, you will be able to instantly transfer songs from one company's package to another. If this does not happen, watch for conversion programs to appear.
**Southworth Music Systems**
**Total Music**
This program, designed to run on a 512-Kbyte Apple Macintosh, should be called Total Shock. To make one little computer work this hard, and do this much, is almost cruel. Total Music is the most complete MIDI package that I have ever seen. Those of you who believe that programs for the Macintosh should be as "Mac-like" as possible will fall for this one in seconds. Notes, bars, whole sections of music can be cut, pasted, transposed and more by defining them with mouse-placed brackets. In fact, I used this program for two days without the Mac keyboard even being plugged in.
You have to envision this program as being multidimensional: it provides a matrix of 99 sequences by 16 MIDI channels, and allows total movement and editing power within this matrix by means of graphic screen manipulation of each note's timing, duration, velocity value, and more. Total Music supports two types of real-time punch, three types of timing correction and, of course, full cursor editing.
At the time of writing this article the program was still under development, but it should be on the street by the time this issue appears on your desk. I found the people at Southworth Music Systems to be as helpful and responsive as could be. After playing with my first disk, I had some questions and suggestions about greatly expanding the data editing capabilities of the MIDI controller. Less than one day later they called back and informed me that these items would be included! One will now be able to tag any note with the mouse, and ask for a display of all MIDI information related to it, in addition to the normal editing screen display. Similar upgrading has happened a few more times since then, including requests for multiple ways of viewing the immense data matrix, with similar responses. Total Music is now a true monster. [According to Paul Lehrman of Southworth Music Systems, every attempt will be made to include these and other features suggested by the user — Editor.]
The mouse-driven, on-screen buttons, classic Macintosh pull-down menus, and drag bars make it possible to do most things intuitively, but not all. The owner's "novel" runs just short of 200 full-size pages. I read it cover to cover, and the text is actually interesting! In addition to teaching you how to use the system, the manual contains notes and pointers on MIDI and several associated products. Parts of it read like notes from a friend who has discovered a way to do something better, or maybe a strange thing that happens if you try to do certain things with certain pieces of gear. They help.
The manual contains no reference section, however. Since there are no on-screen help pages, it would be very nice to be able to immediately find the proper area in the book for support when the need arises. Maybe this will be corrected by the time Total Music is released.
The release version of the program will enable registered users to back up the master diskette — which, considering the mysteries of the Mac mind, is very important. The program is keyed to the hardware, so if you buy two systems, don't mix them up.
The MIDI hardware interface card, wisely, does not draw power from the Mac's PSU. The interface has four outputs and two inputs that are all active simultaneously. Did I mention that Total Music provides full conventional musical notation, on-stave editing and very good score-sheet printing?
**Cherry Lane Texture**
This program was one of the first to reach the market, and it is the first revision that I cover here. (The next revision of Texture is almost ready, however, and should be covered in the next part of this article.)
Texture is also quite different from the majority of other MIDI software. Written by Roger Powell, a member of an exclusive club (being both a musician and a programmer), the program is designed around what he calls a "Modular" approach to song building. Systems like this are structured to work in a similar way to drum machines — you build up little sections and assemble them, but with modifiers like key changes and differing combinations of track timing or "rotation." Such a programming approach allows extensive manipulation of small phrases: repeating a short four-bar verse loop, say, then calling a bridge loop, then another verse loop, and so on.
With Texture you can mask overdubs so that different ones are brought up with different loop sets. The advantages offered by such a compositional technique include the ability to construct songs very quickly; to store song segments for use else-
---
**YOU CONFIGURE IT OUT FOR YOURSELF!**
You really can! Wireworks Mix & Match Components Group gives you all the products you'll ever need to create your own perfect audio cabling system. And our Audio Cabling Design Kit shows you exactly how to put them all together.
Call or write today for your free Design Kit. With Wireworks you really conduit!
**wireworks**
Wireworks Corporation 380 Hillside Avenue, Hillside, NJ 07205
201/686-7400 800/624-0061
For additional information circle #245
where; and minimal use of memory (since these systems don't actually copy the looped song segments, but merely call the same one up over and over again). The bad part is that songs built in this way may tend to be "cold" or over-structured, since each repeated section is identical, with no unique errors or "slop" (expression).
The owner's manual teaches you about the rules of modular recording; you learn about tracks, patterns and links. There is even a short tutorial that walks you through a tune as an example. A quick reference card included with the software is excellent, and is all you need once you've learned the system.
The program is structured so that a song may have up to 64 links taken from a library of up to 64 patterns. There are eight tracks, each of which is MIDI-channel assignable.
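The pattern-and-link structure described above can be sketched in a few lines of Python. The limits follow the article (64 patterns, 64 links, eight tracks), but the data layout and names are illustrative assumptions, not Texture's actual internals:

```python
# A "pattern" is a short phrase: one list of MIDI events per track.
# A "link" references a pattern by index, so repeating a section
# stores only the reference, never a copy of the events.

MAX_PATTERNS = 64
MAX_LINKS = 64
NUM_TRACKS = 8

def play_order(links, patterns):
    """Expand a song's links into the flat sequence of patterns played."""
    assert len(links) <= MAX_LINKS and len(patterns) <= MAX_PATTERNS
    return [patterns[i] for i in links]

# Verse and bridge phrases; tracks beyond the first are left empty here.
verse = [["C4", "E4", "G4"]] + [[] for _ in range(NUM_TRACKS - 1)]
bridge = [["F4", "A4"]] + [[] for _ in range(NUM_TRACKS - 1)]
patterns = [verse, bridge]

# Song: verse, verse, bridge, verse -- four links, but only two stored patterns.
song = play_order([0, 0, 1, 0], patterns)
```

Because each link stores only an index, repeating the verse costs no more memory than playing it once, which is the economy noted earlier.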
Step editing is good, and all of the standard features are there. Although Texture is not a graphics-oriented program in its present revision, once you have learned how to read the screen you discover that a lot of data is visible at one time. Editing is by direct manipulation of MIDI data values. Much to my surprise, I found that I became very fast at editing within a very short period of time. A lot of different types of note and controller data could be massaged very quickly. The new revision is promised to be "days away," and I look forward to updating you on its features in part two.
Concluding Comments
There are several factors that I became aware of while evaluating MIDI recorder systems, of which I have chosen to spotlight these four packages; some may seem obvious, and others may just save you a day's work. The MIDI data streams produced by Bend and Mod Wheel controllers, plus Aftertouch (pressure), can be very dense, use up massive amounts of memory, and make a song file quite large in not very much time. Furthermore, the production of high-density information can actually "clog" the MIDI data stream, and cause audible slowing or lagging of parts of the music, or even strange errors. Some synthesizers, including the Yamaha DX-7, buffer the data in such a way as to grossly aggravate the clogging problem. It is for this reason that most of the systems offer "filters" that allow stripping of such data. (Sequencer Plus actually allows you to "thin out" the data without losing it.) Use them; it makes no sense to load things down with 20 Kbytes of pressure information if the synthesizer assigned to play back that track can't use it!
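How a "thin out" filter of this kind might work can be sketched as follows. The actual algorithm Sequencer Plus uses is not documented here, so this Python fragment simply keeps a controller event only when its value has moved by some threshold since the last kept event:

```python
def thin_controller(events, threshold=4):
    """Drop controller events whose value barely differs from the last kept one.

    events: list of (clock, value) pairs, e.g. a dense mod-wheel sweep.
    The first and last events are always kept so the end points survive.
    """
    if len(events) <= 2:
        return list(events)
    kept = [events[0]]
    for ev in events[1:-1]:
        if abs(ev[1] - kept[-1][1]) >= threshold:
            kept.append(ev)
    kept.append(events[-1])
    return kept

# A 128-step wheel sweep collapses to a handful of events.
sweep = [(t, t) for t in range(128)]
thinned = thin_controller(sweep, threshold=8)
```

The sweep still starts and ends in the same place, but the stream between is far less dense, which is exactly what relieves the clogging described above.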
MIDI programs that emulate tape recorders generally use more memory than block editors, because the latter play back segments many times, and transpose them, etc. Some systems will allow you to "rush" or lead tracks by one MIDI clock pulse at a time (called track shift), in order to compensate for older synthesizers with MIDI retrofits, and slower ones like the DX-7 that can lag behind as much as 150 milliseconds.
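Track shift is simple to picture in code. The Python sketch below uses illustrative names and moves every event on a track by a whole number of MIDI clock pulses (standard MIDI sync runs at 24 pulses per quarter note):

```python
PPQN = 24  # standard MIDI sync: 24 clock pulses per quarter note

def shift_track(events, pulses):
    """Rush (negative) or lag (positive) every event on a track by whole pulses.

    events: list of (clock, message) pairs. Events shifted before clock 0
    are clamped so nothing is scheduled before the song starts.
    """
    return [(max(0, clock + pulses), msg) for clock, msg in events]

# A synth that lags roughly 150 ms at 120 bpm: one quarter note lasts 500 ms,
# so one pulse is about 21 ms and 150 ms is roughly 7 pulses of rush.
track = [(0, "note_on C3"), (24, "note_on G3"), (48, "note_on C4")]
rushed = shift_track(track, -7)
```

Leading the sequencer data by those seven pulses makes the slow instrument sound on the beat with everything else.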
Your disk drive is always sitting there waiting — save your work often. All of these systems are new, and you never know when you might discover the magic sequence of events to send the program and your music to Mars forever. (I managed to find a way of doing just that on almost every software package.) Even if this doesn't happen, a power glitch might. In the words of one of the owner's manuals: Use it or Lose it.
If you use MIDI Thru as you record, be sure that it is not returning to the keyboard that you are playing; if it does, you are in for a strange surprise with every one of these programs, and even without them.
My thanks to Washington Music Center in Wheaton, Maryland, and Robert Levin and John Chase there, without whose help this article would have been very, very short.
In Part Two: Hardware MIDI recorder systems such as the Yamaha QX-1 and the Roland MSQ-700 and -100; plus a look at other software packages (and the new Harley-Davidson FX-ST Soft tail). A list of the companies offering MIDI recording software and hardware is included in an accompanying sidebar. Given the speed at which things change in this industry, there is a good chance that the listing is incomplete. I invite any companies that we have omitted to contact the R-e/p for inclusion in the next part of this article.
**HYBRID ARTS, INC.**
11920 West Olympic Boulevard
Los Angeles, CA 90064
(213) 826-3777
**MIDITRACK II**
Runs on: Atari 800XL
Interface: MidiMate interface cards available for different synthesizers, plus Cherry Lane interfaces.
Tracks: 16.
Key Features: Sequences can be up to 6,500 MIDI events in duration; 16-track overdubbing, punch-in/out, autolocate, full MIDI Channel assign, velocity encoding, pitch and mod-wheel recording; program-change recording; transpose and quantizing; step editing; a variety of sync in/out interfacing; three entire recordings saved per floppy disk; visual and audible metronomes. (MidiTrack III, with a 12,000-note capacity, is scheduled for release in the near future; price has yet to be announced.)
SRP: $349.00, including MidiMate interface, cables and user's guide.
**MUSIC DATA**
844 Wilshire Boulevard
Beverly Hills, CA 90211
(213) 635-3580
**MIDI SEQUENCER**
Runs on: Commodore 64, Apple II+ and IIe.
Interface: Passport Designs-compatible interfaces, or company's own interface.
Tracks: 16 tracks, with 16 unique sequences per track; polyphonic.
Key Features: 8,500 MIDI events; "ease of operation — everything you need is on the screen at all times"; song selection capability, each track having its own independent length; looping capability; each set of 16 tracks can be assigned to any 16 MIDI channels.
**MUSICWORKS**
16 Haviland
Boston, MA 02115
(617) 266-2886
**MEGATRACK**
Runs on: Apple Macintosh.
Interface: Company's Midiworks MIDI interface.
Tracks: Unlimited via 32 MIDI channels.
Key Features: Capacity dependent on memory size of Macintosh (1 MByte of memory is equivalent to 72,000 notes or events); unlimited overdubs; mouse operation; "ease of use."
SRP: $150.00; Midiworks interface $100.00.
**OCTAVE PLATEAU ELECTRONICS, INC.**
51 Main Street
Yonkers, NY 10701
(914) 964-0225
**SEQUENCER PLUS**
Runs on: IBM PC, or compatibles; operates under MS-DOS.
Interface: Roland MPU-401, and company's OP-4001.
Tracks: 64.
Key Features: 60,000-note capacity; user interface and menus styled after Lotus 1-2-3™; menu-driven operation.
SRP: $495.00; OP-4001 interface $295.00.
**OPCODE SYSTEMS**
1040 Ramona
Palo Alto, CA 94301
(415) 321-8977
**MIDIMAC SEQUENCER**
Runs on: Apple Macintosh with 512 Kbytes of memory.
Interface: Any available MIDI interface, including own product.
Tracks: 10 tracks per individual sequence, but with replay of previous sequence while recording another; 32 tracks available for simultaneous replay.
Key Features: 48,000 MIDI events (24,000 notes); utilizes Apple mouse and pull-down menus; "fast and highly interactive"; configured for live-performance applications; MIDI keyboard controls pitch of replayed sequence.
SRP: $150.00
**PASSPORT DESIGNS, INC.**
625 Miramontes Street #103
Half Moon Bay, CA 94019
(415) 726-0280
**MIDI/4 PLUS AND MIDI/8**
Runs on: Apple II+, IIe, IIc and Commodore 64.
Interface: Company's own interface, available with or without tape-sync capability.
Tracks: Four and eight, respectively.
Key Features: 7,000-note capacity; auto-correct, punch-in/out, fast forward/rewind modes; sequence chaining; sync-to-tape (with suitable interface); sync to MIDI and drum machines; real-time editing; tempo control; can be linked to Polywriter music-printing software.
SRP: MIDI/4 plus $99.95, MIDI/8 $149.95 (upgrades for existing MIDI/4 software $35.00); Apple IIe/II+ interface with drum and tape sync $199.95, interface with just drum sync $149.95; Commodore 64 drum/tape interface $169.95, drum-only $129.95. (Also available is a MIDI Pro interface for the Apple IIc, IBM PC and Apple Macintosh; $249.95.)
**ROLANDCORP US**
7200 Dominion Circle
Los Angeles, CA 90040
(213) 685-5141
**MPS (MUSIC PROCESSING SYSTEM)**
Runs on: IBM PC, or compatibles; requires IBM standard color-graphics card (but will work with monochrome monitor).
Interface: MPU-401 plus MIF-IPC interface card.
Tracks: Eight, with unlimited merging.
Key Features: Up to 12,000 MIDI events with 256-Kbyte memory, and up to 65,500 with 640 Kbytes; built-in editor enables modification of sequence data down to a single note; transcribes a sequence into standard musical notation for output to a standard dot-matrix printer; handles all MIDI commands; built-in data "filter" to strip out pitch-bend and mod-wheel information and conserve memory; each track can contain information from up to 16 MIDI channels; "highly interactive user interface"; internal-sync, MIDI-sync and FSK/tape-sync.
SRP: $495.00; MPU-401 $200.00; MIF-IPC $110.00.
**MUSE (MIDI User's Sequencer/Editor)**
Runs on: Apple IIe, IIc, II+ (with 64 Kbytes of RAM) and Commodore 64.
Interface: MPU-401 plus MIF-APL interface card for Apple II+ and IIe; no interface card needed for Commodore 64; custom-designed interface available from J.L. Cooper Electronics for linking MPU-401 to Apple IIc.
Tracks: Eight, with unlimited merging.
Key Features: Up to 6,000 MIDI events; use of joystick enables "point-and-click" access to all functions; automatic punch-in/out facilities; chain mode enables entire tracks to be built out of smaller phrases; editing functions are used to delete, insert, and copy any portions of any track; controller "filters" similar to MPS software.
SRP: $150.00; MPU-401 $200.00; MIF-APL $110.00.
SEQUENTIAL
3051 North First Street
San Jose, CA 95134
(408) 946-5240
MODEL 964
Runs on: Commodore 64.
Interface: Model 242 MIDI Interface Cartridge.
Tracks: Eight.
Key Features: In excess of 40,000 notes; reads all MIDI information, including velocity, after-touch, pitch wheel and program changes (but excluding System Exclusive); autocorrect mode.
SRP: $99.00; Model 242 $99.00.
SYNTECH CORPORATION
23958 Craftsman Road
Calabasas, CA 91302
(818) 704-8509
MUSIC DIGITAL STUDIO II
Runs on: Apple IIe, II+ and Commodore 64.
Interface: Passport Designs, Yamaha, Korg and company's own interface.
Tracks: Eight, with unlimited merging.
Key Features: 16 sequences with 6,300 MIDI events total capacity; four song placements per set of sequences; "Echo Thru" function enables playing of any MIDI-equipped keyboard or module from a master keyboard; "Track Shifting" provides digital delay to move entire track backwards or forwards because of processor delay in keyboard or drum machine (from 1/96 of quarter note to length of track or sequence); sync to off-tape signals.
SRP: $225.95; Apple interface with tape sync $199.95; Apple interface without tape sync $129.95.
CHERRY LANE TECHNOLOGIES
P.O. Box 430
Port Chester, NY 10573
(914) 937-8601
CONNECTIONS
Runs on: Apple IIe (64 or 128 Kbyte).
Interface: Roland MPU 401.
Tracks: Eight.
Key Features: "Linear sequencer," set up like an eight-track recorder, with similar "transport controls" plus manual/automatic punch-in/out functions; 18,000-note storage (on 128-Kbyte Apple); looping functions; "non-intimidating program."
SRP: $149.00.
TOTAL MUSIC
Runs on: Apple Macintosh with 512 Kbytes of memory.
Interface: Company's self-powered interface.
Tracks: 1,584 (99 sequences by 16 MIDI channels).
Key Features: Eight sequences running simultaneously, with 16 polyphonic channels per sequence; external sync to any MIDI source; dual keyboard inputs; music transcription with automatic beaming, stem direction and accidentals; editing resolution of approximately 1 millisecond.
SRP: $489.00.
TEXTURE
Runs on: IBM PC and Apple IIe (128 Kbyte).
Interface: Roland MPU 401.
Tracks: Eight.
Key Features: "Modular sequencer," in which 64 variable-length patterns are built up from beat information; 52,000 MIDI events available to create patterns; each pattern within a link can be repeated up to 255 times; efficient memory usage; individual note editing, plus compositional tools available.
SRP: $199.00.
THE VIRTUAL CONSOLE
The concept of a virtual console has far-reaching implications for our industry, and to say that it is a “hot” subject at the moment is to understate the case. Mention the term to console manufacturers, and you are likely to receive responses that range all the way from dreamy predictions of the gleaming, all-digital “Studio of the Future,” to a terse “No comment.” The divergence of responses should come as no surprise: what is at stake here, after all, is no less than a fundamental redefinition of a studio’s major creative tool. Consequently, manufacturers must confront an enormous complex of interrelated issues — some emotionally charged; all intellectually challenging — if they are to offer this new emerging technology.
By definition, a virtual console is capable of taking on the characteristics of a music-recording console, or a film-dubbing console, or a sound-reinforcement console — not only in terms of system architecture, but also in its control functions. In other words, the virtual console can be likened to a mirror: it reflects the functionality of the console topography best suited to the task at hand, which is to say that it forms an image of the task itself.
This specular feat is achieved by separating every manual control element from the audio signal path, and interposing digital control electronics. A computer system — which, in a practical implementation, usually requires the use of several interlinked microprocessors to control various functions — acts as a mediator between the controls and indicators placed in front of the operator, and the signal-processing circuitry. As a result, what the operator sees as the “console” becomes simply a fully-digitized interface, the sole function of which is communication with the controlling computer. In the emerging terminology, the switches, buttons, knobs, faders and indicators whose positions are constantly being scanned by the console’s computer system are referred to as a “control surface.”
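The constant scanning described above can be reduced to a simple loop: poll every control, compare the reading with the previous scan, and forward only the changes to the signal-processing side. The Python sketch below uses invented names and is a schematic of the idea only, not any manufacturer's actual firmware:

```python
# Minimal sketch of a control-surface scan loop. The computer polls each
# digitized control and forwards only the deltas to the audio hardware,
# so an idle surface generates no control traffic at all.

def scan_surface(controls, last_seen, send):
    """Compare current control readings against the previous scan and
    forward only the changed settings to the signal processor."""
    for name, value in controls.items():
        if last_seen.get(name) != value:
            send(name, value)
            last_seen[name] = value
    return last_seen

messages = []
state = {}
scan_surface({"fader_1": 100, "pan_1": 64}, state,
             lambda n, v: messages.append((n, v)))
# Second scan: only the moved fader generates traffic.
scan_surface({"fader_1": 104, "pan_1": 64}, state,
             lambda n, v: messages.append((n, v)))
```

Since the surface is nothing but readings passed to the computer, the same `state` dictionary is also what a snapshot or automation system would store and recall.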
The various in-depth interviews that *R-e/p* conducted with major console manufacturers indicate that, for many, the advisability of pursuing the virtual-console design concept is no longer in question. Indeed, at this writing, two manufacturers are beginning production of large-scale virtual consoles, and others are planning similar introductions within the next two years.
The first company to develop a virtual console was Rupert Neve, Inc., whose Digital Sound Processing (DSP) console features a totally digital signal path. A pair of DSPs have been in service at CTS Music Center and Tape One, London, for over a year now. Additional orders from The National Sound Archives, London, West Deutsche Rundfunk, West Germany, and others have also been announced. Now Harrison Systems, Inc. has joined the fray: the company plans to demonstrate its new Series 10, the first totally-automated, digitally-controlled analog console to be offered as a production unit, at the upcoming AES Convention in New York during early October.
Among those companies with firm plans to produce a virtual console in the near future is Audio & Design/Calrec, Ltd., which currently is completing construction of a custom digitally controlled analog console slated for delivery to Thames Television, England, at the end of the year. The firm also plans to offer a production broadcast console in 1986, and a recording console system sometime in 1987. On another front, George Massenburg Laboratories and AMEK Systems and Controls, Ltd. have formed a new joint venture expressly to design, manufacture and distribute large-scale, automated virtual consoles. The company will be known as AMI, Ltd. — an acronym for AMEK/Massenburg Laboratories. (These developments are covered in greater detail in accompanying sidebars.)
The virtual console is clearly upon us, and we are likely to see some very interesting new approaches to console design over the next few years. Accordingly, it seems appropriate at this point to consider some of the questions that these and other manufacturers are currently addressing in their effort to develop practical implementations of the concept. Our discussion will include comments from those console manufacturers that consented to speak about the subject for the record.
**The Control Surface**
The digitizing of every control on a console surface opens up an extremely broad range of potential advantages and, as we shall see, some potential disadvantages as well. The key to maximizing the advantages lies predominantly in careful ergonomic design.
One of the more immediately obvious advantages of a digital control surface is the ability to scan, memorize, recall and reset every control function. Our industry has been creeping up on this capability (which George Massenburg predicts will be the "next flavor of the month" in large consoles) for some time: a prominent example of a major step in this direction is represented by the Solid State Logic Total Recall system which, as an aid in manually resetting the console to its former status, stores front-panel knob and switch settings for subsequent display. When the console surface is under digital control, however, memorized settings may be transferred automatically, thereby reducing the time required to reset the console to just a few milliseconds.
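The difference between displaying stored settings for manual reset and transferring them back automatically can be illustrated with a toy snapshot model in Python (all names here are hypothetical):

```python
def store_snapshot(console):
    """Capture every control setting for later automatic recall."""
    return dict(console)

def recall_snapshot(console, snapshot):
    """Drive each control back to its stored setting under computer
    control, rather than asking the operator to reset knobs by hand."""
    console.clear()
    console.update(snapshot)
    return console

desk = {"fader_1": 90, "eq1_freq": 1000}
saved = store_snapshot(desk)
desk["fader_1"] = 0               # the mix gets changed...
recall_snapshot(desk, saved)      # ...and is restored in one operation
```

The recall step is a single bulk transfer, which is why a digitally controlled surface can return to a stored status in milliseconds instead of minutes of knob-turning.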
But a digitized control surface implies far more capabilities than instant reset. For example, a single control may be assigned to any of several different functions. The same rotary control that serves as an auxiliary send level might, at the push of a button, become a pan control for that send. Three knobs that affect, respectively, frequency, bandwidth (Q) and boost/cut could be assigned in turn to different bands of a parametric equalizer. The number of controls on each strip may thus be reduced and, potentially, more functions included on each strip.
It goes without saying, of course, that such rotary controls will be continuous in operation — that is, without end stops — and that movement in a clockwise or counter-clockwise direction will simply provide a relative update instruction to the computer which, in turn, would alter the previous setting stored in memory. An obvious exception, however, would be servo-driven controls, which will function like conventional rotary and linear control elements, with the added ability to move the control instantly to a previously stored setting upon command.
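In code, such an end-stop-free rotary control reduces to applying a signed delta to the setting held in memory. A Python sketch, with an assumed 0-to-127 control range:

```python
def apply_encoder(stored, delta, lo=0, hi=127):
    """An end-stop-free rotary control reports only relative motion; the
    computer adds the delta to the stored setting and clamps the result."""
    return max(lo, min(hi, stored + delta))

# Clockwise clicks raise the stored value; the knob itself has no
# absolute position, so the memory is the only authoritative setting.
value = 60
value = apply_encoder(value, +5)    # a few clicks clockwise
value = apply_encoder(value, -70)   # a big counter-clockwise sweep, clamped
```

Note that the clamp lives in the computer, not the knob, which is exactly why the same physical control can serve ranges as different as a pan position and an equalizer frequency.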
Furthermore, grouping becomes a simple matter; any module could become a group master — or grand master — and every control on that module (or any combination of them) be made to affect every channel in the group. By extension, the total number of modules on the control surface may be reduced as well, with each module being assignable to any channel or group of channels. One major benefit of such a design is that it can make it easier for the engineer to handle a very large number of inputs: the modules closest at hand may be assigned to those channels that require attention, while other channels pass through the signal processor "invisibly," their settings having been previously defined and memorized.
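Grouping under digital control is equally simple in principle: the computer fans a group-master move out to every channel assigned to the group, while the audio path itself is untouched. A hypothetical Python sketch:

```python
def apply_group(channels, group, control, delta):
    """Fan a group-master control move out to every assigned channel.
    Channels outside the group keep their previously memorized settings."""
    for ch in group:
        channels[ch][control] += delta
    return channels

# A 24-input desk where channels 1-4 are assigned to a drum group.
channels = {n: {"fader": 80} for n in range(1, 25)}
drums = [1, 2, 3, 4]
apply_group(channels, drums, "fader", -10)   # pull the whole drum group down
```

Because group membership is just a list held by the computer, any module can be made a group master, or a grand master over several groups, without rewiring anything.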
The combined effect of such capabilities can reduce the physical size of a console. The ability to handle a large number of inputs from a small control surface could be of major benefit in teleproduction studios, for example. Traditionally, audio has been allocated a fairly small amount of square footage in such facilities and, in many cases, that allotment is not likely to change for a number of years. The advent of Stereo TV broadcasting, however, poses greatly increased demands on television-audio facilities. Assignable consoles may make it possible for those demands to be met by studios in which, because of space limitations, a large, traditional console simply would not fit.
In music recording, reduced console size also may offer sonic advantages. If the console area is made significantly smaller, George Massenburg points out, "the acoustical surface which normally is present to great effect in front of any sort of control-room monitor is greatly reduced." A control surface of sufficiently small size and weight could easily be positioned for minimum acoustic reflection and, incidentally, would leave more space in the control room for direct-inject keyboards, drum machines, and similar instruments.
All of which may sound great in theory, but obtaining these advantages is not as simple as it might appear. Chris Jenkins, product development manager at Solid State Logic, explains: "The most obvious potential disadvantage of assignable architecture is the other side of the coin: because they are assignable, the controls become less immediately accessible. So, you have to be extremely careful how you implement the concept, or all you end up with is a smaller desk that's harder to use."
Obviously, manufacturers must strike some kind of balance between increased capacity and operational convenience. Some very careful analysis of the ways in which engineers interact with consoles is required.

HARRISON SYSTEMS' VIRTUAL CONSOLE DESIGN PROPOSALS

The Harrison Series 10, which debuts at the forthcoming AES Convention, is heralded as both the world's first production analog virtual console, and the first audio console to feature a totally-automated control surface.

In contrast to the majority of proposed virtual console designs, the audio path of the Series 10 is not separated from the control surface: both audio and digital control signals pass through each module. The system architecture employs distributed multiprocessing, comprising two microprocessors per module strip; one local automation slave processor for every 16 modules; and a central automation master processor located in an external rack. The Harrison 20-Mbyte hard disk automation system affords dynamic automation of all console functions with sub-frame accuracy, and the entire console configuration may be reset in less than one video frame.

Each Series 10 module controls two independent, completely automated signal paths, and modules can operate in a split mode or as a tracking stereo pair. In addition to a motorized Penny and Giles fader, five rotary controls are provided on each strip; these are assignable to equalization, pan, dynamics processing, and auxiliaries. Extensive local LED displays indicate signal flow, mix routing, main routing, and automation status of the selected path. A four-character alphanumeric display serves as a "scribble" panel, as well as providing precise feedback of control settings. Levels are indicated by 40-segment LED bar meters, which can be selected to be peak or VU reading.

The Series 10 central control panel is divided into two main sub-sections. The "Shared Facilities" section addresses modules one at a time, selecting module signal flow, main assignment, mix assignment, and auxiliary source and mode. This section also allows selection of "Remote Fader," the Harrison term for the equivalent of a VCA group. Provision is made for copying all or part of the settings from one module to another.

The "Global Facilities" section of the central control panel addresses the entire console at once, determining fader mode, metering mode, and "Listen" and "Mute" assignments. Master control for the automation is located in this section, including 32 external event triggers, and two external automated functions. The Global Facilities section also incorporates a standard keyboard terminal.

As Jenkins implies, often-used functions cannot be "buried" in such a way that the operator is constantly forced to go through intermediate steps to gain access to them. The design process thus demands that manufacturers consult with their clients, anticipate their demands, and perhaps resist attempts to go overboard in relying on assignable features.

October 1985 □ R-e/p 167
An additional ergonomic question raised by assignable controls is that of control-status indication. If we press a button and thereby assign a particular rotary control to pan, for example, how should the current pan status be shown? (It should be remembered that, in a "conventional" console, a knob or fader fulfills two complementary functions: not only does it enable the user to alter a setting, the knob pointer or scale also provides a visual indication of the pan, cut/boost, level, etc.)
A number of alternative solutions to providing an indication of previous control settings on a virtual console suggest themselves. Among these are alphanumeric displays of frequency, level, pan percent, etc.; vertical rows of LEDs placed beside each control, with a scale marking; semicircular arrays of LEDs around a rotary control to show angular position; various types of CRT displays; or possibly a servo-driven potentiometer and knob.
Opinions among manufacturers differ greatly on this point. One way to arrive at a solution may be to examine how the operator visualizes a particular function — and one possible key to the visualization process is how we talk about console functions. Equalization, for example, is generally expressed as a particular amount of boost or cut (in decibels) at a particular frequency; bandwidth is rarely mentioned. It might be appropriate, then, to indicate frequency, cut/boost and Q via alphanumeric displays.
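The kind of compressed readout such an alphanumeric display implies can be sketched in code. The following Python helper is purely illustrative (the function name and format rules are assumptions, not any manufacturer's firmware): it packs an EQ frequency into a four-character field, of the sort the Series 10's "scribble" panel might show.

```python
def freq_to_scribble(hz):
    """Compress an EQ frequency into a four-character readout,
    e.g. 250 -> ' 250', 2500 -> '2.5k', 12000 -> ' 12k'.
    A purely hypothetical formatting scheme for illustration."""
    if hz < 1000:
        return f"{hz:4d}"           # plain Hz, right-justified
    khz = hz / 1000
    if khz < 10:
        return f"{khz:.1f}k"        # one decimal place plus 'k'
    return f"{round(khz):3d}k"      # whole kHz plus 'k'
```

Cut/boost and Q would need their own fields or a paging scheme, which is exactly the kind of trade-off the display question raises.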
Similar reasoning could be applied to other functions. Though various manufacturers undoubtedly will arrive at different conclusions regarding the best means of display for a given function, there seems to be a consensus that the console should allow instant appraisal of, at the least, the status of most functions of selected channels.
Local versus Central Control
Assignability also raises the possibility of eliminating local control of certain functions, in favor of a centralized panel equipped with assignable controls. The important decision to be made, of course, is which controls may be centralized, and which are best left local to the module strip.
One function that everyone seems to agree is best handled centrally is signal routing. Indeed, computer-controlled routing has already been implemented with various degrees of sophistication in a number of currently available consoles, and has even reached mid-market boards. The Soundtracs CM4400 system, for example, employs a microprocessor-controlled switching matrix to determine subgroup and master assignments, and provides a means of memorizing patches. Similarly, the Allen and Heath/Brenell CMC Series of mixers employs a computer routing system, called CARS, which allows centralized control of input/output routing and mutes; this system also includes provision for memorizing assignments. In these and other computer routing implementations, assignments are made from a central facility, rather than from duplicate switch banks provided on each module.
However, it is the combination of computer-controlled routing with assignable controls that truly makes a virtual console, since this permits the construction of virtual architectures to conform to the operational requirements of different applications. One of the reasons that such a capability is attractive to console manufacturers is that it offers the possibility of selling a single piece of hardware in many different markets, which greatly simplifies the already complex manufacturing process, shifting the burden of customization from hardware to software. (The argument assumes, of course, that the manufacturer can find ergonomic solutions that will be acceptable to all markets.)
For an individual studio, flexible architecture can be a great advantage, since it may allow the studio to more easily broaden its own market base, booking sessions for just about everything across the spectrum from music recording to film and video post production. There is some evidence that this hope is realistic. Barry Roche, president of Rupert Neve, Inc., reports that "the CTS [digital] console has been used for film scoring, mixdown, sports programs, concert programs, disk mastering, tape transfer, you name it — all with great success."
While most agree that assignments can and should be made from a central location, there is less agreement about centralizing other functions. No one seems to be in favor of controlling everything from a single knob and button, for obvious reasons. But how far, in fact, can we go in distilling the control surface? The Neve DSP, for example, provides a total of two sets of equalization controls for the entire console; these may be assigned centrally to any of the inputs, outputs, monitors, auxiliary busses, etc. Similarly, while each input may be provided with a dynamics processor (which incorporates limiting, compression, expansion and gating), the dynamics parameters are controlled centrally — not on each strip.
QUAD EIGHT COMPUMIX IV
HARD-DISK AUTOMATION SYSTEM:
A FIRST STEP TOWARDS A VIRTUAL CONSOLE
For many years, Quad Eight has been producing custom-designed consoles for the film re-recording industry. In pursuit of a virtual console best suited to its primary market, the company (now a subsidiary of the Mitsubishi Pro-Audio Group, and located in a new manufacturing facility in San Fernando, CA) has elected first to develop an automation system that is both tailored to the working practices of film re-recording, and capable of handling the data density of a large-scale virtual console.
The result of this development effort is Quad Eight's Compumix IV, a hard-disk automation system that utilizes distributed multiprocessing. The system is designed around a 68000-based, 32-bit real-time host processor running a multi-user, multitasking version of the Forth language/operating system, with Smalltalk object program extensions written by Quad Eight.
In addition to performing the functions of data synchronization and operator interface, the host stores automation data on an 80-Mbyte Winchester hard-disk. The host communicates with up to three local console computers over an ARCNET local area network link (LAN), using a one-megabit per second serial communication protocol. Each console computer, which serves as a central control point for automation functions, communicates in turn with individual 6805-based module computers via an eight-bit parallel bus.
To accommodate the needs of separate dialog, effects and music mixers, Compumix IV provides for three independent, concurrent automation environments. Each mixer controls his or her own automation using a touch-sensitive plasma display panel, and operator feedback is provided by a high-resolution color graphics display (managed by a graphics co-processor). At any time, each mixer has access to a maximum of four separate, real-time automation records that can be written to, read, updated, or selectively merged, independent of the actions of the other two mixers.
Currently, Compumix IV automates the functions of fader with mute; graphic and parametric equalization; joystick panners; multichannel film panners; A/B transfer; and device insert keys. The system data structure has been designed such that its capacity may expand as Quad Eight develops the hardware necessary to automate all the functions of a module strip. Compumix IV will be fitted to Quad Eight's new SuperStar console, which is scheduled to be unveiled at the forthcoming New York AES Convention, and features central automated routing assignment.
Both AML and Audio & Design/Calrec are pursuing centralized control of all but fader and fader-associated functions. Harrison, however, has taken a different approach with the Series 10, which represents a relatively conservative solution. According to Claude Hill, VP of marketing at Harrison Systems, "Everybody's trying to decide what to control locally and what centrally. One of the options that we're making great use of is local display of centrally-controlled functions. For example, you determine routing centrally, but assignments then are indicated locally by LEDs on the module. Each and every module has a full set of assignment LEDs — 32 for the multitrack, and 16 for the groups. If you assign input 10 to track 10, then the LED on input 10 that is labeled 10 comes on. You do not have to query the system to determine the routing; you can still look at the desk and see exactly what you see today on a conventional console."
Harrison also eschews central control of equalization or dynamics processing, providing instead assignable controls for these functions on each module, as can be seen from the accompanying front-panel layout diagrams of the Series 10.
**Signal Processing**
One of the most important aspects of virtual-console design is the challenge of developing a fully programmable audio processor. The main problem to be solved is finding a replacement for the variable resistor, which is a very inexpensive device with excellent audio capabilities. The problem extends far beyond the task of emulating a fader, since variable resistors are one of the fundamental building blocks of analog audio circuitry.
Two common approaches to the design of a programmable resistor employ control voltages to drive either a VCA (voltage-controlled amplifier) or a servo motor-driven potentiometer. Both solutions have gained acceptance as a replacement for channel, group and monitor faders, but neither is necessarily suitable for all console applications. Controlling every resistance-variable function in a single strip with servo-driven pots, for example, would be excessively costly, and requires an inordinate amount of space behind the panel; their use in several centrally assignable controls would be feasible.
On the other hand, despite the excellent specifications offered by present-generation VCAs, the thought of placing upwards of 20 such devices in the signal path would make many audio professionals blanch. George Massenburg offers the opinion that such objections are "religious in nature," pointing out that the use of VCAs for audio control is largely an emotional issue. While it is certainly true that VCA design has improved immensely since their introduction, practical circuits still require a fair number of parts and some adjustment to achieve good long-term tracking and linearity. Therefore, some manufacturers have sought alternatives.
THE VIRTUAL CONSOLE
Harrison Systems is one such company. "The Series 10 has no VCAs," Claude Hill relates. "Levels are set by digitally-controlled attenuators — DCAs, if you will. The DCAs do all of the level-sets, pans and muting functions." And the advantages of using DCAs? "One, it is inherently and absolutely stable. Two, every one of them is the same — unit-to-unit match. Three, it does not have any control-voltage leak-through [which can cause problems with VCA designs]. DCAs are also much faster than VCAs, and the design that we have has the same apparent linearity as a linear control."
While Harrison naturally would not provide specifics of their DCA circuit, one common method of achieving such digital control involves the use of FET arrays or MDACs [multiplying digital-to-analog convertors] to switch ladders of fixed-value resistors. Some of the design factors that engineers must consider in implementing schemes of this type are standing noise, distortion, charge injection, resolution (particularly in the steep area of a logarithmic curve), and control speed.
Many have suggested that one of the advantages of the virtual console is that, since the front-panel controls are divorced from the signal path, the signal-processing circuitry may be physically isolated in a compact, rack-mounted unit. Electronics so mounted could require less service, since they are less vulnerable to physical shocks or tampering. Perhaps most importantly, in the case of digitally-controlled analog consoles, the audio signal path may be optimized without regard for the physical placement of controls. This offers the possibility of greatly improved audio quality, since the console designer no longer has to cope with, for example, routing busses that can be 26 feet or more in length.
Interestingly, Harrison has not chosen to separate the Series 10's signal path from the control surface: each module carries both audio and digital signals, the system being implemented by using distributed multiprocessing, with two microprocessors in each module strip. Claude Hill explains: "If you want to build a virtual console, one of the ways to do it is to take a desk and cover it with pushbuttons and LEDs, run a cable off it to a central computer, then run a cable off that to a rack of electronics. That's a great way to build systems. The only problem is, if that computer dies, you've got no console — you can't even do a voice-over!
"Our clients can't tolerate that, so we've built the system distributed in such a way that it's no less reliable than a standard console."
It's certainly true that reliability is a major concern in console design, and automation computers have been known to go down on occasion. Nonetheless, there are precedents for a separate signal path with computer mediation, and they are to be found in the reliability-conscious realm of teleproduction. Many video switching and special-effects desks consist of a control panel that is sampled digitally, the information then being passed to a rack of electronics that is usually located in the studio's clean-air room. This may, therefore, be yet another emotional issue, and it might be that virtual audio consoles so constructed will have to prove themselves over time in order to allay customers' fears.
Analog versus Digital
Yet another partially-emotional issue is the question of moving towards an all-digital signal path. While the strict definition of a virtual console deals only with the separation of the signal path from the control surface, and not with the nature of the signal path itself, nevertheless the digitizing of the control-surface functions at least implies the question of digitizing the audio as well. Because digital audio seems destined to ultimately gain hegemony over analog in the consumer marketplace (and, hence, in the professional markets), does it not make sense to go the full distance and digitize the signal path as well as the control path?
Some advocates of a totally-digital signal path are calling digitally-controlled analog consoles an "interim technology" — a distraction from the pure digital research which, they say, will change consoles completely, turning them into an entirely new instrument. On the other hand, digitally controlled analog could be portrayed as an evolutionary step, since a digital audio processor will require a digital control surface, too. Assignable analog consoles could, therefore, be viewed as a proving ground for sorting out the ergonomic questions.
Certainly, economic considerations are one major factor holding back most manufacturers from taking the full-digital plunge: the developmental costs associated with a fully-digital console are prodigious. Bob Budd of Quad Eight, a subsidiary of the Mitsubishi Pro-Audio Group, elaborates: "It seems to me that we're right about at a stage when the first dynamic RAMs were coming out: you could dream about megabytes of it, but when you had to do it 512 bits at a time, at $20.00 a chip, it was a little bit impractical! A lot of people just hung out with their core memory there for a while, because it did everything that they needed it to do."
Quad Eight's soon-to-be-released SuperStar console, Budd reveals, will incorporate central automated assignment, and provision for automated equalization.
Market acceptance may be another factor that restrains most manufacturers from following Neve's DSP lead. After all, digital recorders are only now beginning to gain widespread acceptance in the professional market, and are by no means predominant — even in world-class studios. And what of the cost factor? As Michael Tapes of Sound Workshop puts it, "There's going to be a market for [digital signal-processing consoles], like the people who bought the first solid-state amplifiers. That's the way technology moves along. For a relatively small company like ours, however, digitally-controlled analog is more practical. We can achieve digital control very successfully, and finally optimize the analog circuitry without the physical constraints."
Some also say that the digital signal-processing console will only become practical when the console is integrated with the audio storage medium, and digital signal processors predominate. The argument goes something like this: It doesn't make sense to convert from the analog domain to the digital domain, do the processing, then come back to analog for insert points, convert back to digital for further processing, reconvert to analog and go to the storage media, and so on — until there's nothing left of the transient response and harmonic content of the musical information. When the process can be restricted to a single analog-to-digital conversion, and that conversion process can take place in a way that does not degrade the signal parameters, then we'll do it, the argument concludes.
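That degradation argument can be made roughly quantitative under a simple assumption: if each conversion generation adds independent quantization noise of about equal power, the noise powers sum, and signal-to-noise ratio falls by 10·log10 of the number of passes. The sketch below is this idealized model only — real converters add other, less tidy errors.

```python
import math

def snr_after_passes(snr_db, passes):
    """SNR after repeated A/D-D/A generations, assuming each pass
    adds independent quantization noise of equal power (an idealized
    model; real conversion chains degrade in additional ways)."""
    return snr_db - 10 * math.log10(passes)
```

By this model, even a 98 dB chain loses about 3 dB after two generations and 6 dB after four — which is the arithmetic behind keeping the process to a single conversion.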
Here we come to one of the more serious arguments about digital signal processing: The question of signal quality. Many contend that current digital signal-processing technology simply cannot equal the quality of contemporary analog consoles. The problem is complicated by the industry having standardized on 16-bit quantization. Bob Budd: "We have no particular interest in a 16-bit system. If you look at the dynamic range of our current analog system, it's better than 96 dB [offered by 16-bit digital]. So, we'd be looking at something more in the range of 20-bit, which means that you wind up doing your multiplications in more bits than that, and you can round off and throw away the LSB [least significant bits lost during digital amplification and data manipulation]." You also wind up with higher costs in the process.
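Budd's figures follow from the standard quantization arithmetic: the rough rule of 6 dB per bit gives 96 dB for 16 bits, while the full-scale sine-wave formula 6.02N + 1.76 dB gives about 98 dB for 16 bits and about 122 dB for 20 bits.

```python
def dynamic_range_db(bits):
    """Theoretical SNR of an ideal N-bit quantizer for a full-scale
    sine wave: 6.02*N + 1.76 dB (the rough rule quoted in the trade
    is simply 6 dB per bit)."""
    return 6.02 * bits + 1.76
```

The gap between 16-bit and 20-bit systems — roughly 24 dB — is the margin Budd is pointing at when he says a 16-bit path cannot better his analog electronics.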
Neve, of course, has gone the extra distance to 20-bit linear quantization within the DSP console, and 32-bit processing at critical mix points. As the acknowledged pioneers of such technology, Neve has demonstrated its commitment by making the considerable investment necessary to bring a fully-digital console to the marketplace. And Neve's Barry Roche has no reservations about sonic quality: "The benefits of the digital sound, of course, are paramount — that's what we're in the business for. We take that for granted: that we will have an improved audio signal throughout the entire process."
**Console Function Automation**
Perhaps the most immediately apparent benefit of a virtual console is total automation. As Claude Hill explains, "This is the one thing that our extensive market research has indicated people want and expect to be a benefit of a console that is either digital, or digitally-controlled analog. Total automation represents a quantum leap over conventional fader automation: until you've automated the entire console, you don't get the full benefit of automation."
Certainly, in some ways, assignable architecture makes it easier to automate the entire console. After all, once you've made a fully-assignable console, you've conquered the data-acquisition problem, which is half the battle: you're already sampling every control on the surface for remote or local control of audio electronics. Furthermore, a programmable audio processor obviates the need to worry about installing remote, optically-isolated relays, VCAs or servo motors, and other control circuitry. Finally, assignable architecture reduces the input requirements of the automation system, since there are fewer controls to sample.
As Solid State Logic's Chris Jenkins points out, however, "Assignability does nothing to reduce the output requirements of the automation computer, because each assignable control still has an audio processing counterpart for every channel, and the computer still has to look after all of these at once. That's several orders of magnitude more difficult than simple fader automation."
Input requirements notwithstanding, therefore, one still needs an automation system capable of very high data density if the aim is to provide dynamic control of console settings — a step beyond the static "snapshot"-style capability offered by present-day automation. Moreover, a large number of changes must be
effected over a large number of channels in a very short period of time. (It seems to be an accepted fact that dynamic updates would need to be generated at least every video frame, which is equivalent to about 33 milliseconds.) Typically, therefore, to achieve such high-speed data transfer and system updates, total automation requires that the system incorporates such features as multiprocessing architecture, a multitasking operating system, fast disk access, and a fair amount of on-board RAM (random-access memory). It's no surprise, perhaps, that the Harrison Hard Disk Automation System, which services the Series 10, took five years to develop.
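The data-density claim is easy to check with a back-of-the-envelope estimate. The figures below — 48 channels, 40 controls per strip, two bytes per control — are illustrative assumptions, not any manufacturer's specification; the point is only the order of magnitude that frame-rate logging of every control implies.

```python
def automation_rate_bytes_per_sec(channels, controls_per_channel,
                                  bytes_per_control=2, fps=30):
    """Worst-case automation data rate if every control on every
    channel is logged once per video frame. All parameters here are
    illustrative assumptions."""
    return channels * controls_per_channel * bytes_per_control * fps

# e.g. 48 channels x 40 controls x 2 bytes, 30 frames per second
rate = automation_rate_bytes_per_sec(48, 40)   # 115,200 bytes/sec
```

Over a 100 kbyte per second, sustained, before any overhead — a rate that comfortably explains the need for hard disks, multiprocessing, and generous RAM buffering.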
Bob Budd of Quad Eight speculates that the sheer data density resulting from total automation may even require operators to do their own data management. "I could see how decisions regarding what actually is stored might be under the control of the mixer or the studio," he explains. "It may very well come down to the economics of data storage, just like buying hard disks for your PC: the system capacity may determine how much of a console [control settings] you could store in real time.
"If someone is assigning something hard left on a switch, for example," the designer continues, "or just wants to take a panner and put it hard left, I don't see any reason to log that piece of data 30 or 60 times a second, or whatever your scan rate is. If you're going to be slinging something back and forth all day long, then you do need to store [the dynamic changes]. So, I think that setting up a mix may demand some analysis of the task, and a little knowledge of data management — what you want stored, and what you don't."
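Budd's point — log the hard-left pan once, not 30 times a second — is the classic change-only (delta) logging strategy. A minimal sketch, with invented data structures, might look like this:

```python
def delta_log(scans):
    """Given successive scans of the control surface (each a dict of
    control name -> value), store only the controls whose values
    changed since the previous scan. A static setting is logged once,
    not at every scan interval."""
    log, last = [], {}
    for t, scan in enumerate(scans):
        changed = {k: v for k, v in scan.items() if last.get(k) != v}
        if changed:
            log.append((t, changed))
        last = dict(scan)
    return log
```

A control "slung back and forth all day long" still generates data at the full scan rate, so delta logging helps only as much as the mix is static — which is why Budd suggests the operator may need to decide what gets stored.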
In fact, some are still questioning the need for dynamic automation of every console function. Neve, for example, does not provide total dynamic automation for the DSP, relying instead on "snapshot" automation for all functions other than faders. Some designers regard this as a curious omission. But, as Barry Roche explains, "The real question at this point is: How useful is dynamic automation of EQ, pan, and other functions? That need has not presented itself, actually — it may not exist. I've spoken with the people who have actually used the DSP, and we've explored the question of what the applications would actually be. When you get right down to it, they're extremely limited."
Some manufacturers agree with Neve. Audio & Design/Calrec, for example, plans on providing a total of only three instantly-accessible snapshots resident in RAM, plus 30 stored on disk, for its assignable broadcast console. Others feel that total dynamic automation is essential: George Massenburg calls it "a major benefit of assignable architecture, the advantages of which in music recording, film post, and teleproduction are obvious and not at all trivial." As you might guess, AML, Ltd. plans to offer dynamic reset of every control on its assignable console from the point of introduction.
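The RAM/disk split in the Audio & Design/Calrec scheme — a few snapshots instantly recallable, the rest on slower storage — can be sketched as follows. The class is hypothetical (a dict stands in for the disk); only the general tiered-storage idea comes from the description above.

```python
class SnapshotStore:
    """Illustrative snapshot automation store: a small number of
    console snapshots live in RAM for instant recall; further
    snapshots spill to slower storage (here, just another dict)."""

    def __init__(self, ram_slots=3):
        self.ram_slots = ram_slots
        self.ram = {}    # name -> settings, instant recall
        self.disk = {}   # name -> settings, slower recall

    def store(self, name, settings):
        if len(self.ram) < self.ram_slots or name in self.ram:
            self.ram[name] = dict(settings)
        else:
            self.disk[name] = dict(settings)

    def recall(self, name):
        if name in self.ram:
            return self.ram[name]
        return self.disk.get(name)
```

The design choice being modeled is purely economic: RAM recall is instantaneous, so the scarce fast slots go to the snapshots the operator needs mid-take.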
Another issue on which Massenburg is known to have strong opinions is the question of transporting automation data between systems of various manufacturers. Although GML has attempted to open the issue of standardization of list-management protocols and disk formats, to date there seems to be little sympathy for Massenburg's views among most makers of automation systems. Harrison's position is typical. "Like any other form of standardization," Claude Hill relates, "it's noble. But anything other than fader data in our system wouldn't mean anything to anybody else's, and their data wouldn't mean much to us. For example, we could probably read Total Recall data written with an SSL — since it is 3740 format — and reset our EQ close to it, but it wouldn't be exact."
Of course, assignability greatly complicates the question. If we can't agree on a standard format for simple fader level data, how can we deal with the far greater density of data generated by a totally-automated system? Massenburg is adamant, however: "Other industries, from automated circuit board parts insertion, to musical instrument control, to PCB design automation, to video off-line automation have accepted transportability [of data]. With cooperation, it is certainly possible now, and inevitable in time."
**Peripheral Considerations**
In mid-June of this year, the European Broadcast Union (EBU) and the Society of Motion Picture and Television Engineers (SMPTE) carried out extensive tests of an interface protocol that the two organizations are jointly promulgating for control of broadcast-studio equipment. Designated the "EBU/SMPTE Digital Remote Control System," the protocol is a variant of the RS-422 standard.
Equipment developed by eight different manufacturing and broadcasting organizations — each according to its own interpretation of the specifications — was brought together for the tests on the premises of the Institut für Rundfunktechnik (IRT) in Munich, Germany, and interconnected in a variety of configurations. The participants were Ampex, Bosch, BBC, IRT, Kudelski, Pro Bel, Solid State Logic, and Studer. Test support was provided by Dynair, the Grass Valley Group, and Sony.
According to a statement issued by the EBU, "The test successfully demonstrated the validity of the specifications, and the ability of the different kinds of equipment to pass control messages and return tallies between controlling and controlled devices, which includes video and audio tape recorders, a routing switcher, and a variety of control panels. Validation of the specification included investigation of the behavior of the bus when operating under the control of bus controllers from different manufacturing sources, under conditions of deliberately induced data errors, and with bus lengths up to 1.2 kilometers [about ¾-mile]."

Accordingly, the EBU Technical Committee has approved, in principle, the detailed specifications of control messages that will permit full implementation of the standard for VTR systems. SMPTE is still examining the standard, however, and plans final practical testing of a VTR dialect at a further joint test to be held in Redwood City, CA, in November of this year.

---

**AUDIO & DESIGN/CALREC'S VIRTUAL CONSOLE DESIGN PROPOSALS**

Calrec Audio, Ltd. is currently completing an 88-channel, digitally-controlled analog console under contract to Thames Television, London. Delivery is scheduled for the end of 1985. The project entails a single-control surface to be installed in one of three adjacent studios. The unit incorporates 84 faders and two assignable panels; centrally-controlled, assignable features include channel input, equalizer, and auxiliary functions, as well as routing assignments. The system provides for "floating" channels, which may be addressed by control surfaces to be installed in the other two studios of the complex at a later date.

Calrec plans to offer an assignable broadcast production console in 1986. The company projects that the basic unit will feature 48 channels or less (expandable to 96), with eight stereo groups, four stereo outputs, eight auxiliaries, and 24 recording groups.

Two assignable panels are planned, with controls for individual channel or group input levels, filters, equalizers, pan, and auxiliaries. Routing, memory and master status will also be centrally-controlled. Local controls will be restricted to fader, mute, solo, PFL, and VCA selection.

Automation for the console will allow for a total of three "snapshots" of the full console to be accessible instantly from RAM memory, with a maximum of 30 stored snapshots per disk. Faders will be servo-driven.

The Audio & Design/Calrec 88-channel digitally-controlled analog console scheduled for delivery to Thames Television, London, England, by late-1985. The central control panel shown in close-up right features two assignable sections for EQ, pan, filters, and aux sends, plus assignment functions.
Formal approval of the standard will have considerable relevance to the design of virtual consoles and their companion automation systems. Teleproduction studios clearly represent a major potential market for console manufacturers, and the ability to offer integrated machine control can be a substantial selling point. Further, given that the lines separating “traditional” music recording studios from video post and film scoring facilities are constantly being eroded, equipment conforming to the standard will doubtless be appearing with increasing frequency in recording studios. That market will have substantial interest in the ability to trigger effects rolls, for example, from a totally-automated console. While this can be done presently with any of the various simple events-
control schemes (relay closures, opto-isolators or open collectors, and so on), such circuitry offers only the most rudimentary transport control: cueing must still be done manually, which takes time.
Another interconnection protocol that some console manufacturers are considering is the Musical Instrument Digital Interface (MIDI). The advantages of such support are not trivial, particularly — but not exclusively — in music studios. Certainly, it would be interesting to envision a console automation system that might grow into musical-instrument recording and control. The potential for automating not only the console but also the entire control room is suggested, since some signal-processor manufacturers are already supporting MIDI control, and others are expressing interest in doing so. The ability to automate changes in delay or reverberation characteristics with video-frame accuracy — linked to the automation of not only faders and mutes, but also routing, sends, equalization, pan and dynamics processing — could open up new realms of creativity in both music and post-production for visual media.
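At the wire level, the MIDI messages involved are simple. The sketch below builds a raw Program Change message (status byte 0xC0 ORed with the channel number, per the MIDI 1.0 specification) and converts a SMPTE-style timecode to seconds for frame-accurate scheduling; the function names and the non-drop-frame assumption are the author's illustration, not part of either standard's text.

```python
def program_change(channel, program):
    """Raw MIDI Program Change message: status byte 0xC0 | channel
    (0-15), followed by a program number (0-127). This is how an
    automation system could switch a MIDI-controlled reverb preset."""
    assert 0 <= channel <= 15 and 0 <= program <= 127
    return bytes([0xC0 | channel, program])

def frame_to_seconds(h, m, s, f, fps=30):
    """Convert an hours:minutes:seconds:frames timecode to seconds,
    for scheduling the message with frame accuracy (non-drop-frame
    counting assumed for simplicity)."""
    return h * 3600 + m * 60 + s + f / fps
```

Two bytes per preset change is a trivially small payload next to dynamic fader data, which is part of why MIDI-triggered processor changes look so attractive as an automation adjunct.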
Through the Looking Glass
"In a digitally-controlled analog console," George Massenburg reminds us, "the signal path is an abstraction rather than a physical set of links." In other words, we are no longer directly connected with the signal path: computer "intelligence" stands between, interpreting our communications and, in turn, commanding the processing circuitry. The virtual console presents a mediated experience.
A mediator can be extremely useful, as everyone who has worked with an agent or lawyer knows — provided that you can communicate with them, and that they accurately and effectively represent you. If it takes more time than it's worth to explain what you need — and how to get it — then you may as well just go and get it yourself. Similarly, a console that is difficult or confusing to operate won't be used, even if it offers excellent sound quality.
But not only does a good mediator not stand in your way, it also helps you grow. By the same token, a well-designed virtual console — which is to say one that is easily operated, highly flexible, and great sounding — will open new doors. That is the promise of this fledgling technology. The extent to which that promise is fulfilled will be determined by how well we answer the myriad questions that now confront us.
THE FUTURE OF THE VIRTUAL CONSOLE
Feedback from R-e/p Readers
As the world's leading operational pro-audio magazine, R-e/p prides itself on keeping its readers apprised of innovative developments in recording and production technology and systems. Without doubt, one of the major talking points throughout the industry is the potential offered by virtual or assignable consoles — either utilizing digital control of analog electronics, or total digital signal processing.
But what do you, the potential users, feel about the kind of technology that will be finding its way into recording and production studios over the next two to five years? R-e/p would be extremely interested in hearing from any reader who has a constructive point of view regarding the transition from working with conventional analog consoles, which offer duplicate features to enable simultaneous access to every control element — and with which we are most familiar — to the potential offered by assignable controls, and the possibility of frame-accurate recall of the entire control surface for enhanced dynamic automation.
We will gather together your comments for possible inclusion in a follow-up article; please include a day-time telephone number so that we can contact you for additional, specific information.
Mail your letters, c/o the Editor, to the address on the Contents page of this issue. Alternatively, we can be reached via IMC EMail to REP-US, or via FAX to (213) 469-0513.
**ANT TELECOMMUNICATIONS**
U.S. Distributors: Solway, Inc.
P.O. Box 7647
Hollywood, FL 33081
Phone: (305) 962-8650
Model C4-DM/-M122/-M232
Channels: One, two, and two, respectively.
Effects Type(s): Noise reduction system.
Dynamics Parameters: Four-band; linear slope; peak indicators.
Operational Controls: N/A.
Selected Standard Features: DM: replaces Dolby CAT 22. 30 dB SNR improvement in level alignments; no pumping or breathing. -M122 is a two-channel playback unit. -M232 is a two-channel ARI or VTR unit.
Frequency Response (input/output): 20 Hz to 20 kHz.
Distortion: Less than 0.1%.
S/N Ratio (input/output): Better than -90 dB.
Pro-User Price Range: -DM: $650; -M122: $2,030; -M232: $2,435
Model C4E/ESF-8/ESF-16/ESF-24
Channels: Two, 16, 32, and 48, respectively.
Effects Type(s): Noise reduction system.
Dynamics Parameters: Four-band; linear slope; peak indicators.
Operational Controls: N/A.
Selected Standard Features: Single-channel card for OEM use; multitrack eight-, 16-, 24-channel system, respectively; all offering 30 dB SNR improvement in level alignments; no pumping or breathing.
Frequency Response (input/output): 20 Hz to 20 kHz.
Distortion: Less than 0.1%.
S/N Ratio (input/output): Better than -90 dB.
Pro-User Price Range: C4E: $850; ESF-8: $9,640; ESF-16: $17,150; ESF-24: $25,050
For additional information circle #190
**APHEX SYSTEMS, LTD.**
13340 Saticoy Street
North Hollywood, CA 91605
Phone: (818) 765-2212
Aural Exciter Type-C
Inputs: Two.
Outputs: Two.
Effects Type(s): Psycho-acoustic audio enhancer.
Dynamics Parameters: N/A.
Operational Controls: Drive; tune; mix; in/out.
Selected Standard Features: Uses new Aphex MAX IC (Modulating Aural Exciter).
Frequency Response (input/output): 10 Hz to 50 kHz +0/-0.5 dB.
Distortion: THD less than 0.02%; IMD 0.1%.
S/N Ratio (input/output): Better than -100 dBv.
Pro-User Price Range: $295
**Compellor**
Inputs: Two.
Outputs: Two.
Effects Type(s): Compressor; leveler, peak limiter.
Dynamics Parameters: N/A.
Operational Controls: Input process balance; output silence; gate threshold; stereo enhance.
Selected Standard Features: Full automatic control of all operational parameters; program dependent.
Frequency Response (input/output): 5Hz to 65kHz.
Distortion: THD less than 0.01% at 20 dB compression.
S/N Ratio (input/output): Better than -95 dBm.
Pro-User Price Range: $1,195
Inputs: One.
Outputs: One.
**Effect Type(s):** Compressor, expander module.
Dynamics Parameters: N/A.
Operational Controls: Display select; compression: in/out; threshold and release time; expansion: in/out; threshold and delay time in out control defeat; input/output gain/attenuation depth.
Selected Standard Features: Modular construction; expansion and compression in one package.
Frequency Response (input/output): 20 Hz to 50 kHz.
Distortion: THD less than 0.1%.
S/N Ratio (input/output): Better than -105 dBv.
Pro-User Price Range: $445
For additional information circle #191
**ASHLY AUDIO**
100 Fernwood Ave.
Rochester, NY 14621
(716) 544-5191
Model CL 52
Inputs: Two.
Outputs: Two.
Effects Type(s): Dual-channel compressor, limiter.
Dynamics Parameters: N/A.
Operational Controls: Gain ratio; attack; release; output.
Selected Standard Features: Ten-level meters on each channel indicate gain reduction and output.
Frequency Response (input/output): 20 Hz to 20 kHz.
Distortion (input/output): Less than 0.05%.
S/N Ratio: Better than -90 dBv.
Pro-User Price Range: $659
Model SG33
Inputs: Two.
Outputs: Two.
Effects Type(s): Stereo noise gate.
Dynamics Parameters: N/A.
Operational Controls: Threshold; attack; hold; fade; floor.
Selected Standard Features: Key input stereo "tie" point.
Frequency Response (input/output): 20 Hz to 20 kHz.
Distortion (input/output): Less than 0.05%.
S/N Ratio: Better than -90 dBv.
Pro-User Price Range: $429
For additional information circle #192
**AUDIO+DESIGN/CALREC**
P.O. Box 786
Bremerton, WA 98310
Phone: (206) 275-5009
Filmes
Inputs: One.
Outputs: One.
Effects Type(s): Four-band, single-ended noise reduction system.
Dynamics Parameters: Four-band, single-ended noise reducing expansion with up to 60-dB control range.
Operational Controls: Ratio; threshold; release and range in each frequency band.
Selected Standard Features: Signal split into four bands: 12 dB per octave linear filter.
Frequency Response (input/output): 20 Hz to 20 kHz +0/-0.5 dB.
Distortion: THD less than 0.01%.
S/N Ratio (input/output): Better than -90 dBm.
Pro-User Price Range: $2,660 per channel
Scamp S30
Inputs: One.
Outputs: One.
Effects Type(s): Noise reducing expander/gate.
Dynamics Parameters: Ratios 1:1.2 (soft); to 1:20 (gate).
Operational Controls: Attack; release; ratio; threshold; range; gate hold; side chain; pre-deemphasis; in/out; key input; computer control mute input.
Selected Standard Features: Hold facility; side chain equalizer; infinitely variable ratio; 60 dB control range.
Frequency Response (input/output): 20 Hz to 20 kHz +0/-0.5 dB.
Distortion: 0.03%.
S/N Ratio (input/output): Better than -100 dB, reference to +8 dBm.
Pro-User Price Range: $445
Model F601
Inputs: Two.
Outputs: Two.
Effects Type(s): Fast limiter.
Dynamics Parameters: Peak limiter, clipper.
Operational Controls: Make-up gain; threshold; attack; release; clip no output.
Selected Standard Features: Designed to protect digital PCM inputs; 100 dB dynamic range; dual mono/stereo.
Frequency Response (input/output): 20 Hz to 25 kHz +0/-0.5 dB.
Distortion: 0.08% at 1 kHz, with 6dB gain reduction.
S/N Ratio (input/output): Better than -90 dBm.
Pro-User Price Range: $1,490
Easy Rider
Inputs: Two.
Outputs: Two.
Effects Type(s): Dual-channel compressor, limiter.
Dynamics Parameters: Ratios variable from 1:1 to 30:1; attack time 500 microseconds to 5 milliseconds; release time 5 milliseconds to 4 seconds.
Operational Controls: Input level; output level (preset); ratio; attack time; release time; stereo crossover.
Selected Standard Features: Thresholds automatically change with ratio; dynamic attack changes with level; 25 dB gain control range; 34 dB make-up gain; 1.5 dB stereo match over 20 dB of gain change.
Frequency Response (input/output): 20 Hz to 25 kHz +0/-1 dB, at limit threshold reference of gain change.
Distortion: 0.15% at 1 kHz, with 12 dBm maximum limit level.
S/N Ratio (input/output): Better than -80 dB at 12 dBm.
Pro-User Price Range: $690
Express
EQ Ranges: N/A.
Dynamics Parameters: Ratios switched from 1:5 to 30:1; attack time 500 microseconds to 5 milliseconds; release time 25 milliseconds to 3 seconds.
Operational Controls: Input level; output level; ratio; attack time; release time; expander threshold (preset) RMS or peak sensing; meter select.
Selected Standard Features: 25 dB gain control range; 1.0 dB stereo match over 20 dB of gain change.
Frequency Response (input/output): 20 Hz to 25 kHz +0/-1 dB, at limit threshold reference 1 kHz.
Distortion: 0.15% at 1 kHz, with 12 dBm maximum limit level.
S/N Ratio (input/output): Better than -80 dB at 12 dBm.
Pro-User Price Range: $690
Scamp S31
Inputs: One.
Outputs: One.
Effects Type(s): Feed-forward compressor, limiter.
Dynamics Parameters: Ratios 1:1 to 20:1; threshold +12 dBm to -50 dBm; logarithmic linear release.
Operational Controls: Make-up gain; compression ratio; compression threshold; limit threshold; release; attack; system in/out; side chain; computer control mute input.
Selected Standard Features: Zero to 60 dB control range; separate peak limiter and compressor threshold.
Frequency Response (input/output): 20 Hz to 25 kHz, +0.5 dB, at limit threshold reference 1 kHz.
Distortion: 0.05% at 1 kHz, with 12 dBm maximum limit level.
S/N Ratio (input/output): Better than -100 dB at +8 dBm.
Pro-User Price Range: $445
Inputs: Two.
Outputs: Two.
Effects Type(s): Compressor, limiter, expander, gate.
Dynamics Parameters: Ratios 1:1 to 20:1; threshold +14 dBm to -20 dBm.
Operational Controls: Input/output; limit in/out; compressor attack, release, threshold; expander gate attack, release, threshold, range; system in/out.
Selected Standard Features: Only one VCA used for all four separate functions of compression, limiting, expansion, and gating.
Frequency Response (input/output): 30 Hz to 20 kHz, +0/-0.5 dB.
Distortion: 0.1%.
S/N Ratio (input/output): Better than -89 dB.
Pro-User Price Range: $1,890
F760X
Inputs: Two.
Outputs: Two.
Effects Type(s): Integral compressor, limiter, expander, gate, plus side-chain parametric equalizer.
EQ Ranges: 40 Hz to 1.4 kHz, 80 Hz to 1.6 kHz, 400 Hz to 14 kHz, 800 Hz to 16 kHz.
Dynamics Parameters: Ratios 1:1 to 20:1; threshold +14 dBm to -20 dBm.
Operational Controls: Input/output; peak limit in/out; compressor attack, release threshold, ratio; expander gate in/out; attack, release, threshold, range, Q-frequency loop, cut, input gain.
Selected Standard Features: Compressor and EQ section may be used together or separately; equalizer may also be routed pre- or post-compressor or to side chain.
Frequency Response (input/output): 30 Hz to 20 kHz, +0/-0.5 dB.
Distortion: 0.1%.
S/N Ratio (input/output): Better than -89 dB.
Pro-User Price Range: $1,775
Compex 2
Inputs: One.
Outputs: One.
Effects Type(s): Compressor, limiter, expander, gate.
Dynamics Parameters: Ratios 1:1 to 20:1, threshold down to -60 dB.
Operational Controls: Make-up/gain; system in/out; compressor ratio; release, attack and attack limit; limiter cutoff and threshold; expander gate ratio; threshold; range; hold, attack.
Selected Standard Features: Choice of logarithmic or linear release; infinitely variable expander ratio from soft (1:1.2) to hard gate; side chain access.
Frequency Response (input/output): 20 Hz to 25 kHz, +0/-0.5 dB.
Distortion: Better than 0.05% 0 Hz to 100 kHz.
S/N Ratio (input/output): Better than -100 dB, reference to 8 dBm.
Pro-User Price Range: $990
For additional information circle #193
BIAMP SYSTEMS, INC.
P.O. Box 2160
Portland, OR 97208-2160
Phone: (503) 641-7287
Model LG-2/-4
Effects Type(s): Two-channel or four-channel (respectively) limiter, compressor, noise gate.
Dynamics Parameters: N/A.
Operational Controls: Front panel threshold control; individual release time; switchable selection for limiting or noise gate.
Selected Standard Features: Individual compression/noise indicators; balanced XLRs; unbalanced inputs and outputs; series switchable.
Frequency Response (input/output): 20 Hz to 25 kHz, +0.5 dB.
Distortion: THD less than 0.1%; at +15 dBV output.
S/N Ratio (input/output): Maximum threshold is better than -103 dB A weighted.
Pro-User Price Range: LG-2: $349. LG-4: $599
For additional information circle #194
BROOKE SIREN SYSTEMS, INC.
U.S. Distributor: Klark Teknik
2624 Eastern Parkway
Farmingdale, NY 11735
Phone: (516) 249-3660
Model DPR-02
Inputs: Two.
Outputs: Two.
Effects Type(s): Compressor, expander, peak limiting.
Dynamics Parameters: N/A.
Operational Controls: Ratio, attack, release, and threshold; output level; tunable filter; gain reduction.
Selected Standard Features: Up to 40 dB increase in dynamic range; modular construction; differential inputs compatible with all type I NR systems.
Frequency Response (input/output): 40 Hz to 20 kHz, +0.5 dB.
Distortion (input/output): THD less than 0.1% from 100 Hz to 20 kHz.
S/N Ratio (input/output): Better than -110 dB dynamic range.
Pro-User Price Range: Models 941A: $269. 942A: $279
Models 140A/180A
Channels: Two-channel type I (140A), and type II (180A) NR systems.
Dynamics Parameters: +40 dB effective noise reduction.
Operational Controls: Separate channel level adjusters; balanced inputs; unbalanced outputs; provision for output balancing transformers.
Selected Standard Features: Up to 40 dB increase in dynamic range.
Frequency Response (input/output): 30 Hz to 20 kHz, ±0.5 dB, -1 dB at 20 Hz.
Distortion (input/output): THD less than 0.1% from 100 Hz to 20 kHz.
S/N Ratio (input/output): Better than -115 dB dynamic range.
Pro-User Price Range: 140A: $589. 180A: $589
Model 150
Channels: Two-channel type I NR system.
Dynamics Parameters: +40 dB effective noise reduction.
Operational Controls: Simultaneous encode and decode of noise reduction.
Selected Standard Features: Gold-plated RCA jacks; compatible with all type I units; hard-wire bypass.
Frequency Response (input/output): 30 Hz to 20 kHz, +0.5 dB, -1 dB at 20 Hz.
Distortion (input/output): THD less than 0.1% from 100 Hz to 20 kHz.
S/N Ratio (input/output): Better than -105 dB dynamic range.
Pro-User Price Range: $249
Model 911
Channels: Single-channel type I NR system.
Dynamics Parameters: +40 dB effective noise reduction.
Operational Controls: Separate record/play level adjusters; NR on/off.
Selected Standard Features: Up to 40 dB increase in dynamic range; compatible with all type I units; modular construction; differential input.
Frequency Response (input/output): 30 Hz to 20 kHz, ±0.5 dB, -1 dB at 20 Hz.
Distortion (input/output): THD less than 0.1% from 100 Hz to 20 kHz.
S/N Ratio (input/output): Better than -110 dB dynamic range.
Pro-User Price Range: $310
Model 903
Effects Type(s): Single-channel Overeasy compressor, limiter.
Dynamics Parameters: Compression ratio varies from 1:1 to infinity:1, and beyond to dynamic inversion.
Operational Controls: Threshold; compression ratio; output gain.
Selected Standard Features: Modular; slaves to 907 for stereo operation.
Frequency Response (input/output): 30 Hz to 20 kHz, ±1 dB.
Distortion (input/output): 2nd harmonic: 0.07%.
3rd harmonic: 0.2%.
S/N Ratio (input/output): EIN better than -85 dBm; maximum output level +20 dB.
Pro-User Price Range: $359
Model 166
Effects Type(s): Dual-channel compressor, noise gate.
Dynamics Parameters: Compression ratio varies from 1:1 to infinity.
Operational Controls: Threshold; peak-stop level; output gain; noise-gate threshold.
Selected Standard Features: Unbalanced quarter-inch jacks; side chain; input/monitor.
Frequency Response (input/output): 20 Hz to 20 kHz, ±1 dB.
Distortion (input/output): THD less than 0.1% at max compression, 0 dBm.
S/N Ratio (input/output): Output noise better than -85 dB A weighted.
Pro-User Price Range: $549
Models 160X/ 163X/ 165A
Effects Type(s): Single-channel Overeasy compressor, limiter.
Dynamics Parameters: Compression ratio varies from 1:1 to infinity:1; dynamic inversion mode (160X).
Operational Controls: Threshold (-40 to +10 dB); attack time (0.2 to 1 second); peak-stop, "soft clipper"; attack time switchable (165A).
Selected Standard Features: Quarter-inch balanced in/out; terminal-strip connectors (165A).
Frequency Response (input/output): 20 Hz to 20 kHz, ±1 dB (160X and 165A); 20 Hz to 20 kHz, ±1 dB (163X).
Distortion (input/output): 2nd harmonic 0.05%*, and 3rd 0.07%* (160X and 165A); THD less than 0.2% (163X).
S/N Ratio (input/output): Output noise better than -89 dBm, 20 Hz to 20 kHz (160X); EIN -85 dB unweighted (163X); EIN -90 dBm unweighted (165A).
Pro-User Price Range: 160X: $429; 163X: $149; 165A: $699
Model 904
Effects Type(s): Single-channel modular noise gate with Overeasy expansion.
Dynamics Parameters: Attenuation: 0 to 60 dB; threshold: -40 to +10 dB; expansion ratio 1.5:1 to 5:1; attack times 500 to 2.5 dB ms; release 2.5 dB ms to 22 dB/sec/second.
Operational Controls: Limit ratio; threshold; attack; release; bypass.
Selected Standard Features: Modular; balanced in/out; LED column meter.
Frequency Response (input/output): 20 Hz to 20 kHz, +0/-1 dB.
Distortion (input/output): THD less than 0.02% at 1 kHz.
S/N Ratio (input/output): EIN -82 dBm 20 Hz to 20 kHz, unweighted.
Pro-User Price Range: N/A.
DOD ELECTRONICS
5639 South Riley Lane
Salt Lake City, UT 6906
Phone: (801) 268-8400
Model R-825 and MT-828
Inputs: One and two, respectively.
Outputs: One and two, respectively.
Effects Type(s): Compressor, limiter and stereo compressor limiter, respectively.
Dynamics Parameters: Gain reduction: 828 includes four channels.
Operational Controls: Input level; output level; compression ratio; attack time; switchable side chain; de-essing circuitry. 828: gate; threshold; ratio; attack time; release time; input level; output level; compressor limiter; fast in/out.
Selected Standard Features: Balanced de-essing circuitry; adjustable attack and release time; switchable side chain; LED bargraph. 828: ratio variable from 1:1 to infinity:1; independent noise on each channel; gain reduction LED side chain connections.
Frequency Response (input/output): 20 Hz to 20 kHz, ±0.5 dB.
Distortion: THD 0.02% at 0 dB gain reduction.
S/N Ratio (input/output): Better than -85 dB.
Pro-User Price Range: 825: $249.95; 828: $299.95
DOLBY LABORATORIES, INC.
731 Sansome Street
San Francisco, CA 94111
Phone: (415) 392-0300
Models 360/361/362/372
Channels: 360 and 361: one; 362 and 372: two.
Effects Type(s): Dolby A-type noise reduction system.
Operational Controls: 360: record/play, NR in/out. Dolby tone. 361/362/372: monitor signal, check tape.
Selected Standard Features: 360/361: three XLR input/output connectors; 362: quarter-inch socket; 372: seven-pin Tuchel/Binder; all have level setting meters.
Frequency Response (input/output): 30 Hz to 20 kHz, ±1 dB.
Distortion: Less than 0.1%, +4 dBm.
Pro-User Price Range: 43: $825; 330: $1,900; 330L: $2,050
For additional information circle #198
DRAWMER
U.S. Distributor: Harris Sound, Inc.
6640 Sunset Boulevard Suite #110
Hollywood, CA 90028
Phone: (213) 469-3500
Effects Type(s): Dual-audio gate system.
Operational Controls: Two-channel operation with separate threshold, attack, hold, decay, range, low- and high key filter controls.
Selected Standard Features: Frequency conscious keying; two-stage release control; stereo link gating; and ducking.
Frequency Response (input/output): 25 Hz to 40 kHz, ±1 dB.
Distortion: Variable between 0.03% and 0.05%.
S/N Ratio (input/output): Better than -90 dBm.
Pro-User Price Range: $595
1960 Tube Compressor
Effects Type(s): Dual-channel vacuum-tube compressor.
Dynamics Parameters: "Soft Knee" ratio 1:5 to 20:1.
Operational Controls: Two-channel operation with threshold, attack, release, output level, make-up gain, auxiliary gain, and bass and treble controls.
Selected Standard Features: Balanced outputs; mike input with phantom power available; auxiliary input with equalizer; stereo link.
Frequency Response (input/output): 22 Hz to 22 kHz.
Distortion: N/A.
S/N Ratio (input/output): Better than -88 dBm.
Pro-User Price Range: $1,395
For additional information circle #199
DREW ENGINEERING CO.
35 Indiana Street
Rochester, NY 14609
Phone: (716) 544-3337
Y-Expressor D5B and Y-Processor D5S
Inputs: Three and four, respectively.
Outputs: Four and two, respectively.
Effects Type(s): Dynamic Sound Shaper and envelope controller, respectively.
Dynamics Parameters: D5B: detector and processor calculates envelope shapes.
Operational Controls: D5B: mode; control level microphone line, automatic. Signal level: input/output mix; compression; attack, decay, sustain pedal. D5S: mode; attack, sustain; control level, automatic.
Selected Standard Features: Real-time envelope control; sound transformation; sound combining synchronization; rhythmic effects; voice processing foot-pedal; 12V DC, MIDI CV optional.
Frequency Response (input/output): 5Hz to 50 kHz, -1 dB. D5B: 10 Hz to 30 kHz.
Distortion: THD less than 0.02%; D5S: 0.05% typical
S/N Ratio (input/output): Better than: -94 dB. D5S: -84 dB.
Pro-User Price Range: N/A.
Inputs: One.
Outputs: One.
Effects Type(s): Compression, expansion, gain, limiter.
Dynamics Parameters: 1:1 to infinity:1 compression; 1:1 to 10:1 expansion; ±1 dB for 60-dB change in input level.
Operational Controls: Threshold, attack time, release time, attenuation limit, gain limit, dynamic range, effects.
Selected Standard Features: Metering system employs a logarithmic amplifier to generate information on input, output and gain.
Frequency Response (input/output): 15 Hz to 20 kHz, +0/-1 dB.
Distortion: Variable between 0.02% and 0.5%.
S/N Ratio (input/output): Output noise level better than: -90 dBm, at unity gain.
Pro-User Price Range: $700
FOSTEX
15431 Blackburn Avenue
Norwalk, CA 90650
Phone: (213) 921-1112
Model 3070
Effects Type(s): Two-channel limiter, compressor, noise gate.
Dynamics Parameters: 32-dB maximum limit; compression ratio of 1:1 to infinity:1; release time 20 milliseconds to two seconds.
Operational Controls: For each channel: input/output, attack, release, ratio, gate sensitivity, in/out switch, and stereo interlock for each channel.
Selected Standard Features: Gate sensitivity indicator; two-segment LED display indicating gain reduction; gain reduction created by pulse-width modulation; detector loop access for frequency-selective gain reduction.
Frequency Response (input/output): N/A.
Distortion: THD less than 0.02%.
S/N Ratio (input/output): Better than: -90 dB.
Pro-User Price Range: N/A.
Inputs: One.
Outputs: One.
Effects Type(s): Limiter, compressor. -X features built-in expander.
EQ Ranges: N/A.
Dynamics Parameters: Expander from 1:1 to 5:1; compression ratio 1:1 to infinity:1; 30:1.
Operational Controls: Attack time of 100 microseconds, release time 50 microseconds to 1.1 seconds; compression ratio of 2:1 to 50:1. -X: attack, release, expand threshold and ratio; compression threshold and ratio; output gain and limit.
Selected Standard Features: De-ess and side chain; meter; gain reduction meter. -X: 10-segment bypass switch.
Frequency Response (input/output): 20 Hz to 20 kHz, +0/-0.5 dB.
Distortion: Less than: 0.012%, THD at 0 dBv.
S/N Ratio (input/output): Better than: -102 dB; -3: better than: -102 dB.
Pro-User Price Range: $342; -X: $449
Model QN-4
Effects Type(s): Quad noise gate.
Dynamics Parameters: Release rate 0.005 to 5 seconds; threshold infinity to -10 dBv.
Operational Controls: Threshold (x4); fade time (release time x4).
Selected Standard Features: LED indicator; unit modulation for low noise and distortion.
Frequency Response (input/output): 20 Hz to 20 kHz, ±0.5 dB.
Distortion: Less than: 0.01%. THD at 0 dBv.
S/N Ratio (input/output): Better than: -111 dB.
Pro-User Price Range: $395
Gotham Audio Corp.
741 Washington Street
New York, NY 10014
Phone: (212) 741-7411
Neumann U473A
Inputs: One.
Outputs: One.
Effects Type(s): Compressor, limiter, expander.
Operational Controls: Compressor: ratio, release with auto attack, output, gain, compressor gain. Expander: threshold, recovery time; stopped control, repeatable sweep.
Selected Standard Features: LED gain reduction meter; limiter and expander LED indication; bypass and expander in/out switches; stereo control link.
Frequency Response (input/output): 40 Hz to 15 kHz, ±3 dB.
Distortion: Less than: 0.03%.
S/N Ratio (input/output): Better than: -100 dB, unweighted -6 dB out.
Pro-User Price Range: $945
NTP 179-170
Inputs: Two.
Outputs: Two.
Effects Type(s): Compressor, limiter, expander, gate.
Operational Controls: Gain: 0 to 20 dB with constant output level; hold level is -10 to -50 dB; gate level is 0 to -50 dB.
Selected Standard Features: Automatic release opens all front-panel adjustments, gain/attenuation; LED metering with limiter LED; external gating of limiter/expander control signal.
Frequency Response (input/output): 40 Hz to 15 kHz, ±3 dB.
Distortion: Less than: 0.01%, +15 dBu out.
S/N Ratio (input/output): Better than: -89 dB, unweighted.
Pro-User Price Range: $2,310
EMT 257
Inputs: One.
Outputs: One.
Effects Type(s): Limiter.
Dynamics Parameters: "Soft" limiting with optional pre-emphasis compensation.
Operational Controls: Attack is 50 to 500 microseconds; threshold is -2 to +10 dB; release is 0.25 to 10 seconds and automatic; pre-emphasis compensation.
**MITSUBISHI PRO AUDIO GROUP**
225 Parkside Drive
San Fernando, CA 91340
Phone: (818) 898-2341
**Model CL-22**
Inputs: One
Outputs: One
Effects Type(s): Compressor, limiter
Dynamics Parameters: Variable attack 0.002 to 5 milliseconds at 10 dB; release: 100 milliseconds to 5 seconds at 5 dB
Operational Controls: Expander: gain ratio; attack release: 4 threshold pots; L5 and CL in-switch with indicator LED meter
Selected Standard Features: Feed-forward VCA design with a range of -40 dB to +26 dB adjusted from 2 to 20:1 expansion; from -60 to -200 dB.
Frequency Response (input/output): 20 Hz to 20 kHz; +/- 0.25 dB
Distortion: THD less than 0.05% clout.
S/N Ratio (input/output): Better than: -94 dBm, reference: +4 dB where 0 is 0.775V.
Pro-User Price Range: $295
**Model NSD 120**
Inputs: One
Outputs: One
Effects Type(s): Noise gate
Dynamics Parameters: Attack is less than: 25 microseconds; adjustable release from 0.03 to 5 seconds.
Operational Controls: Release threshold; attenuation pots; auxiliary input keying switch; selectable decay time
Selected Standard Features: Unity gain device: 0 to 50 dB attenuation; sensitivity is 31 dBm at 0.03-second release; LED indicator.
Frequency Response (input/output): 30 Hz to 20 kHz; +/- 0.5 dB
Distortion: THD less than 0.25% at rated output.
**RUPERT NEVE, INC.**
Berkshire Industrial Park
Bethel, CT 06801
Phone: (203) 744-6230
**Series 33609**
Inputs: One or two
Outputs: One or two
Effects Type(s): Compressor, limiter
Dynamics Parameters: Limiting ratio is 100:1; compression ratio is 2:1 to 6:1
Operational Controls: Limit threshold and recovery; limit in/out; fast/slow attack; compression threshold, recovery and ratio; gain make-up; compression in/out; bypass in/out; quad/stereo link; 10-LED gain-reduction meter(s)
Selected Standard Features: Stereo and four-channel switchable linking; all rotary controls are switches.
Frequency Response (input/output): 20 Hz to 20 kHz; +/- 0.5 dB
Distortion: 0.2% at 6:1 compression; .27 dB above 10kHz
S/N Ratio (input/output): Better than: -75 dB at 0 dB with 20 dB gain; make up.
Pro-User Price Range: Starting from $1,465
**OMNI CRAFT, INC.**
P.O. Box 1069
Palatine, IL 60078
Phone: (800) 562-5872
**Model GT4A**
Inputs: Four
Outputs: Four
Effects Type(s): Noise gate.
Operational Controls: Threshold; range; and release
Selected Standard Features: Key switch for LLDs.
Frequency Response (input/output): 0 Hz to 100 kHz; +/- 0.5 dB
Distortion: "Not measurable."
S/N Ratio (input/output): As above.
Pro-User Price Range: $195
**Model GTX (D) or (K)**
Inputs: One.
Outputs: One.
Effects Type(s): Noise gate.
Operational Controls: Switchable high- and low-pass filters; switchable ducking circuits.
Selected Standard Features: Designed to fit two self-powered, rack-frame sizes.
**ORBAN ASSOCIATES, INC.**
645 Bryant Street
San Francisco, CA 94107
Phone: (818) 843-7567
**Models 412A/414A/422A/424A**
Inputs: One, one, two and two, respectively.
Outputs: As above.
Effects Type(s): Compressor, limiter; 424A: gated compressor, linear de-esser.
Dynamics Parameters: 35 dB gain reduction; 424: 25 dB gain reduction.
Operational Controls: Input/output attenuation; threshold; attack and release time; compression ratio; hard-wire system bypass. 424 has de-esser sensitivity.
Selected Standard Features: Control interaction; output adjustments automatically change to attack time; phone jacks and barrier strip.
Frequency Response (input/output): 20 Hz to 20 kHz; +/- 0.25 dB
Distortion: Less than: 0.04% at 100 Hz. 424 and 424A: less than 0.03% at 1 kHz.
S/N Ratio (input/output): Better than: -90 dB, typical.
Pro-User Price Range: 412A: $425; 414A: $799. 424A: $989; 422A (mono): $629
**PEARL INTERNATIONAL, INC.**
406 Harding Industrial Drive
Nashville, TN 37222-1240
Phone: (615) 833-4477
**Model CO-04**
Inputs: One
Outputs: One
Effects Type(s): Compressor.
Operational Controls: Attack; tone level; sustain.
Selected Standard Features: Attack and tone adjustment.
Frequency Response (input/output): N/A.
Distortion: N/A.
S/N Ratio (input/output): N/A.
Pro-User Price Range: $115
**Model SU-19**
Inputs: Two.
Outputs: Two.
Effects Type(s): Noise gate.
Operational Controls: Threshold; decay; gate indicator.
Selected Standard Features: Stereo capable.
Frequency Response (input/output): N/A.
Distortion: N/A.
S/N Ratio (input/output): N/A.
Pro-User Price Range: $179.50
**ROCKTRON CORP.**
2146 Avon Industrial Drive
Auburn Heights, MI 48057
Phone: (313) 853-3055
**Model HUSH II/IIB/IIC**
Channels: One; IIC: two.
Effects Type(s): Single-ended noise reduction system.
Dynamics Parameters: Expander 30 dB; dynamic filter 1.5 kHz to 30 kHz.
Operational Controls: In/out switch; line/instrument switch; expander/threshold control; IIC duplicates these controls on two channels.
**HUSH II**
Frequency Response (input/output): 30 Hz to 20 kHz, ±0.5 dB.
Distortion: THD less than: 0.2%.
S/N Ratio (input/output): Better than: -110 dB dynamic range.
Pro-User Price Range: II: $200; IIB: $250; IIC: $330
**Model 120A-140A-180A**
Channels: Four, eight and 16, respectively.
Effects Type(s): Encode/decode noise-reduction system.
Dynamics Parameters: 2:1 compression to 1:2 expansion.
Operational Controls: Record playback levels, bypass switches.
Selected Standard Features: Two, four, and eight channels (respectively) of simultaneous encode-decode RCA connectors; 30 dB effective noise reduction.
Frequency Response (input/output): 30 Hz to 20 kHz, ±0.5 dB.
Distortion: THD less than: 0.05%.
S/N Ratio (input/output): Better than: -95 dB, A weighted.
Pro-User Price Range: 120A: $319; 140A: $545; 180A: $950
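The 2:1/1:2 companding quoted above halves level excursions (in dB) on the way to tape and doubles them back on playback; hiss added in between sits below reference and gets pushed further down by the expander, which is where the quoted 30 dB of effective noise reduction comes from. A rough numeric sketch (the reference level and signal figures are chosen for illustration only):

```python
REF_DB = 0.0  # illustrative reference ("0 dB") level

def encode(level_db: float) -> float:
    """2:1 compression before tape: halve the distance from reference."""
    return REF_DB + (level_db - REF_DB) / 2

def decode(level_db: float) -> float:
    """1:2 expansion on playback: mirror of encode, restoring dynamics."""
    return REF_DB + (level_db - REF_DB) * 2

signal = -8.0   # program material
hiss = -30.0    # tape noise added between encode and decode
print(decode(encode(signal)))  # → -8.0 (signal restored exactly)
print(decode(hiss))            # → -60.0 (hiss pushed 30 dB further down)
```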
**Model 300-310**
Inputs: One.
Outputs: One.
Effects Type(s): Compressor, peak limiter, single-ended noise reduction; and compressor, respectively.
Dynamics Parameters: 25 dB compression and limiting; 30 dB expansion; 310 has 25 dB compression.
Operational Controls: 300: compression, attack, release, limiter threshold, bypass, side chain, in/out meter, expander threshold, output gain; 310: input level, expander threshold, output level, in/out switch, compression switch.
Selected Standard Features: Logarithmic compression, simultaneous peak limiting and 30 dB noise reduction; single rack space; 310: LED gain reduction meter, program dependent ratio, attack/release.
Frequency Response (input/output): 30 Hz to 20 kHz, +0 -1 dB.
Distortion: 0.1% maximum.
S/N Ratio (input/output): Better than: -115 dB dynamic range; 310: 95 dB dynamic range.
Pro-User Price Range: 300: $429; 310: $219.
**Powerplay Deluxe/Basic**
Inputs: Two.
Outputs: Seven.
Effects Type(s): Compressor, EQ, distortion; HUSH II, echo, stereo chorus, exciter; Basic features stereo ambient chorus.
Operational Controls: Membrane touch switching; foot switchable; ambient chorus; 2, 4, 5 delay taps; exciter; output gain loop.
Selected Standard Features: Program dependent compressor; automatic Hush II, digitally controlled; full foot switchable.
Frequency Response (input/output): N/A.
Distortion: N/A.
S/N Ratio (input/output): N/A.
Pro-User Price Range: Deluxe: $1,270. Basic: $699
**RX-1/-2H**
Inputs: Two.
Outputs: Two.
Effects Type(s): Exciter/image; 2H features built-in Hush II single-ended noise reduction.
Frequency Response (input/output): N/A.
Distortion (input/output): Less than: 0.03%.
S/N Ratio: -92 dBv, A weighted.
---
**PHOENIX AUDIO LAB, INC.**
P.O. Box 127
Manchester, CT 06040
(203) 649-1199
**Loft Model 400B**
Effects Type(s): Quad gate, limiter.
Dynamics Parameters: Gain reduction: 30 dB; attenuation to -27 dB.
Operational Controls: Gate, limiter, threshold, attack/release, input/output impedance.
Selected Standard Features: Four independent limiters with noise gates; feed-forward; side-channel processing; front-panel LED indicators; balanced and unbalanced inputs.
Frequency Response (input/output): N/A.
Distortion (input/output): Less than: 0.2%.
S/N Ratio: -95 dBv, A weighted; reference 0 dB
Pro-User Price Range: $602
**Loft Model 410**
Inputs: Two.
Outputs: Two.
Effects Type(s): Compressor, limiter, expander, gate.
Dynamics Parameters: Expander — attenuation: switch selectable to -3 dB, -15 dB, -40 dB slope; switch selectable to 1:12, 1:5, 1:20; threshold: -60 dBv to 0 dBv; attack time: 5 milliseconds; release time: 10 milliseconds to 5 seconds; control input: switchable normal or external "Key." Compressor — slope: 1:1 to ∞:1; threshold: -10 dBv to +10 dBv; attack time: 2 to 200 milliseconds; release time: 5 milliseconds to 5 seconds. Limiter — attack time: 4 dB per millisecond; release time: 2 dB per millisecond.
Selected Standard Features: Ability to compress, expand, and limit simultaneously.
Frequency Response (input/output): N/A.
Distortion (input/output): Less than: 0.03%.
S/N Ratio: -92 dBv, A weighted.
**SCV, INC.**
414 North Sparks Street
Burbank, CA 91506
Phone: (818) 843-7567
**Model NGS-2**
**Effect Type(s):** Stereo noise gate with frequency-shaped gate trigger
**Dynamics Parameters:** +20 dBv; HF and LF gate settings
**Operational Controls:** Balanced and transformerless balanced inputs
**Selected Standard Features:** -90 dB response capacity
**Frequency Response (input/output):** 20 Hz to 20 kHz, +0/-1 dB
**Distortion:** 0.01%
**S/N Ratio (input/output):** Better than -84 dB
**Pro-User Price Range:** $697
---
**SOUND PERFORMANCE LABORATORY**
11 Burlingame Avenue, Suite 5
Burlingame, CA 94010
Phone: (415) 344-8787
**Model USM-4**
**Inputs:** One
**Outputs:** One
**Effects Type(s):** Compressor, limiter, exciter, noise gate
**Dynamics Parameters:** Noise gate release time of 6 milliseconds to 3 seconds; attack from 4 microseconds to 7 milliseconds.
**Operational Controls:** Variable filter control; high, mid and low filter; EXARHY psycho-acoustic processor; noise gate processor; attack; threshold; release
**Selected Standard Features:** XLR connectors; LED level indicator
**Frequency Response (input/output):** N/A
**Distortion:** Less than 0.01%
**S/N Ratio (input/output):** Better than -93 dB
---
**SPECTRA SONICS**
3750 Airport Road
Ogden, UT 84403
Phone: (801) 392-7531
**Model 610/601**
**Inputs:** One
**Outputs:** One
**Effects Type(s):** Compressor, limiter
**Dynamics Parameters:** Compression ratio of 1:1 to 100:1; attack time: limiter, 100 nanoseconds to 2 microseconds; compressor, 100 nanoseconds to 2 milliseconds; release time: limiter, less than 90 milliseconds; compressor, variable from 50 milliseconds to 10 seconds
**Selected Standard Features:** Two units allow stereo tracking; 601 modular plug-in card allows daisy-chaining of unlimited number
**Frequency Response (input/output):** 20 Hz to 20 kHz, +0/-1 dB
**Distortion:** Less than 0.1%; 30 Hz to 20 kHz full output
**S/N Ratio (input/output):** Better than -80 dB, +4 dBm, unweighted
**Pro-User Price Range:** 610: $699, 601: $142
---
**SUMMIT AUDIO, INC.**
P.O. Box 1678
Los Gatos, CA 95031
(408) 395-2448
**Tube Compressor Limiter**
**Inputs:** One
**Outputs:** One
**Effects Type(s):** Vacuum-tube compressor, limiter
**Dynamics Parameters:** Up to 10 dB gain reduction
**Operational Controls:** Gain; gain reduction; attack; release; manual and bypass
**Selected Standard Features:** Stereo linkable; side-chain access
**Frequency Response (input/output):** N/A
**Distortion (input/output):** N/A
**S/N Ratio:** N/A
**Pro-User Price Range:** $1,000
---
**SYMETRIX, INC.**
4211 24th Avenue West
Seattle, WA 98199
Phone: (206) 282-2555
**Model CL-150/501**
**Inputs:** Three
**Outputs:** Three
**Effects Type(s):** Compressor, limiter
**Dynamics Parameters:** 0.4:1 to infinite compression
**Operational Controls:** Threshold, attack, release; auto/manual; in/out; stereo slave; output gain
**Selected Standard Features:** XLR or quarter-inch inputs; automatic or manual operation; side chain; insertion capability; 501 has peak limiting
**Frequency Response (input/output):** 20 Hz to 20 kHz, +0/-1 dB
**Distortion:** THD less than 0.025% @ 0 dBm at 1 kHz
**S/N Ratio (input/output):** Better than -100 dB at 0 dBm, 1 kHz
**Pro-User Price Range:** $750
---
**U.S. AUDIO, INC.**
U.S. Distributor - Valley People, Inc.
P.O. Box 10878
Nashville, TN 37204
**Gate/904**
**Inputs:** Four and one, respectively
**Outputs:** Four and one, respectively
**Effects Type(s):** Noise gate, expander; the 904 can be housed in the same power-supply frame
**Dynamics Parameters:** Range of attenuation is 0 dB to 80 dB; maximum input level before clipping is +24 dB; maximum output level is +21 dBm (600 ohm or greater)
**Operational Controls:** Variable threshold, release and range controls; mode switch allows switching of the unit in or out of the audio path, or to be "triggered" by an external or "keying" signal; mode select switch places each channel in the noise gating mode, 1:2 expansion mode, or 2:3 noise reduction mode
**Selected Standard Features:** Three-LED "stop light" metering; green LED indicates a "full on" or unity-gain condition; yellow LED provides visual indication of ongoing expansion, while red LED shows maximum attenuation as determined by the range control
**Frequency Response (input/output):** 20 Hz to 20 kHz, ±0.5 dB
**Distortion:** Output — THD at unity gain less than or equal to 0.015%; SMPTE IMD at unity gain less than or equal to 0.04%
**S/N Ratio (input/output):** Better than -82 dB, reference +4, unweighted
**Pro-User Price Range:** $435; 904: $250
---
**VALLEY PEOPLE, INC.**
P.O. Box 40306
Nashville, TN 37204
Phone: (615) 383-4737
**Model 430**
**Inputs:** Two
**Outputs:** Two
**Effects Type(s):** Noise gate expander, limiter, de-esser (all in one-over-one device)
**Dynamics Parameters:** Range of attenuation is 0 dB to 60 dB; maximum input level before clipping is +24 dB; maximum output level is +21 dBm (600 ohm)
**Operational Controls:** Variable threshold, release, range and output control for each channel; source selection (direct mic or line); switchable input which is fed to the detector; mode switch; detector switch
**Selected Standard Features:** Eight LED gain reduction meter; clipping warning indicator; external input connector; remote meter/control input connector
**Frequency Response (input/output):** 20 Hz to 20 kHz, +0/-1 dB
**Distortion:** Output — THD at unity gain less than or equal to 0.015%; SMPTE IMD at unity gain less than or equal to 0.04%
**S/N Ratio (input/output):** Better than -82 dB, reference +4, unweighted
**Pro-User Price Range:** $495
---
**Model 522**
**Inputs:** Four
**Outputs:** Four
**Effects Type(s):** Compressor, limiter, expander, gate ducker
**Dynamics Parameters:** 1:1 compression; 60 dB expansion capable
**Operational Controls:** Threshold, attack, release; range ratio; channel in/out; mode select; internal
**Frequency Response (input/output):** 20 Hz to 20 kHz, ±0.5 dB
**Distortion:** Output — quiescent distortion at +10 dB input — less than 0.04% at 1 kHz THD at unity gain; less than or equal to 0.3% SMPTE IMD at unity gain (typically 0.1%).
**S/N Ratio (input/output):** Better than -88 dB, reference +4, unweighted.
**Pro-User Price Range:** $560
---
**Model 811**
**Inputs:** One
**Outputs:** One
**Effects Type(s):** Limiter, compressor, ducker (voice-over device).
**Dynamics Parameters:** Gain reduction range is 0 dB to 48 dB; maximum input level before clipping is +24 dBm; maximum output level is +21 dBm (600 ohms).
**Operational Controls:** Variable threshold, release, range, gain and ratio controls; linear/logarithmic release-shape switch; mode switch allows switching of the unit in or out of the audio path, or eliminates the effect of an external signal appearing at the side-chain input.
**Selected Standard Features:** 13 LED gain reduction meter.
**Frequency Response (input/output):** 20 Hz to 20 kHz, ±0.5 dB
**Distortion:** Output — quiescent distortion at +10 dB input — less than 0.01% at 1 kHz THD at unity gain; less than or equal to 0.025% SMPTE IMD at unity gain.
**S/N Ratio (input/output):** Better than -87 dB, reference +4, unweighted.
**Pro-User Price Range:** $400
---
**Kepex II**
**Inputs:** One
**Outputs:** One
**Effects Type(s):** Noise gate, expander.
**Dynamics Parameters:** Range of attenuation is 0 dB to 80 dB; maximum input level before clipping is +24 dBm; maximum output level is +21 dBm (600 ohms).
**Operational Controls:** Variable threshold, release, range and ratio controls; linear/logarithmic release-shape switch; mode switch allows switching of the unit in or out of the audio path, or to be "triggered" by an external or "keying" signal appearing at the side-chain input.
**Selected Standard Features:** 13 LED gain reduction meter.
**Frequency Response (input/output):** 20 Hz to 20 kHz, ±0.5 dB
**Distortion:** Output — quiescent distortion at +10 dB input — less than 0.01% at 1 kHz THD at unity gain; less than or equal to 0.05% SMPTE IMD at unity gain.
**S/N Ratio (input/output):** Better than -87 dB, reference +4, unweighted.
**Pro-User Price Range:** $400
---
**Model 440**
**Inputs:** One
**Outputs:** One
**Effects Type(s):** Compression, expanded compression, peak limiting, FM limiting, AGC, dynamic silence processing, peak clipping.
**Dynamics Parameters:** Gain reduction range (compression) is 0 dB to 40 dB, reference 0 dB to 20 dB; maximum input level at 1 kHz is +24 dB balanced, +21 dB unbalanced; maximum output level is +24 dBm balanced, +21 dBm unbalanced (600 ohm).
**Operational Controls:** Variable threshold, release and attack controls; variable expand/threshold variable limiter (clipper) threshold; variable limiter release; variable output control.
**Selected Standard Features:** Pre-emphasis limiting; auto attack and release mode for compressor and expander; AGC; variable peak clipper; linking for two-unit operation as either master, slave or stereo-coupled configuration; selectable VU meter mode for input or output reading; hardware bypass; electronically balanced inputs and outputs.
**Frequency Response (input/output):** 20 Hz to 20 kHz, ±0.5 dB
**Distortion:** Output — less than or equal to 0.01% at 1 kHz THD at unity gain; less than or equal to 0.025% SMPTE IMD at unity gain.
**S/N Ratio (input/output):** Better than -84 dB, reference +4, unweighted.
**Pro-User Price Range:** $599
---
**Model 610**
**Inputs:** Two
**Outputs:** Two
**Effects Type(s):** Compression, expanded compression, peak limiting, noise gate, FM pre-emphasis limiting, voice-over, expansion, linear expansion.
**Dynamics Parameters:** Range of gain reduction is 0 dB to 60 dB; maximum input level at 1 kHz is +24.5 dB balanced; maximum output level is +24 dBm, +21 dBm unbalanced (600 ohm).
**Operational Controls:** Variable compressor limiter threshold, variable compressor limiter ratio, variable expander/noise gate threshold, variable range, release and gain; switch selectable compressor/expander time; selectable expander slope: stereo couple switch included.
**Selected Standard Features:** Auto release mode; eight-LED gain reduction meter; electronically balanced inputs and outputs; external input connector; peak reversion correction circuitry; threshold, ratio and coupling circuitry; gain recovery configuration circuitry.
**Frequency Response (input/output):** 20 Hz to 20 kHz, ±0.5 dB
**Distortion:** Output — less than 0.01% at 1 kHz THD at unity gain; less than or equal to 0.2% SMPTE IMD at unity gain.
**S/N Ratio (input/output):** Better than -87 dB, reference +4, unweighted.
**Pro-User Price Range:** $595
For additional information circle #220
---
**YAMAHA INTERNATIONAL CORP.**
6600 Orangethorpe Avenue
Buena Park, CA 90620
Phone: (714) 522-9011
---
**Model GC7020**
**Inputs:** Four
**Outputs:** Four
**Effects Type(s):** Compressor, limiter
**Dynamics Parameters:** Compression ratio 1:1 to infinity:1; attack 0.2 to 20 milliseconds; release 50 milliseconds to 2 seconds.
**Operational Controls:** Expander gate threshold, compression, attack, release; compressor threshold.
**Selected Standard Features:** 24 dB gain reduction meter for each channel; detector loop; stereo or dual-mono modes; 32 dB limiting.
**Frequency Response (input/output):** 20 Hz to 20 kHz, ±2 dB
**Distortion:** Less than 0.01%
**S/N Ratio (input/output):** Better than -87 dB.
**Pro-User Price Range:** $295
For additional information circle #221
---
**At Last, a 200 Watt Coax!**
Everyone knows the benefit of a well designed coaxial loudspeaker — a single point sound source. Until now, the most popular coaxials presented severe power limitations — had to have "trick" crossovers and needed time compensation. Gauss technology has changed all that.
The new Gauss 3588 is the first computer designed coaxial. But, we know computers can't hear, so we used a panel of "golden ears" at the fall AES to help determine the final sound of the loudspeaker. This combination of computer design and great ears gives you a coax with the sound and the power you want!
With a conservative power rating of 200 watts RMS, this new Gauss coaxial has been tested to 750 watts delivering clean sound — and can "coast" along at control room levels still delivering great sound. Metric sensitivity is 95dB for the low frequency and 109dB HF.
Because of our proprietary design parameters, both drivers are virtually in the same acoustic plane, eliminating the need for costly time compensation networks. For bi-amp operation, you can use any standard professional quality crossover.
The unique coax horn was designed using Gauss's exclusive Computer Aided Time Spectrometry (CATS™) program. This design provides an extremely stable image, reduced second harmonic distortion and virtually no midrange shadowing.
For additional information on the new Gauss coaxial loudspeaker, call or write Cetec Gauss, 9130 Glenoaks Blvd., Sun Valley, CA 91352, (818) 875-1900. Or better yet, hear it at a selected sound specialist soon.
Sound Ideas for Tomorrow Today!
gauss by Cetec
© 1984 Cetec-Gauss
For additional information circle #286
Lexicon has been manufacturing digital reverberation systems for many years now, and has probably placed product in every major recording studio in the world. The company began with the 224, improved on that design with the 224X, and then brought out a very user-friendly control head, the LARC (Lexicon Alphanumeric Remote Console). While it may seem unusual to review a unit that has already proved to be so popular, Lexicon recently introduced a new software upgrade with additional reverb and special effects programs that will surely be of interest to current 224X owners.
**System Configuration**
The 224XL comprises a rack-mountable mainframe unit and the LARC remote control head, which connects to the processor via a 50-foot flexible cable. The mainframe unit is fan cooled, and should be kept in as cool a spot as possible. Since all control information is displayed on the LARC, the mainframe can be stored almost anywhere in the studio control room or live-sound equipment rack. The review unit came with the standard 50-foot cable, although runs of up to 1,000 feet are possible with an optional remote power supply. While I didn't verify this with the factory, it seemed to me that the mainframe's fan was much quieter than on previous models. Normally, I would move the unit out of the control room for mixdown, but this one sat on the floor in front of the console and was pretty quiet in operation.
The device is a two-input, four-output processor that can be run either in a balanced or unbalanced configuration. Inputs may be either mono or stereo, while the outputs operate as mono, stereo, dual-stereo or quad, depending on the program selected. Front-panel adjustment pots are provided for input and output level adjustment, which is a simple procedure using one of two delay-line diagnostic programs.
The LARC is a clean, well laid out piece of work that is simple to use yet delivers quick, powerful control capabilities into the engineer's hands. Since a description of the LARC front panel would be wordy and rather boring, take a look at Figure 1 for a simplified description. As can be seen, the dedicated sliders and keys allow for fast, sure operation, and keys are sized to allow easy operation with fingers. By using the main and parameter display windows, control functions can quickly and easily be learned without continually referring back to the user's manual. With the cost of studio time being what it is, not to mention shrunken recording budgets, the LARC can prove to be a speedy tool both for a first-time user and the experienced pro. All displays are quite legible, and programs are both named and numbered for easy access. Parameter descriptions are abbreviated yet logical.
**Reverb and Effects Programs**
Accessing programs via the LARC follows a logical path. Programs are grouped according to acoustic nature in five categories called Banks. Within each Bank are stored up to six Programs, each of which holds up to eight Variations. (See Table 1 for a complete listing of the 224XL's Hall, Room, Plate, Effects and Split Programs.) Any of these variations may be altered by the operator, simply by calling up the different Pages. Parameters within each page are adjusted using the six control sliders. Most Programs have four or five different Pages (all have at least one), and each Page contains up to six Parameters. As you can see, not only does the 224XL come with 83 factory-loaded Programs, but each Program has many Variables that can be altered to fit the user's particular needs. Flexibility is a key word here, and the 224XL allows a great deal of creative control.
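The Bank/Program/Variation/Page addressing described above maps naturally onto a nested structure. A hypothetical Python sketch of that layout (class names, field names, and the sample parameter values are mine, not Lexicon's):

```python
from dataclasses import dataclass, field

@dataclass
class Program:
    name: str
    variations: list[str] = field(default_factory=list)          # up to 8 per Program
    pages: list[dict[str, float]] = field(default_factory=list)  # up to 6 Parameters per Page

# Five Banks, each holding up to six Programs (only one filled in here)
banks: dict[str, list[Program]] = {
    "Halls": [Program("Concert Hall",
                      variations=[f"V{i}" for i in range(1, 8)],  # 7 Variations
                      pages=[{"RT-mid": 2.4, "RT-low": 3.0}])],
    "Rooms": [], "Plates": [], "Effects": [], "Splits": [],
}

# Recall path mirrors the LARC: Bank -> Program -> Variation
prog = banks["Halls"][0]
print(prog.name, prog.variations[2])  # → Concert Hall V3
```

The six control sliders would then address the keys of whichever Page is currently selected, which is why no Page carries more than six Parameters.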
Personalized program alterations may be stored for future use in one of 36 non-volatile Registers. These altered programs may be assigned to any of 10 Banks, each of which holds up to 10 Registers. Both Banks and Registers may be labeled with an alphabetic name and, for easy recall, each storage location is assigned a number that also displays the Program from which it was derived. I found the memory storage and labeling process to be easy to learn, and quick to implement. The LARC will even tell you when the memory Registers are full.
If 36 non-volatile memories aren't sufficient, the LARC will download as many programs as you need onto an external cassette deck, which may then be used to upload 36 at a time. At any one time you can have up to 119 programs at your fingertips, with many more waiting in the wings. Tape storage is a relatively simple process; Lexicon supplies the necessary cables so all you need is a good quality cassette machine and a good audio tape. The LARC inserts a tone prior to downloading the Registers, so that you can set proper record level to ensure a good transfer. LARC will also verify that your transfer is good or bad by comparing the data recorded on tape to that in the Registers. Recall from tape to memory can also be verified.
Independent engineers and mixers
will find tape storage a very attractive feature. One has only to carry a cassette tape of your favorite Programs to a studio equipped with a 224XL in order to recreate that "signature sound." Also, all effects developed for an album project, for example, can be easily reproduced in the live-concert situation with a very short setup time. (I recall watching the house-sound mixer for The Police painstakingly programming pages of data by hand into a 224X at a concert in Portland; LARC will make his future concerts much simpler!)
**Software Updates**
Lexicon is making a statement with its new software package, by reaffirming a commitment to improving quality and holding a competitive place in the market of creative programming. I found several improvements in this unit that I believe make the 224XL distinctly superior to past models. The new system handled transient material much better than did the 224X of my past experiences. I was able to input zero to +6 VU in most programs without noticeable weirdness on the top-end (although it is better to play it a little safer with all digital devices). This also meant that with higher input levels, output levels could be brought down much lower than previously, resulting in greatly reduced output noise and hiss. The 224XL also handled bass frequencies much more naturally, which came in handy while doing Pop-Dance mixes calling for effects such as placing the kick in reverb. Some Programs — usually those with exaggerated high-end response — still got rather "sizzly" on top, but this could be controlled by reducing the input level or rolling off the high-frequency crossover point.
The manufacturer has also done its homework on Programs, and has put something for everyone into the 224XL. The inclusion of 83 factory-loaded Programs certainly provides a good starting point for program modification and, in fact, quite a few of them are very attractive as shipped by the factory. Lexicon has provided many concepts from past models, adding to and improving on ones that work. Also included are several nice programs from the competition, with some sonic improvements. I tried to take enough time to run through all the programs, yet found that some attracted more attention than others. From a glance at Table 1, it will be easy to see how Lexicon groups its banks of Programs, a factor that makes it easy to quickly find a middle ground when searching for a particular effect.
**Session Applications**
For this review, I worked with the 224XL mostly on rock material involving instrumentation of sax, drums, bass, guitar, synthesizers of many varied types, and both male and female vocalists. I did not use any real string or horn sections which would have been nice in evaluating the larger halls.
• The sound of the Concert Halls was really too big for the test material, but I can give a few impressions. Initially, I noticed that the Programs tended to load up, just as halls with extremely long reverb times tend to do. Synthesizers sounded really huge, and the Programs were very effective when using more reverb than original signal.
| Programs | Halls | Rooms | Plates | Effects | Splits |
|----------|------------------------|---------------------|--------------------|--------------------------------|-----------------|
| 1 | Concert Hall | Room | Plate | Chorus & Echo | Hall/Hall |
| | 7 Variations | 4 Variations | 6 Variations | 4 Variations | 1 Variation |
| 2 | Bright Hall | Small Room | Small Plate | Resonant Chords | Plate/Plate |
| | 5 Variations | 4 Variations | 6 Variations | 1 Variation | 2 Variations |
| 3 | Dark Hall | Chamber | Constant Density | Multiband Delay | Plate/Hall |
| | 7 Variations | 1 Variation | Plate A | 1 Variation | 1 Variation |
| 4 | Rich Chamber | | Constant Density | | Plate/Chorus |
| | 8 Variations | | Plate B | | 1 Variation |
| 5 | Dark Chamber | | Rich Plate | | Rich Split |
| | 8 Variations | | 8 Variations | | 1 Variation |
| 6 | Inverse Room | | | | |
| | 3 Variations | | | | |
TABLE 1: LEXICON 224XL SOFTWARE PROGRAMS
We, the undersigned, ask only one thing of a piano.
Leonard Bernstein André Previn Billy Joel
Luciano Pavarotti Georg Solti Aaron Copland
John Williams Jorge Bolet Mickey Gilley
Ronnie Milsap Liberace Dave Brubeck
That it be a Baldwin.
Baldwin
Without equal.
For additional information circle #268
• The Bright Halls are described as similar yet much brighter than the Concert Halls. I found this to be true, and very nice for creating reverb effects. Variation #3 (V3) worked well to create a modern drum sound, and sounded very crisp and fast with a large ambience. V4 had a great low-end and good midrange presence that sounded really good on synthesized cellos.
• The Dark Halls are designed to more realistically portray natural halls than the other two programs. This is achieved with a more drastic high-frequency rolloff, simulating natural reverb times. V3 was very effective with synthesizers when featuring low-end information, or creating a dark or ominous mood. V4 gave a natural, very full sound to stacked background vocals. I had some problem with V5, which manifested as a low feedback that built slowly even after input was removed — it sounded just like a close-miked gong.
All Hall Programs incorporate adjustable pre-echo delays that can add to the realism by emulating the slapback we’re used to hearing in most concert arenas or large halls.
• Room is similar to the Hall Programs in that it uses similar parameters, yet creates the ambience of a smaller space — one feels much more of a natural sense of walls and dimension with this program. V4 demonstrated some interesting vocal presence effects and ambience. I also found a great kick drum ambience when manipulating parameters.
• Small Room recreates a space about 1/4th the volume of Room. Walls feel as though they are much closer, and I found many different instruments and voices to sound quite nice with V4. There are definite applications in Automatic Dialog Replacement (ADR) with this Program.
TECHNICAL SPECIFICATIONS FOR LEXICON 224XL DIGITAL REVERBERATOR
Programs: 18 programs, 59 preset variations, expandable through software updates.
Register storage: 36 nonvolatile registers divided into 10 user-labeled banks with one to 10 registers per bank.
Reverberation time: Adjustable in two bands from approximately 0.2 to 70 seconds (program-dependent).
Additional controls: Four mode-select buttons (BANK, PROG, VAR, REG) used with 10 numeric-select buttons (1 to zero); tape storage and register control buttons (TAPE, STO); a page-select button (PAGE); three auxiliary control buttons (MUTE, PARAM, 2nd F); six sliders for control of up to 42 parameters per program, with associated display-select buttons.
LARC display: Two lines of 12 alphanumeric LEDs for interactive menu-driven display; additional line of 24 alphanumeric LEDs (six groups of four for each slider); dual 16-position LED headroom indicator (calibrated -24 to +12 dBm plus overload).
Mainframe controls: Power and indicator light; system reset; left and right input level adjustments; A, B, C, D output level adjustments.
Frequency response: 20 Hz to 15 kHz, ±1.5 dB; 20 Hz to 12 kHz, ±0.5 dB.
Dynamic range: Reverberant mode: 84 dB typical, 81 dB minimum relative to reference level; 20 Hz to 20 kHz noise bandwidth for all reverb times from zero to 10 seconds. Nonreverberant mode: 90 dB typical, 86 dB minimum; 20 Hz to 20 kHz noise bandwidth.
Total Harmonic Distortion (THD and noise): 0.04% typical, 0.07% maximum at reference level for all reverberation times between zero and 35 seconds.
Interchannel Crosstalk: -55 dB at 1 kHz.
Inputs: Two, balanced and transformer isolated; impedance: 20 kohm; maximum level adjustable: +8 to +18 dBm.
Outputs: Four, balanced and transformer isolated; impedance: 90 ohm; maximum level adjustable: +8 to +18 dBm; power-on muting.
LARC cable: 50-foot extra-flexible cable; cables can be linked — up to 1,000 feet possible with optional remote power source.
Power: Mainframe: nominal is 100, 120, 220, 240 VAC (−10%, +5%) switch-selectable; 50 to 60 Hz; 150 watts. LARC: normally powered through 224X mainframe; miniature jack accepts optional remote power supply for distances greater than 100 ft — 10 to 20 VDC or 10 to 20 VAC, 6.25 watts.
Diagnostic programs: Control and display via LARC; automatic at power-up or reset.
Size: Mainframe: Standard 19-inch rack mount: 19 by 7 by 15 inches (W×H×D), (483×178×381 mm). LARC: 5.9 by 9.5 by 3.2 inches (W×H×D), (150×242×82 mm).
Weight: Mainframe: 34 pounds (15.5 kg); 40 pounds (19 kg) shipping. LARC: 1.9 pounds (0.9 kg); 7 pounds (3.2 kg) shipping.
Automation interface: Optional RS-232C serial interface.
Suggested Pro-User Prices: 224X with LARC: $12,500; LARC retrofit kit: $800; V8.1 software: $95.
Lexicon Inc. 60 Turner Street Waltham, MA 02154 (617) 891-6790
• **Chamber** is really a very good Program. My favorite chambers were at Wally Heider's in San Francisco (now Hyde Street Studios). It is hard to build a good acoustic chamber, but Lexicon has done a fine job digitally. It is a realistic chamber with beautiful diffusion. I no longer yearn for Heider's!
• **Rich Chamber** has a very even diffusion that eliminates the sense of walls to a good degree. Dimension may be added to the program by using the pre-echo delays and, combined with a related pre-delay, can make some brilliant spaces. Many variations were impressive, with V4 being a good all-around program. V2 and V5 were especially nice for keyboards, while V3 worked well for drums. V6 and V8 emulated a modern gated drum reverb, with V8 having a faster closing gate. Rich Chamber was one of my favorite 224XL Programs.
• **Dark Chamber**, like Dark Hall, has a sharper high-frequency rolloff to more naturally simulate the sound of real acoustics. The variations are effective, and good acoustic environments are a strong point. This will be another good Program for film work. V1 made a good showing on keyboards, while V5 was very nice for vocals. I found V7 and V8 to be very interesting and created some wild metallic rooms with the parameters.
• **Inverse Room** creates distinctly different effects in its three variations. V1 is your basic “Phil Collins-style” gated-drum/room-mike sound — it is a good sounding program and a great deal of tailoring can be accomplished. The extensive frequency control comes in very handy for emulating the competition’s non-linear programs, or for creating extended frequency response versions. V2 is a backwards reverb with a reverse envelope, to give the impression of reverb building as opposed to decaying. I liked this a lot too, and utilized the delays to time a snare reverb so that it built up when leading into the kick-drum attack. V3 is an enhancement program that is effective for adding increased presence to a voice without actually adding volume.
• **Plate** is a very good representation of a true plate reverb. I especially liked V5, and found it to be realistic. Some variations are much more flexible than a real plate though, since you can perform frequency tailoring and add as many as six pre-echo delays for a more acoustic reverb feeling. Diffusion is also adjustable, and one can find just the right density to complement different instruments.
• **Small Plate** didn’t do a lot for me. I got kind of a hollow feeling from some variations, and I guess it just didn’t suit my taste. Because my first impression was not so strong, I didn’t spend any time manipulating the Program parameters.
• **Constant Density Plate A** is an original program from the 224. It simulates a plate with high initial diffusion, and maintains that density of reflections. This capability differentiates the Program from normal plates in which echo density decreases with time. I didn’t utilize this program as it was not effective with the test material.
• **Constant Density Plate B**, on the other hand, was nice and bright and very impressive with a kick drum. I used V2 and was happy with the thick type of reverb, without getting in the way of input material.
• **Rich Plate** was another dense, “tight” reverb that had many applications. V2 and V5 sounded good with almost any input material from sax to drums to guitar, while V6 was hot for drums and, especially, kick. V4 was used on vocals, producing a rich, fat, bright sound.
All the Reverb Programs discussed so far possess two Parameters that are unique to Lexicon. Low Frequency Stop Decay and Mid Stop Decay proved to be very useful, and really improved the quality of the Programs. These two adjustment Parameters allow you to set the reverb time differently in the absence of input than when input is present. In this way, you have the ability to create long,
---
**Puzzled by Audio/Video/MIDI Sync Lock? It’s SMPL™**
Yesterday it took lots of money and hassle to implement a truly contemporary Audio for Video Studio.
You needed a box to lock a Video transport to the Audio. And boxes to autolocate the Audio and Video transports. And a box to lock and locate the “virtual” MIDI tracks. And more boxes to convert the sync your sequencer likes to the kind your drum set favors.
And an Engineering Degree to tie it all together and work it, and a very friendly banker to help pay for it.
But, today, Sync Tech’s SMPL System performs all of these functions and MORE. In one easy to use, low cost package you get a complete Audio editing, Video lock-up, Instrument syncing system that includes:
- Two machine (Audio or Video) Sync Lock
- 10 point autolocator for both transports.
- MIDI Sync/Autolocate
- 24, 48, 96 Tick/Beat instrument sync
- Automatic Punch In/Out
- DF, NDF, 25 F/S, 24 F/S SMPTE Time Code Generator
- 8 programmable EVENT gates
- Transport remote controls
Best of all, the SMPL System is for real — no “gotchas”. Works with equipment from every major manufacturer from inexpensive Portable Studios to 2” transports, even consumer quality VCRs.
For more information and the name of the nearest dealer who can SMPLify your life, call or write:
SYNCHRONOUS TECHNOLOGIES
P.O. Box 14467 • 1020 W. Wilshire Blvd. • Oklahoma City, OK 73113 • (405) 842-6880
For additional information circle #271
LEXICON 224XL REVERB
flowing reverb tails that fill silent spaces, while maintaining a short reverb time during program material and thus a high degree of clarity. You can achieve the effect by setting the Stop Decay Parameters to longer times, and working with shorter decay-time Parameters. All in all, it's very effective and well thought out.
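The two-state decay behavior described above can be sketched in a few lines. This is a toy model of my own, not Lexicon's DSP: it assumes a simple RT60-style exponential decay, and the threshold and reverb-time figures are illustrative.

```python
# Toy model of the Stop Decay idea: a short reverb time while input is
# present, switching to a long "stop" decay once the input falls silent.
# The threshold and RT60 values passed in are illustrative assumptions.
def decay_gain(input_level: float, threshold: float,
               rt_running: float, rt_stopped: float,
               sample_rate: int = 48_000) -> float:
    """Per-sample feedback gain for a -60 dB decay over the active RT60."""
    rt60 = rt_running if input_level > threshold else rt_stopped
    return 10.0 ** (-3.0 / (rt60 * sample_rate))
```

With input present the tail dies quickly, keeping the mix clear; once input stops, the gain rises toward 1.0 and the tail blooms into the silence.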
Another nice touch is that there is a gate parameter on most Hall, Room and Plate Programs, a feature that provides control for creating many different gated-reverb Programs. Gated reverb is certainly in vogue at the present time, and Lexicon is offering the engineer many options for its use.
Aside from the varied effects possible with the Reverb Programs, the 224XL offers several strictly effect-type programs:
- **Chorus & Echo** can create doubling, flanging, chorused echo and many other special effects.
---
**TABLE 2: EXAMPLE OF CONTROL PAGES AND VARIABLE PARAMETERS FOR RICH CHAMBER**
| Page | Sliders: 1 | 2 | 3 | 4 | 5 | 6 |
|------|------------|---|---|---|---|---|
| 1 | LF Decay 0.1 to 83 sec* | Mid Decay 0.1 to 83 sec* | Crossover 170 Hz to 19.0 kHz* | Treble Decay 170 Hz to 19.0 kHz* | Attack Zero to 99 | Pre-delay Zero to 834 ms |
| 2 | LF Stop Decay 0.1 to 83 sec* | Mid Stop Decay 0.1 to 83 sec* | Chorus Zero to 99 | HF Bandwidth 170 Hz to 19 kHz* | Diffusion Zero to 99 | Definition Zero to 99 |
| 3 | Pre-echo Level 1 L>AD Zero to 99 | Pre-echo Level 2 R>CB Zero to 99 | Pre-echo Level 3 R>AD Zero to 99 | Pre-echo Level 4 L>CB Zero to 99 | Pre-echo Level 5 L>AD Zero to 99 | Pre-echo Level 6 R>CB Zero to 99 |
| 4 | Pre-echo Delay 1 L>AD Zero to 125 ms | Pre-echo Delay 2 R>CB Zero to 125 ms | Pre-echo Delay 3 R>AD Zero to 125 ms | Pre-echo Delay 4 L>CB Zero to 125 ms | Pre-echo Delay 5 L>AD Zero to 125 ms | Pre-echo Delay 6 R>CB Zero to 125 ms |
| 5 | Size 8 to 87 meters | Inactive | Reverb Stop Delay (Gate) Zero to 1.25 seconds | Inactive | Inactive | Inactive |
*Can also be set to infinite.
---
**Question: What Makes A $1700 Sub-Woofer So Popular**
**Answer: Performance, Quality And Value**
The Eastern Acoustic Works BH880, BH440 & BH800 Bent Horns are the only no-compromise true bass horns on the market. Their unmatched high output, exceptionally smooth extended low frequency response and extreme reliability make them the most popular sub-woofers for demanding music reproduction and special effects applications.
Since the introduction of the BH880 in 1978, it has become the standard for high quality music reproduction systems, including Studio 54, The Kennedy Space Center, The Ripley Music Hall and Boston's Kenmore Club.
When you look at the numbers, the BH880 (and its siblings BH440, BH800) offer unmatched value in comparison to vented 18 inch bass systems.
**Performance:**
- Mathematically true bent horn design permits unmatched output capabilities (see charts)
**Quality:**
- EAW's proprietary high density polyurethane foam reinforced multi-ply hardwood construction.
- RCF Laboratory Series drivers with poly-laminated cone and suspensions for 1000 watts AES power handling each driver.
BH880L $1695
Dual driver 18 inch Forsythe bent bass horn sub-woofer reproducer with two RCF LAB L18/851 440 mm drivers, 38 Hz to 150 Hz, 110 dB SPL 1w @ 1m, 72 x 42 x 36
BH440L $1195
Single driver 18 inch Forsythe bent bass horn reproducer with RCF LAB L18/851 440 mm driver, 40 Hz to 150 Hz, 108 dB SPL 1w @ 1m, 72 x 24 x 26
BH800L $1050
Single 18 inch Forsythe bent bass horn reproducer with RCF LAB L18/851 440 mm driver, 43 Hz to 400 Hz, 107 dB SPL 1w @ 1m, 60.5 x 29.75 x 29.75
For more information on these and other EAW professional audio products, call or write:
Eastern Acoustic Works, Inc., 59 Fountain St.
Framingham, MA 01701
Telephone: (617) 620-1478
Forsythe Series
BH880L
BH440L
BH800L
For additional information circle #272
The Program generates three voices per input channel (six total), and each voice can be assigned a separate level, delay, feedback amount, and left-to-right pan position. A chorus control determines the pitch shift for all voices. You can make some seriously "massive" vocal effects with this program, and the potential also exists for guitar and keyboards. V4 was quite unusual, and created a "mutron-type" effect when fed bass guitar. I took three tracks of background vocals — each with four singers on them — modified V1 for full-field panning and depth perception using the level and delay controls, and came out with a really primo sound.
- **Resonant Chords** is a unique program limited only by the engineer's level of weirdness. An input (which does not have to be of musical nature) excites six voices with control parameters of level, pitch, duration of ring, and an overtone lowpass filter. The best thing I can say about this Program is to put it up and see what appeals to you. It certainly is capable of some interesting sounds.
- **Multiband Delay** is just what you would expect. The Program comprises six separate delay lines with high- and low-pass filters, and full left-to-right panning — I used it for sax with spectacular results, and created a combination of center slapback and left-to-right movement with ease.
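A multitap delay of this sort can be sketched as follows. The structure (per-tap delay, gain and pan) follows the review's description, but the code is a generic illustration, not Lexicon's implementation.

```python
# Generic multitap stereo delay: each tap has its own delay (in samples),
# gain, and pan position (0 = hard left, 1 = hard right).
def multitap_delay(x, taps, length):
    left = [0.0] * length
    right = [0.0] * length
    for delay, gain, pan in taps:
        for n in range(length):
            if 0 <= n - delay < len(x):
                s = gain * x[n - delay]
                left[n] += s * (1.0 - pan)   # simple constant-sum pan law
                right[n] += s * pan
    return left, right
```

Feeding an impulse through two taps — one panned left, one right — produces the center-slapback-plus-movement effect described above.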
I believe the Split Programs to be a very strong sales point for the 224XL. With these five setups you can run two completely separate Programs, each with mono input and stereo output. Although control parameters are not always as extensive as those provided for the single Programs, having two Programs in place of one surely makes up for this minor limitation. Splits offered are Hall/Hall (based on Concert Hall V1), Plate/Plate (based on Plate V1), Plate/Hall (as above), Plate/Chorus, and Rich Split (based on Rich Chamber).
Due to the many adjustable parameters available, I would like to stress the extreme amount of program modification that can be performed by the user. Table 2 lists the Control Pages and Variable Parameters for the Rich Chamber program; a glance at this chart will help the reader better understand the 224XL's versatility.
**Upgrade Flexibility**
Technical engineers will be happy to know that Lexicon supports its customers with 14 pages of maintenance information in the 224XL user's manual, including a complete description of each module plus charts showing module and fuse locations. On power-up the 224XL runs a 25-second major component test. If errors are found, two extensive diagnostic programs — one LARC test and one mainframe test — can then be run to identify the exact problem.
Lexicon also supports its users with update kits, and the 224X was designed to allow software changes with ROM (read-only memory) circuits. Anyone with a 224X can request the updates from the factory. In fact, some updates are provided free, yet Lexicon says that many owners don't register their units and so can't be notified when updates are available. The LARC remote-control head may also be retrofitted to any 224X for those who wish to take advantage of its tape storage and superior control features.
The Lexicon 224XL is the most versatile digital reverb and special-effects processor that I have worked with to date. It combines good reverb programs with some nice special effects in a very user-friendly package. The internal programs are easy to operate, and the sounds offered cover a lot of territory.
---
I would like to thank Studio D, Sausalito, CA, for donating the session time needed to perform this in-use operational evaluation.
KURZWEIL 250 DIGITAL SYNTHESIZER WITH 50-kHz SAMPLING AND STORAGE OPTIONS
Reviewed by Bobby Nathan
Of all the current sampling-technology keyboards, none has caused more controversy than the Kurzweil 250. When I first saw the 250 two years ago during the New York AES Convention at the Hilton Hotel, I was blown away! Not only was the demonstration most impressive, the sound of the 250 was electrifying. The unit shown then was still in its early development, and the microprocessor had not yet been compacted to fit into the keyboard's casing as we know it today. During the AES booth demonstration, the sounds were coming from a mainframe computer, which also was handling all of the programming. The promise from Kurzweil was that all the sounds we had heard would be put into a stand-alone unit. But, when the 250 was again shown at the 1984 New York AES as a stand-alone unit, the list price had been upped and the design changed to incorporate the use of an Apple Macintosh computer for sound-sample storage, which meant yet another price increase.
These changes to the 250 had disappointed many potential users. When the sampling option was finally released, again many were disappointed with the 25-kHz sampling rate, and its resultant limited bandwidth. This R-e/p review was written after working with the new 2.2 REV operating software and the 2.0 REV Digitizer software; I was anything but disappointed after hearing the 250's new sampling quality.
To start with, the new sampling rate has been increased to 50 kHz, and is variable among several preset rates. At a 50-kHz sample rate, the total sampling time is 10 seconds; at 41 kHz the time is 11 seconds; at 37.5 kHz it is 13 seconds; at 31.25 kHz it is 15 seconds; and at 25 kHz it is extended to 20 seconds. (If you are unfamiliar with sampling bandwidth versus sampling time, this is the normal trade-off of shorter sampling time for higher frequency response.)
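Those figures imply a fixed pool of sample memory on the order of half a million words (50 kHz × 10 s). The published times don't track one constant exactly, so the sketch below — with an assumed memory size — only illustrates the trade-off, not Kurzweil's exact figures.

```python
# Fixed sample memory: recording time is memory divided by sample rate.
SAMPLE_MEMORY_WORDS = 500_000  # assumption, inferred from 50 kHz x 10 s

def sampling_time(rate_hz: float, memory_words: int = SAMPLE_MEMORY_WORDS) -> float:
    """Seconds of audio that fit in memory at the given sample rate."""
    return memory_words / rate_hz

# Halving the rate doubles the available time (and halves the bandwidth):
# sampling_time(50_000) -> 10.0, sampling_time(25_000) -> 20.0
```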
Sampling sounds into the 250 is quite easy; the user has the ability to divide up the sampling time to multi-sample most instrument sounds. There are five different sampling types, ranging from Quick Take (which functions as the name implies) to more complex schemes. Quick Take has a pre-emphasis equalization curve that brightens the sample, while the second type of sampling, De-emphasis, samples the sound normally. The other three types use compression (slow, normal or fast decay) to reduce digital aliasing and noise. The only trade-off of these latter sampling types is that they require additional processing time, but for a good sample it's all worth it.
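Pre-emphasis of the kind Quick Take applies is commonly a first-order high-boost filter ahead of conversion. The review doesn't give Kurzweil's actual curve, so the coefficient below is a generic assumption:

```python
# Generic first-order pre-emphasis: y[n] = x[n] - a * x[n-1].
# The coefficient a (0.95 here, an assumed value) sets the high boost.
def pre_emphasis(x, a=0.95):
    return [x[0]] + [x[n] - a * x[n - 1] for n in range(1, len(x))]
```

A steady, DC-like signal is attenuated to (1 − a) of its level, while sample-to-sample changes pass almost untouched — which is why the result sounds brighter.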
The 250 also has another interesting feature: an automatic split determinator between two multisamples. For instance, if you sample a piano at middle-C (as the lower sample), and sample again at C one octave above middle-C, the microprocessor will choose whether the split between the two samples should occur on the F or F# above middle-C. The auto-split function also has an override feature that allows you to set the range by striking the lower and upper notes. A computerized filter-adjust feature is also provided that will automatically adjust how the filter closes as the sample gets transposed during multi-sampling.
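The auto-split choice can be pictured as placing the boundary near the semitone midpoint between the two sampled roots. This is my hypothetical reading of the behavior, not Kurzweil's published algorithm:

```python
# Hypothetical midpoint rule for the auto-split between two multisamples.
def auto_split(low_root: int, high_root: int) -> int:
    """Lowest MIDI note assigned to the upper sample (midpoint, rounded up)."""
    return (low_root + high_root + 1) // 2

# Middle C (MIDI 60) and the C above (72): the split lands at 66 (F#),
# consistent with the F-or-F# choice described in the text.
```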
If you are unfamiliar with the way in which timbre changes as the pitch of a sample is transposed, I'll try to explain. The phenomenon, which occurs in all sampling machines, means that if you sample a piano at middle-C and transpose it up to G above, you will notice its timbre (or tone) becoming brighter. For every half-step you transpose a sample upwards in pitch, the timbre keeps getting brighter; as you transpose a sample downwards in pitch the timbre becomes duller. While multisampling at middle-C and C above, at F# (the split point between the two samples) you will notice the upper sample at F# above middle C is very bright, whereas at F the sample is dull. The computer within the 250 compensates for this effect, and dulls the lower sample as it gets transposed upwards in pitch. By doing so, it smoothes out the keyboard and makes the sound seem more even.
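The brightening is a direct consequence of transposition by resampling: every partial is scaled by the same equal-tempered ratio, so the whole spectrum shifts together. A small worked example (generic math, not anything specific to the 250):

```python
# Equal-tempered playback-rate ratio for a transposition in semitones.
def transpose_ratio(semitones: float) -> float:
    return 2.0 ** (semitones / 12.0)

# Transposing a sample up 6 semitones (C to F#) scales a 1 kHz partial
# to about 1.414 kHz; every partial shifts by the same factor, which is
# why the perceived timbre moves brighter as a whole.
shifted = 1000.0 * transpose_ratio(6)
```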
Looping samples on any sampling keyboard has never been a favorite pastime for me, but nevertheless has to be done to make certain samples usable. Even though no on-screen display is provided on the 250 for viewing prime edit points, the unit does have another interesting feature to improve the sound quality of loops, called Crossfade. The crossfade feature allows you to fade the end of a loop into its start point; you simply loop the sample as close as possible and then adjust the amount of crossfade. In many cases the crossfade seems to remove the annoying glitch that occurs during the looping of difficult waveforms, such as strings, voice and piano samples.
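Crossfade looping can be sketched generically: blend the tail of the loop with the audio just before the loop start, so the jump back to the start is continuous. This is an illustration of the technique, not Kurzweil's code.

```python
# Crossfade the last fade_len samples of a loop toward the material that
# precedes loop_start, so wrapping back to the start has no splice glitch.
def crossfade_loop(samples, loop_start, loop_end, fade_len):
    loop = list(samples[loop_start:loop_end])
    for i in range(fade_len):
        g = (i + 1) / fade_len            # fade-in gain, reaching 1.0 at the end
        j = len(loop) - fade_len + i      # position in the loop's tail
        loop[j] = (1.0 - g) * loop[j] + g * samples[loop_start - fade_len + i]
    return loop
```

After processing, the final loop sample equals the sample just before `loop_start`, so the wrap from end to start is the same transition the original audio made into the loop.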
System Options
Since the Kurzweil 250, as an instrument, is available in several different levels, I should stop here and describe just what they are. The "basic" unit comes with 40 internal sounds, including the famous Kurzweil grand piano sound. Also included are the other favorites, such as strings (fast, slow and bowed), acoustic upright bass, acoustic guitar, and drums.
The next level of sophistication is to add Sound Block A, which increases the internal sampled library to 125 sounds. Although many of the new sounds are variations of the original 40, they will be well worth the addition.
---
One man's disc weapon is another man's screaming monkey.
Frank Serafine—Motion Picture Sound Designer/Musician Credits: Tron, Star Wars I and II, Brainstorm, Ice Pirates
For the movie Tron, Frank Serafine altered the shriek of a screaming monkey to create the unique sound of the disc weapon. In movie after movie he's taken sound to the outer limits. By any stretch of the imagination, that's the innovative use of technology.
Nikko Audio has been making substantive contributions to technology for 50 years. We were first with MOS FETs, first with DC servo-lock. And now, for the first time, Nikko's LABO Series of commercial components. Just like all Nikko components, they're built to last.
As a primary manufacturer with demanding double QC aerospace tolerances, it's no wonder Nikko Audio offers a fully transferable, unconditional 3 year warranty.
Nikko Audio and Frank Serafine. Stretching the power of technology to the limit...and beyond.
NIKKO AUDIO
The power of technology.
5630 South Triangle Drive, Commerce, CA 90040
Nikko Audio systems and components are available exclusively through Authorized Nikko Audio Dealers.
For additional information circle #274
SUMMARY OF KURZWEIL
250 DIGITAL SYNTHESIZER SPECIFICATIONS
Keyboard: 88 notes, velocity sensitive.
Channels: 12
Dimensions: keyboard 57×27×9 inches (L×W×H); pedal pod 17¾×11½×4⅝ inches.
Power consumption: AC 110V, 50/60 Hz, 380 W (220V option available).
MIDI: (In, Out, Thru): 16 channels, user-assignable. Each sequencer track can be assigned to a separate MIDI channel. Special MIDI mode slaves one Kurzweil 250 to another.
Inputs: mike/line input (user-sampling optional); two ¼-inch, assignable volume-type pedal jacks; computer port.
Stereo audio output levels: balanced XLR-type 600 ohm, 10V p-p nominal; hi-level, ¼-inch 600 ohm, 10V p-p nominal; low-level, ¼-inch 600 ohm, 1V p-p nominal; Headphone, stereo ¼-inch, 8 to 600 ohm.
Dynamic Range: over 90 dB.
Resident Voices: Concert Grand Piano, Violin Section, Viola Section, Cello Section, Bass Section, Plucked Acoustic Bass, Snare Drum, Bass Drum, Tom-tom (two-octave chromatic), Hi-hat open, Hi-hat closing, Hi-hat closed, Crash Cymbal, Cowbell, Sandpaper, Hammond B-3 Organ (three settings without percussion, one setting with percussion), Trumpet, Baritone Horn, Valve Trombone, Sine Wave, "Endless Glissando," nylon-stringed Acoustic Guitar, Hand Claps, Finger Snaps, Temple Blocks, Grater up, Grater down.
Keyboard Setups: the base unit contains 40 factory-installed keyboard setups, with up to 40 user-definable keyboard setups available. Factory instruments total 30, with 48 user-definable instruments available.
Programmable Functions: variable 256-segment envelope generator; 87-way keyboard split with up to six instrument layers; 24 LFOs; four wave shapes (ramp up, ramp down, square and triangle); continuously variable tremolo/vibrato/amplitude parameters; variable brightness levels, including velocity-to-brightness mapping; variable pitch modulation; five modes of transposition (octave pitch shift, chromatic pitch shift, octave transpose, chromatic transpose, timbre shift); stereo chorus parameters, doubling, flanging, echo, full chorusing; variable delay time (up to 30 seconds), variable detuning (+1,200 cents, -6,000 cents); keyboard dynamics table with 11 different settings available.
Assignable Controls: two assignable levers, three assignable sliders, two assignable on/off foot switches (pod), two assignable external pedal jacks. . . continued overleaf —
Optimize — don’t compromise:
If you demand optimum performance from your tape recording equipment . . .
you need our services!
JRF maintains a complete lab facility ensuring precision relapping and optical alignment of all magnetic recording heads and assemblies. Worn unserviceable heads can be restored to original performance specifications. 24-hour and special weekend service available.
• Broadcasting
• Mastering
• Recording Studios
• Tape Duplicating
New and reconditioned replacement heads from mono to 24-track . . . Many in stock.
For repair or replacement, we’re at your service!
Call or write.
JRF/Magnetic Sciences, Inc.
101 LANDING ROAD, LANDING, NJ 07850 • 201/398-7426
For additional information circle #275
AVC CAN HELP YOU BEST TWO WAYS:
WHAT & HOW
What you need.
AVC has been the place for pros to find the right tools—that mean business—for over ten years. We don't get it done solely on our good looks.
It takes seeing what you need. And it takes looking for—and finding—the gear that does it.
It takes listening to you, hearing what you need, and finding that in tools that will do the job right, the first time and for a long time. From a single patch cord to a turn-key recording studio, AVC will be sure you get WHAT YOU NEED: THE RIGHT TOOLS.
How it works.
Hands on is what it's really all about. And we've had our hands on the best for a long time, so we know HOW it works, and where it works.
How to interface it and how to fix it. We've helped other pros just like you get it working and keep it working profitably for over ten years. Not because it's the stuff we happen to be able to get a hold of, but because it's the kind of tool we'd buy for ourselves. And because we know we'll be working with it for years.
No. We didn't forget the right price.
Buy from AVC at a price that's COMPETITIVE WITH ANYONE'S. And we're champs with financing, for purchase or lease.
AVC is proud to feature these and over 200 other fine audio/video lines:
Harrison Otari Soundcraft JBL
Dolby Neumann Lexicon Tascam
Audio Kinetics Cipher Digital Fostex
AMS Eventide Time Line Valley
People AKG Akai Klark-Teknik
UREI
"We've helped other pros just like you get it working and keep it working profitably for over ten years."
For additional information circle #276
Twin Cities: 2709 E. 25th St., Mpls., MN 55406 (612) 729-8305 • Chicago area: 747 Church Rd., Suite A6, Elmhurst, IL 60126 (312) 279-6580
We're looking. And we see what you need.
We're listening. And we hear you.
KURZWEIL 250 SYNTHESIZER
full or partial range on the keyboard), and then the grand piano would have all those settings. You could also simply assign those settings to just one note of the piano, or split the keyboard and copy them to only the upper half.
The 250 also has three banks of 10 Storage Bins that can be used for storing variations of any of the internal presets, or for just calling up your favorite ones in a studio or live-performance situation. The bins are stored in non-volatile RAM, and can also be saved to floppy disk with the Macintosh as a "Keyboard" setup file. Each bin has 10 presets; by touching any number from 0 thru 9 on the 250's numeric keypad, sounds change instantly.
The 250 has Split and Layer functions that allow you to assign strings on the upper portion of the keyboard, and piano on the lower. In Layer mode the strings can be layered on top of the piano across the entire 88 keys, or over just the middle section. Two different or identical instruments can be assigned into a Split or Layer keyboard, and then assigned to either output group A or B. By using the separate outputs for group A or B, the 250 can be routed to separate channels of the recording console, and then equalized and enhanced separately.
I found the Timbre Shift function very useful for changing the 250's preset keyboard sounds. By simply hitting Edit and then both Transpose buttons, you can down-arrow (scroll) through the transpose options and choose Timbre Shift. If you take the original Kurzweil Grand Piano sound and shift its timbre to a setting of "-3," the result sounds like a Baldwin, while at "6" it sounds bright like a Yamaha Grand. The timbre shift also works great on the brass presets, or on any user samples.
I was also quite amazed at what the Instrument Editor can do. The 250 has a most powerful envelope editor that can edit any parameter of the envelope's attack, decay and release. While working with any of these parameters, they are displayed numerically on the 250's backlit liquid-crystal display.
KURZWEIL SPECIFICATIONS...continued —
Sequencer: 12-track, polyphonic, 7,900-note storage capability (battery-backed RAM). Complete software includes: sequence editing, individual track editing, individual track volume control, looping, quantization on playback, individual note editing/insertion, variable-rate external synchronization, click-track output, simultaneous access to all onboard sounds.
Sound Modeling Program
Variable sampling rate (5 to 50 kHz); total sampling time (10 to 100 seconds), depending upon sampling rates; compression; adjustable loop decay and release rate; automatic natural amplitude envelope extraction and artificial envelope generation; level check meter, clip indicator, trimming and looping functions; multiple samples on each key; pitch and amplitude adjustment; multiple- and single-key multisampling; internal storage (64 sounds, eight keyboard setups in volatile, non-battery-backed memory); external storage capability on Apple Macintosh diskettes via MacAttach™ software.
Sound Block A
Resident sound module containing 15 new voices, plus 84 factory-defined keyboard setups using these sounds alone and in combination with the 30 resident voices in the base unit. New voices include: choir; flute; electric bass (open); electric bass (slap); clarinet; oboe; harp arpeggios; harp glissando; marimba; conga (slap); chimes; vibes; timpani. Factory keyboard setups include: choir, cathedral choir; harp/slow choir; timpani/choir; timpani/harp; oboe/chimes; digital chimes; flute; woodwinds & reeds; marimba; conga & marimba; vibes; clarinet; clarinet & oboe; electric bass/slap bass; dual electric bass; alien harp; piano/flute; guitar & flute; strings & flute; strings & oboe; dual electric bass/rock piano; piano & marimba; piano & vibes; rock and roll piano; cow piano; choir & percussion, and more.
MacAttach
Off-line storage and editing of sound files, keyboard and instrument setups, sequences. Apple Macintosh interconnection cable (transfer rate: 5,670 cps); 3½-inch hardcase disk.
Suggested list price: basic K250 $12,970; Sound Modeling Program $1,995; Sequencer Memory Expansion (adds 4,600-note capacity to the base unit's 3,400-note capacity) $450; Sound Block Module A $1,995; MacAttach software and interface $195; stand $195; plexiglass music rack $75.
An Expander system is also available, and comprises a K250 without keyboard unit. Three versions can be supplied: a basic system ($9,980); base system plus enhanced instrument voices ($11,975); and a base system plus voices, sampling, Sound Modeling and Macintosh software ($13,970).
Kurzweil Music Systems, Inc.,
411 Waverley Oaks Road, Waltham, MA 02154.
(617) 893-5900.
During the editing of an envelope, the value slider's on/off button just has to be activated for you to use the slider to adjust each segment's values. You can also type in numerical values via the keypad. The envelope can be adjusted further by inserting up to 255 segments.
To best understand the 250's envelope, I find it helpful to compare it to that of the Yamaha DX-7. Like the DX-7, the 250 has a separate cutoff level and rate adjust for each segment. There are also 12 different low-frequency oscillator (LFO) shapes for both vibrato and tremolo. When editing the chorus options you can choose chorus, echo, delay, flanging and full chorus effects, and then adjust the detune and/or delay amounts. Again, they can be typed in or adjusted via the front-panel sliders. There is also provision to adjust the 250's velocity sensing, and the filter for varying degrees of brightness. The velocity can be set to open the brightness of the filter, an effect that is very useful on brass and percussive instruments.
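The rate/level segment scheme the 250 and DX-7 share can be modeled minimally as below. This is my own toy rendering with an arbitrary control rate, not either instrument's actual engine:

```python
# Toy multi-segment envelope: each segment ramps linearly from the current
# level to a target level over a given time. The control rate is assumed.
def render_envelope(segments, rate=100):
    """segments: list of (target_level, seconds); returns the sample list."""
    out, level = [], 0.0
    for target, seconds in segments:
        steps = max(1, int(seconds * rate))
        delta = (target - level) / steps
        for _ in range(steps):
            level += delta
            out.append(level)
    return out
```

Editing a segment here means changing one (target, time) pair — the same cutoff-level-plus-rate idea described in the text, just rendered at a coarse rate for clarity.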
**MIDI Control**
MIDI (Musical Instrument Digital Interface) is well implemented on the 250. A front-panel switch labelled *Mode 1* turns the MIDI-control capability on and off. By pressing first *Edit* and then the *Mode 1* buttons, you can down-arrow through the other MIDI modes, such as Mono/Poly, Cycle Mode, Channel Assign, etc. (When used with the recently announced Expander System — in essence a 250 without keyboard unit — the Cycle function enables cycling through the 250's 12 voices and the Expander's 12 voices for a 24-voice piano.) The 250 comes with MIDI-In, -Out and -Thru jacks on the rear panel. MIDI control works exceptionally well, and makes the 250's 88-note, wooden-weighted keyboard ideal as a master MIDI controller. Triggering the 250 via MIDI input works equally well. For those who don't care for a piano-type action, a DX-7 could be used as a master keyboard to trigger the 250.
The 250's *Sequencer* is configured as a 12-track sequencer with a total capacity of 8,000 notes of memory, each track holding a maximum of 12 voices. (Since only 12-voice polyphony is provided in the 250, a little careful planning is necessary.) The sequencer features a built-in mixer, which is useful for mixing the level of each track, so that the 250 can be recorded directly to a digital tape machine. The sequencer has a separate click output located on the rear panel, and can be synchronized to external sources such as 96-, 64-, 48-, or 24-pulses-per-quarter-note clocks. It can also generate and retrieve its own sync tone to and from tape. All the MIDI provisions are available on the sequencer, meaning you can assign a track to a certain MIDI channel and have it play an external MIDI-equipped synthesizer.
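The external-sync figures above reduce to simple arithmetic: at a given tempo, consecutive clock pulses arrive 60 divided by (BPM times PPQN) seconds apart. A quick sketch:

```python
# Time between external sync pulses for a given tempo and
# pulses-per-quarter-note clock rate (24 PPQN is the MIDI standard).

def seconds_per_pulse(bpm, ppqn):
    """Interval between clock pulses, in seconds."""
    return 60.0 / (bpm * ppqn)

# At 120 BPM with a 24 PPQN clock, pulses arrive every ~20.8 ms.
interval = seconds_per_pulse(120, 24)
```

The higher-resolution 96 PPQN clocks simply quarter that interval at the same tempo.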
One of the unit's nicer features is the *Loop in Record* capability, which enables you to pre-determine the length of a sequence, and then loop it to record like a drum machine. This feature is most useful in conjunction with the 250's drum samples, or when using your own with the sequencer. Each track can be quantized individually, the quantization being post-performance. I found the quantization to be excellent, and capable of preserving the live feel by moving the start time of a note while keeping its duration the same — a function called "quantizing the event." The quantization range is from a half-note all the way down to 1/256th of a beat; tempo is adjustable from 10 to 600 beats per minute. Full step editing is implemented, making possible step recording and erasing. Other modes of step editing include individual note velocity, pitch correction, and editing each beat and fraction thereof for each event.
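The "quantizing the event" behaviour described above can be sketched in a few lines: the start time snaps to the nearest grid division while the note's duration is preserved. The data layout here is hypothetical, purely for illustration:

```python
# Quantize note start times to a grid while leaving durations alone,
# which preserves the "live feel" of held and clipped notes.

def quantize_events(events, grid):
    """events: list of (start_beats, duration_beats) tuples;
    grid in beats, e.g. 0.25 for sixteenth notes."""
    result = []
    for start, duration in events:
        snapped = round(start / grid) * grid
        result.append((snapped, duration))   # duration left untouched
    return result

notes = [(0.07, 0.9), (1.12, 0.4), (1.98, 0.5)]
print(quantize_events(notes, 0.25))  # starts snap to 0.0, 1.0, 2.0
```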
The 250's sequencer editor can be used to go in on any track and modify that track's instrument setup file to produce great effects. For instance, on the piano track you could punch in and change the piano's decay envelope to be very staccato for the desired bars, and then punch out. The sustain pedal or pitch could also be modified at any desired section, or for the whole track. With this flexibility you can create amazing sequenced performances, the simplest of which would be a drum sequence for improvising or composing.
Kurzweil has contracted Southworth Music Systems, Inc. (creators of Total Music, a MIDI recorder software package for the Apple Macintosh) to write a special version for the Kurzweil 250. The soon-to-be-released software package will allow graphic on-screen editing of sequences; editing of notes on staves; music scoring and printing; and the ability to read external sync.
I like the Kurzweil 250 digital synthesizer. In its fully-blown state, with all options and Macintosh computer included, the 250 represents a medium-priced sampling keyboard and a most powerful studio instrument. It is good to see that the manufacturer has supported the current updates, and comforting to know that there will be future updates as well.
---
**When Your Reputation Depends On It, There’s Only One Choice . . .**
**Ian Communications — The #1 choice for cassette and video duplications.**
From 100 - 100,000 copies, the same consistent quality goes into every cassette we duplicate.
And there’s more . . .
With our in-house graphics and printing capabilities, you can have more than just a great sounding cassette, you can have a great-looking cassette too!
Call us. You should hear what you’re missing!
**Ian Communications Group, Inc.**
10 Upton Drive Wilmington, MA 01887 (617) 658 3700
For additional information circle #278
The most distinctive feature of the Sanken CU-41 studio condenser microphone is its use of two capsules mounted one above the other, whose outputs are combined electrically in a cardioid-only configuration. The Japanese manufacturer's design premise is an innovative approach to the problems and tradeoffs encountered in the choice of capsule size. In theory, large-diaphragm condenser microphones have superior output versus noise characteristics, at the expense of high-end "colorations" caused by frequency-and phase-response aberrations in the region defined by the wavelength of sound corresponding to the capsule diameter. Smaller capsules, while allowing these resonances to be placed out of the audible range, have less output for a given sound-pressure level. Greater demands are therefore imposed on the system electronics for low-noise performance, both within and following the microphone.
By utilizing the high-output, low-noise capabilities of the larger of the two diaphragms over most of the audio spectrum, and by crossing over to the smaller element for the extreme high-end, Sanken engineers appear to have successfully accomplished their design goal. Also, to judge from published specifications, calibration curves supplied with the microphones, and our own listening tests at the University of Iowa School of Music, the engineers seem to have dealt with the potential problem of response deviations occurring at the wavelength determined by the distance between the transducers.
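The crossover principle described above can be illustrated with a first-order complementary blend: the large capsule supplies the signal below the crossover frequency, the small one above it, and the weights sum to unity so the combined response stays flat. This is our own magnitude-only sketch of the idea (real designs must also manage phase), not Sanken's circuit, and the 10 kHz crossover point is an assumed figure:

```python
# Complementary first-order blend of two capsule signals.

def blend_weights(freq_hz, crossover_hz=10000.0):
    """Return (large_capsule_weight, small_capsule_weight) at freq_hz.
    The two weights always sum to 1, keeping the summed output flat."""
    x = freq_hz / crossover_hz
    small = x / (1.0 + x)     # rises toward 1 above the crossover
    large = 1.0 - small       # falls toward 0 above the crossover
    return large, small

large_w, small_w = blend_weights(1000.0)  # large capsule dominates here
```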
The CU-41 is an elegant example of mechanical craftsmanship. The finish of its satin nickel-plated brass case and stainless-steel mesh screen is directly comparable to that found on the well-known German and Austrian studio microphones. Though physically large—180mm (7.1 inches) long by 50mm (2 inches) in diameter—the unit's shape and general appearance are aesthetically pleasing. The three-pin XLR-type output terminals are gold plated; standard DIN 45 596 48-volt powering is employed. Optional accessories include dual power supplies, elastic shock mounts or conventional stand adapters, and a range of cables. The CU-41 specification sheet states that cable runs as long as 400m (1,200 feet) are possible without signal degradation up to 20 kHz.
This single-pattern microphone is clearly intended for "purist" applications. One can appreciate the difficulties in providing variable patterns and capsule attenuator pads in the two-element design; the engineers also have chosen to dispense with any of the low-frequency equalization capabilities commonly found in competing studio microphones equipped with directional patterns.
Given these obvious and deliberate restraints, we are left with a first-class microphone with excellent sonic and dynamic-range characteristics. In our most recent evaluation session, recorded on May 13, 1985, a stereo pair of the Sanken CU-41s was recorded simultaneously along with pairs of many fascinating and prized "vintage" tube microphones (AKG, Neumann, and Schoeps), contemporary FET models from these same manufacturers, as well as Milab VIP-50 (FET) and Coles 4038 (ribbon) models. A full report on all of these microphones, old and new, is being prepared for a future issue of *R-e/p*; for purposes of this present review, the CU-41 has been compared directly to three of its closest rivals: the AKG C414EB/P48, the Milab VIP-50, and the Neumann TLM170.
The recording and listening sessions for this report have followed the procedures described in the April 1984, December 1984, and February 1985 issues of *R-e/p*: the microphones were set up as identically as possible in a reverberant concert hall; a near-coincident cardioid technique was used to record vocal and piano music in stereo (each pair occupying two tracks on a 24-channel, 15 ips ANT Telcom noise reduction master); and the microphones were directly compared during playback sessions using control room monitor loudspeakers and AKG K-141 stereo headphones. However, two significant changes were made over previous evaluation sessions: a greater variety of music was recorded and subsequently auditioned, and a pair of Klein and Hummel O92 monitor loudspeakers has replaced the previous JBL Model 4320s.
Conditions of temperature, humidity, and atmospheric pressure in Clapp Recital Hall on May 13 of this year were well within "normal" limits: 20°C (68°F) ±2°, 50 to 60% humidity, and 29.86 inches of barometric pressure. Each of the four types of microphones required about the same amount of pre-amplifier gain on the Neve console used throughout the sessions: 40 to 45 dB. The C414EB-P48, VIP-50, TLM170, and CU-41 microphones all exhibited extremely low noise and wide dynamic-range characteristics.
In addition to the Mozart and Gershwin songs faithfully performed again for us by Carol Meyer, soprano, and Patricia Cahalan, pianist, we recorded other musical combinations and works graciously provided for us by faculty and students at The University of Iowa School of Music: Beethoven: *Sonata No. 9 in A for violin and piano ("Kreutzer")*, op. 47; first movement, Professors Leopold LaFosse and Kenneth Amada; Beethoven: *Sonata No. 21 in C ("Waldstein")* . . .

EASTERN ACOUSTIC WORKS!
FIRST PRIZE WINNER OF THE JAPAN AUDIO CONSULTANT SOCIETY COMPETITION
IT REALLY REALLY WORKS! It works in the industrial installation in Tokyo—where the testing took place that resulted in Nippon Onkyoka Kyokai naming the EAW-based Unicus System the best-performing high-level sound system in the world.
And it works in EAW's new FR Series, shown above: FR222, FR102, FR253, FR122, FR153.
The FR Series is our third-generation professional full-range loudspeaker system. It shares in the same advanced technology that helped win the international prize. And it now brings that technology within everybody's reach.
There are important reasons for the extraordinary quality of the FR Series. There's the crossover, for example—the most sophisticated you can get in a compact system. It comes as close as you can get to absolutely flat power response.
It all began with Kenton Forsythe calculating the design parameters with mathematical precision—and then adjusting them flawlessly in extensive and painstaking listening evaluations.
Exact acoustic measurement followed—based on a third order (18dB per octave) filter that achieves precise phase and response coherence. Then, special response-compensation equalizes the drivers.
There's the testing: A random sample of every driver production run is tested for a full hundred hours. Further, each completed system is tested individually, as well. So, no chances are taken with anything going out that isn't up to EAW's full quality standards.
And along with everything else, there are the real wood enclosures of cabinet-maker quality. We use cross-grain, 18-plies-to-the-inch, laminated European birch plywood that doesn't flex—and stands up even under the most rigorous travel conditions.
But the real prize—the one that counts most to us—is knowing that we've built into our product the kind of science and craftsmanship and integrity that makes our sound as close to perfect as it can sound.
And at prices that don't come close to the quality they buy.
EASTERN ACOUSTIC WORKS
59 Fountain Street/Box 111
Framingham, Massachusetts 01701
(617) 620-1478
For additional information circle #279
SUMMARY OF SANKEN CU-41
STUDIO MICROPHONE SPECIFICATIONS
Transducer Type: Two-way, condenser capsules.
Directional Pattern: Cardioid.
Frequency Response: 20Hz to 20 kHz, ±1 dB.
Sensitivity: 0.7 mV at 1 kHz for 74 dB SPL.
Nominal Impedance: 150 ohm or less.
Recommended Load Impedance: 600 ohm or higher.
Equivalent Noise Level (A weighted RMS, IEC 179): 15 dB or less (0 dB = 0.0002 dynes/cm²).
Maximum SPL for 0.5% THD at 1 kHz: 134 dB; 1.0% THD at 1 kHz: 140 dB.
Connector: Gold plated three-pin XLR-type; pin #1 is ground; pin #2 is audio (positive); pin #3 is audio (negative).
Supply Voltage: 48, ±6 V phantom.
Current Consumption: 4.2 mA.
Dimensions: 180 mm by 50 mm (7.1×2.0 inches).
Weight: 582 grams (1.3 pounds).
Optional Accessories: S-41 shock absorbing stand adaptor for use with floor stand or microphone boom (recommended for recording low frequencies); AD-41 stand adaptor; P-41 power supply; SC-F or SC-M microphone cable assembly in varying lengths of cable.
Price: Suggested list price is $1,495, complete with S41 Shock Mount.
Manufacturer: Sanken Microphone Co., Ltd., 2-8-8 Ogikubo, Suginami-ku, Tokyo 167, Japan.
Export Agent: Pan Communications, Inc., 5-72-6 Asakusa, Taito-Ku, Tokyo 111, Japan. 03-871-1370.
U.S. Distributors: Martin Audio Video Corp., 423 West 55th Street, New York, NY 10019. (212) 541-5900.
Studio Supply Co., Inc., 1717 Elm Hill Pike, Suite B-9, Nashville, TN 37210. (615) 366-1890.
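The headline specifications above can be re-expressed in more familiar terms. Since 94 dB SPL corresponds to 1 pascal, the quoted 0.7 mV at 74 dB SPL (0.1 Pa) works out to 7 mV/Pa, and the noise and distortion figures bracket the usable dynamic range:

```python
import math

# Convert the quoted sensitivity to mV/Pa and dBV/Pa.
output_v = 0.7e-3                          # 0.7 mV at 74 dB SPL
pressure_pa = 10 ** ((74 - 94) / 20.0)     # 74 dB SPL = 0.1 Pa
sensitivity_v_per_pa = output_v / pressure_pa
print(round(sensitivity_v_per_pa * 1000, 1))            # 7.0 (mV/Pa)
print(round(20 * math.log10(sensitivity_v_per_pa), 1))  # -43.1 (dBV/Pa)

# Noise floor (15 dB SPL) up to the 0.5%-THD ceiling (134 dB SPL):
print(134 - 15)  # 119 dB of usable dynamic range
```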
Your Recordings Can Only Sound as Good as the Cables Used to Record Them
Introducing Prolink™ High Performance Studio Cables by Monster Cable®
Many people in the recording business used to think that cables were just cables. And in fact, many of us still do.
A Sound of their Own. Many engineers have found that the opposite is true. They are discovering that ordinary cables have “a sound of their own” and distort music recording and reproduction in ways that we were never even aware of. Critical areas such as clarity, depth of bass response, quickness of transients, and the “naturalness” and “presence” of voices and instruments, are all lost through conventional cables.
A Monster New Technology in Cable Design. Monster Cable has shown music listeners worldwide that the sound of their playback systems could be significantly improved simply by changing their cables to the Monster. Now you can obtain an improvement in the sound of both recording and playback that will surprise the most critical and skeptical of engineers, simply by switching from your current connecting cables to Prolink by Monster Cable.
Come Hear the Monster. We invite you to hear our entire line of microphone, speaker, and studio hookup cables. Put them through your most critical listening and durability testing. You’ll discover just how good your recordings can really sound.
For your free brochure please call or write Paul Stubblebine, Professional Products Division Manager.
Monster Cable® Products, Inc.
101 Third Street, San Francisco, CA 94107
415 777-1865 FAX 415 777-1864 MCSYUI
Available on GSA Contract.
In Canada contact AKG Philips
Mr. Carlo Ralletti 416-292-3161
R-e/p 202, October 1985
highs. The "crisp" versus "velvety" qualities of these two microphones was evident while auditioning vocal and violin sounds, but the very slight differences were subtle indeed. These perceptions would seem to tip the scales by a very minute amount to the TLM170 in the "warmth" department, for what that is worth. But, expressed in a different way, the TLM170 could be characterized as having more low-frequency output than the CU-41, with a slightly recessed or withdrawn upper-midrange response, and perhaps offering a bit less extended extreme high-frequency response. Yet the CU-41 and the TLM170 sounded more alike than any of the microphones reviewed here.
The AKG C414EB/P48 is a strong competitor. Again, the main differences between its sound properties and those of the CU-41 are found in the higher frequency regions. The C414's upper-midrange response is more pronounced than that of the CU-41, while its extreme top-end does not appear to be as extended. By comparison, the C414 has a "confined" or "contained" quality. My perception of "containment" here is not necessarily a critical one — the C414EB/P48 could be a good choice for enhancing a given recording situation where a "tighter" effect is desired. With its four polar patterns (including hypercardioid), attenuation and equalization capabilities, and relatively unobtrusive and elegant appearance, the C414EB/P48 microphone is quite versatile — and not nearly so "bright" as many earlier AKG models.
Since the Milab VIP-50 is even more expensive than the Sanken, one should expect great things from this Swedish import. The VIP-50 is a worthy competitor in all respects except for three: cost, physical appearance, and a rather distractingly "bright" sonic quality. (I have been assured by the importer of Milab microphones that our evaluation models were pre-production units of a design that is likely to evolve further, but I do feel entitled to expect a finish other than black crinkle paint and even perhaps a pleasing — yet distinctive — shape for a microphone as expensive as this one.)
It was difficult to judge just how extended the high-frequency response of the VIP-50 may actually be, owing to the emphasis in the 5 to 10 kHz region. I suggest that this "bright" characteristic should be brought under greater control before full production is undertaken. These early examples of the VIP-50 simply have too much of a "condenser sound" for my liking, especially since they are being introduced at a time when most other condenser-microphone manufacturers are striving for as uniform frequency response as possible.
In one category, the CU-41 and the VIP-50 are directly comparable: they are quite expensive (around $1,500 each). The TLM170 costs about one-third less; and two C414s can be bought for about the price of one CU-41 or VIP-50. As we have noted, the Sanken CU-41 has no switchable pattern, attenuation, or equalization settings; the other three microphones are all very versatile in these categories. Like the C414EB/P48, the CU-41 apparently contains an output transformer, while the VIP-50 and TLM170 are transformerless. The VIP-50 has an additional valuable feature: it may be switched between a microphone-level or a line-level output circuit, both electronically balanced.
The Sanken CU-41 microphone represents an uncompromising concept based on a single unidirectional pattern, and therefore it should appeal to "purists" and "minimalists" of the cardioid persuasion, just as the Bruel and Kjaer 4000 (omnidirectional) and Schoeps MK41 (hypercardioid) designs have attracted their respective loyal followings.
I am indebted to Jan Hebel of Martin Audio Video Corporation, in New York, for his assistance in arranging the loan of the Sanken microphones for this evaluation.
"If I've learned anything in twenty years in this industry, it's this: In any studio installation, quality gear is never the whole story. The quality of the sound . . . that's the bottom line."
Wes Dooley
audio engineering associates
1029 North Allen Avenue, Pasadena, CA 91104
(818) 798-9127 (213) 684-4461
The newest addition to the company's line of compressors/limiters, the Model 166 Dual Channel Dynamics Processor, combines stereo or dual-mono operation with some familiar features, such as OverEasy Compression, PeakStop Limiting, and Sidechain control, in an aesthetically-pleasing and low-cost package. In addition, an adjustable Noise Gate has been incorporated into each channel as a new function.
This hands-on report discusses the operation and usefulness of these features, and hopefully will provide the reader with insight towards the benefits and cost trade-offs designed into this device.
**Input/Output Characteristics**
Unfortunately, the Model 166 does not provide XLR-type input/output terminations. Instead, PCB-mounted, three-circuit (or stereo) ¼-inch phone jacks are used, apparently to reduce manufacturing costs. Inputs are balanced electronically, with differential amplifiers having an input impedance of 25 kohms. Outputs are unbalanced, and use line amplifiers to drive loads greater than 600 ohms. (Oddly, neither the unit's specification sheet nor operational manual provides a value for actual output impedance; both sources only state that the output impedance is "low.")
Each channel also has a Sidechain Input leading to a signal-detector circuit for external control of the compressor and gating. The Sidechain Input is unbalanced and has an impedance of 6.8 kohms; termination here is a two-circuit (or mono) ¼-inch phone jack. To help avoid any damaging mistakes in wiring, dbx wisely screened circuit connection symbols on the rear panel, next to the audio and sidechain jacks, which identify the tip, ring and sleeve designations.
The operational manual provides a thorough explanation of balanced and unbalanced hookups. For maximum hum rejection, dbx recommends that the user avoids common grounding of the Model 166's inputs and outputs. Instead, the manual suggests: "The best starting point is to ground the shield of the input cable and the source device (leaving it unconnected to the 166) and to ground the shield of the output cable to the ground of the 166 (leaving it unconnected at the receiving device)." For balanced sidechain hookups, the company states that most balanced sources will work properly without having to connect the low or minus side of balanced signal to the circuit ground (sleeve) of the detector circuit. However, it does offer one word of caution: "Some sources require the dotted connections shown (see Figure 1) — 'transformer-isolated' balanced outputs. We recommend making this connection *only if necessary* for your installation, because some active balanced and ground-referenced outputs may be damaged by doing so."
The quoted maximum input level to the device is +24 dBm, and the maximum output level from the device is +21 dBm. Each channel features a hardwire bypass switch for easy comparison of the processed signal to the original sound.

In the early morning hours of November 15, 1984, tragedy struck the Bethany Lutheran Church of Cherry Hills, Colorado. A faulty electric organ was blamed for a multiple alarm fire that claimed much of the structure. Thankfully no one was injured in the blaze that caused over one million dollars in damage.

In the ensuing clean-up operation a Crown amplifier was discovered under charred timbers. Owing to the intense heat of the fire, the chassis had warped and the AC cord was a puddle of wire and rubber.

The amplifier found its way to John Sego at Listen Up, Inc. of Denver. Armed with insatiable curiosity and a knowledge of Crown dependability, John installed a new AC cord and proceeded to verify operation on the test bench. The amplifier met factory specifications in all functions.

In the photo above we offer you another glowing report of Crown durability.

CROWN

1718 W. Mishawaka Road, Elkhart, IN 46517

(219) 294-8000

For additional information circle #282
**Controls Description**
Each channel has the following process controls on the front panel: Gate Threshold control with Off position, and Gate Fast/Slow release rate (labelled Rel Rate) switch; OverEasy Compressor Threshold and Ratio controls; Peakstop Level control; Sidechain Monitor in/out switch; Output Gain control; and a Bypass in/out switch. Indicators include: Gate On, Gain Reduction (via an eight-segment LED display), Sidechain Monitor On, and Bypass On. A Stereo Couple in/out switch enables the controls for channel #1 to serve as the master for both sections in stereo mode; in this situation the slave channel #2 controls, except for Sidechain Monitor and Bypass, are disabled. An LED indicator provides visual confirmation of this configuration.

The noise gate features two front-panel controls: Threshold and Rel Rate. Gate Threshold is variable from -60 to +10 dBm, and the user can disable the noise gate by turning this control counterclockwise to the Off position; an LED lights whenever the noise gate shuts the signal off. The Rel Rate switch provides a fixed 10 dB per second release rate in the Slow mode, and a fixed 1,000 dB per second rate in the Fast mode. The gate attack time is internally set to two milliseconds for a 28 dB signal rise, while the gate attenuation is constant at 40 dB.

The OverEasy compressor section is provided with six front-panel controls: Threshold, Ratio, Output Gain, Peakstop Level, Sidechain Monitor, and Bypass. The Threshold range is adjustable from -40 to +20 dBm, while compression Ratio can be varied from unity (1:1) to infinity-to-one (output then being constant, irrespective of input dynamics). Maximum compression is greater than 60 dB. Attack and release times are program dependent, and have been factory set.

The Output Gain control ranges from -20 to +20 dBm and occurs prior to the Peakstop Level circuit (see the circuit diagram of Figure 2). The Peakstop Level control sets an absolute limit on final output peaks, and is user-adjustable from 0 to +22 dBm. The manufacturer recommends that the control be set one to two dB below the chosen maximum level, to allow some headroom for signal "rounding," a process that reduces the higher-order harmonics found in conventional clipper circuits. An intensity-calibrated LED glows from dim to bright as the output signal further exceeds the desired Peakstop Level setting.

The Sidechain Monitor switch (when enabled) directly connects the Sidechain Input to the Audio Out, a feature that allows the user to monitor the sidechain signal during setup. An LED verifies selection of this mode. The Bypass switch and confirming LED provide a hardware bypass, connecting Audio-In directly to Audio-Out. Signals will pass through this switch even in the absence of AC power.

**Operational Comments**

The dbx 166 Dual-Channel Dynamics Processor is packed with features, especially when one realizes the recommended retail price is only $549. Technology like the OverEasy Compressor and PeakStop Limiter, originally developed for the dbx Model 165, has been incorporated in this efficient and economical package. All controls were smooth and easy to read, even in a dimly lit studio control room. Control voltages, not audio signals, pass through these front-panel potentiometers, thereby eliminating any potential "noisy pot" problems in the future. The unit was found to be quiet in operation, and did not add any appreciable noise when inserted in a console module signal path.

As an application, the Model 166 was used to compress several mono mixes during audio layback from an Ampex ATR-124 to a Sony BVH-1100 one-inch videotape machine. The noise-gate feature worked quite well, considering the fact that only the threshold level is adjustable and just two fixed release rates are available. For some reason, a section of new Scotch 250 tape (which normally provides satisfactory results) developed a nagging print-through problem, a factor that was very noticeable at the start of each mix. The Model 166's gate immediately cleaned up this objectionable noise, with only a minor threshold adjustment.

The unit's OverEasy Compressor section worked just as expected; audible results were just as smooth sounding as the output from the more expensive Model 165A (mono $699; stereo $1,398 — Editor). The Gain Reduction meter presented a good indication of gain-reduction action, although it is a poor substitute for the accurate three-mode VU meter featured on the Model 165A. It takes some time to get used to its readings, since the eight LED segments are arranged in sort of a descending logarithmic pattern, from 1 dB to 30 dB. The same meter also doubles as an indicator of noise-gate attenuation. Therefore, to properly measure compressor gain reduction, the noise gate must be turned off; otherwise, a combination reading of gating and compression is always present.

STU STU STUDIO.

When you hear the fidelity and accuracy of the AKG K 240DF Studio Monitor Headphones, you’ll know why it’s become a standard for recording engineers and professional musicians around the world.

This latest version of our well known K 240 (now K 240DF) has been created to meet a recently proposed IRT (Institute for Broadcast Technology) international standard. The K 240DF establishes a uniform sound quality free from environmental variables. As opposed to sound from loudspeaker monitors, that is colored by variations in control room design, the K 240DF is unchanging and reliable.

Each K 240DF is tested in a diffused sound field to arrive at a headphone design with a flat frequency response (+/-2dB) and matched sensitivity. This professional headphone is close to perfection—without coloration or distortion. The self-adjusting headband supports the circumferential ear cups, each containing hand selected large dynamic moving-coil transducers and acoustic filters. Minimum weight is well distributed for maximum comfort over long-time wear.

The AKG K 240DF Studio Monitor Headphone, a solid favorite, is now just right for your Studio!

AKG Acoustics

77 Selleck Street

Stamford, CT 06902

© AKG 1985 © Akustische und Kino-Geräte GmbH, Austria
Sorely missed are the three-segment Threshold LEDs present on both the Model 160X and 165A — these useful indicators would provide a visual representation of the action below and above the compression knee. (Perhaps they had to be eliminated for lack of front-panel space.)
The Output Gain and Peakstop Level controls were quite accurate, and performed as expected. The sound of the peak limit or clipper circuit was free of harshness, and its indicator represented the limiting action correctly. Setting Peakstop to its maximum effectively put it out of the signal path; the loudest transients did not once trigger the limiter.
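Taken together, the gate depth, compression ratio, and PeakStop ceiling quoted in this review define a simple static input/output law. The sketch below is our own illustration with hypothetical example settings and a hard knee (the actual OverEasy knee is soft), not dbx's circuitry:

```python
# Static output level for one input level, chaining the three stages:
# a fixed 40 dB gate cut below the gate threshold, ratio-based gain
# reduction above the compressor threshold, and a PeakStop ceiling.

def static_out_db(in_db, gate_thresh=-50.0, comp_thresh=-10.0,
                  ratio=4.0, peakstop=20.0, gate_depth=40.0):
    if in_db < gate_thresh:
        out = in_db - gate_depth                 # gated: fixed 40 dB cut
    elif in_db <= comp_thresh:
        out = in_db                              # linear region
    else:
        out = comp_thresh + (in_db - comp_thresh) / ratio  # compressed
    return min(out, peakstop)                    # PeakStop absolute limit

print(static_out_db(-60.0))  # -100.0 (gated)
print(static_out_db(10.0))   # -5.0 (4:1 above a -10 dB threshold)
```

Note also what the fixed release rates imply: at the Slow 10 dB-per-second rate, the full 40 dB gate depth takes four seconds to recover, while the Fast 1,000 dB-per-second rate takes just 40 milliseconds.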
Although the side-chain function was not tried, the unit's Operational Manual does provide some useful application ideas, including de-essing, broadcasting, anticipatory compression, keyed gating, and selective gating. For de-essing application, the manual recommends: "In the absence of a de-esser, small amounts of high-frequency (6 to 10 kHz) boost in the side-chain path frequently will help in the processing of vocals that may have been brightly equalized beforehand, or that may suffer from prominent sibilance ('ess' sounds)." For broadcasting: "A pre-emphasis filter network placed in the sidechain of 166 processing pre-emphasized audio permits higher average signal levels to be run within the headroom limits of the broadcast chain."
For anticipatory compression (shown in Figure 3), the manual states: "If you feed the program directly into the sidechain and send the audio signal through a delay before the 166 audio input, the 166 can 'anticipate' the need for gain change . . . Such a special effect sounds similar to the dynamic-envelope inversion you may be familiar with from reverse tape playback."
For keyed gating (Figure 4): "Controlling the gating of one signal by another permits perfectly in-sync playing and overdubbing among individual instruments or precise sonic augmentation — 'fattening' — of a weak solo . . . An example of the latter would be using the drum signal to key an oscillator which is set at an appropriate frequency to 'tune' and 'punch up' the drum sound."
For selective gating (Figure 5): "You can also do frequency-sensitive gating [to] tune the response of the gating action. If you're gating a kick drum, for example, in a track with lots of leakage, you can tune in to the frequency of the kick with an EQ and the gate will respond only to the drum."
In spite of all the favorable features included in this compact unit, there are some packaging drawbacks. Curiously, no power on/off switch or indicator is provided. Perhaps dbx felt that since the unit only draws 15 watts (and, post-oil-embargo, power is again cheap) Model 166 owners wouldn't mind a little inconvenience for some additional product-cost savings. Also, there are no plug-in replaceable fuses (external or internal). A close examination of the easily accessible circuit board, however, did reveal soldered fused resistors leading from the +15 VDC power-supply rails to the remaining circuitry. Finally, the top and bottom panels flex easily under certain placement pressures. While for rack-mount applications this flexing does not present a problem, stacking several external devices on top of the Model 166 could possibly crush some of the vulnerable circuit components mounted inside.
**Summary**
The dbx 166 Dual-Channel Dynamics Processor offers excellent value for the price. Overall the unit performed as promised, equaling the performance found in more expensive products. Hopefully, the manufacturer will take into consideration some of the criticisms mentioned here when they design future products. The provision of ¼-inch phone jacks might be satisfactory for "semi-pro" equipment, but XLR-type connectors should be offered as an option for professional audio users. Also, the cost of a fuse holder and power on/off switch would not add much to the bottom line, yet the convenience to the user would be greatly increased.
MANNY'S PROFESSIONAL AUDIO DIVISION
NEW YORK CITY'S LARGEST MUSIC DEALER HAS EXPANDED TO INCLUDE A FULLY OPERATIONAL PRO AUDIO DIVISION. COMPLETE WITH DEMONSTRATION FACILITIES AND OUR SPECIALIZED SALES STAFF, WE CAN ASSIST YOU IN SELECTING ANYTHING FROM MICROPHONES TO A COMPLETE MULTI-TRACK RECORDING STUDIO. WE SHIP WORLDWIDE. WE'RE JUST A PHONE CALL AWAY.
MANNY'S MUSIC
156 WEST 48th STREET
NYC, NY 10036
212 819-0576
EMT 252 DIGITAL REVERBERATION SYSTEM AVAILABLE FROM GOTHAM AUDIO
The new system is said to provide very natural reverberation through digital processing using high resolution 16-bit analog/digital conversions and a 32 kHz sampling frequency. In addition to generating reverberation effects, the unit provides three delay-based effects: straight delay, loop echo and chorusing effects.
The reverberation program provides up to nine individual reflections before the reverberant signal, with individually adjustable reflection time and amplitude. Frequency response of the system is adjustable in four separate bands. Reverberation is generated using the entire audio frequency range of the source signal, resulting in a very natural-sounding output. Four reverberation algorithms are implemented in the Model 252: a main reverberation program; a reduced-bandwidth (EMT 250) program; a Doppler-shifted program; and a nonlinear decay program.
The delay mode provides up to 480 milliseconds of delay for three individual taps, plus a "cluster tap" of six individual taps in fixed relation to one another. Echo mode provides four "loops" of up to 440 milliseconds of delay, with feedback amplitude adjustable on each. The chorus mode provides up to four voices from one input source, with control of depth and rate of the chorus effect.
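The multi-tap delay and loop-echo modes described above can be modeled with a short delay-line sketch. The tap times, gains, and feedback amount below are our placeholder values for illustration, not EMT's parameters:

```python
import numpy as np

def multitap(x, fs, taps):
    """Sum of delayed, gained copies of x; `taps` is a list of
    (seconds, gain) pairs, e.g. three taps of up to 480 ms."""
    d_max = int(max(t for t, _ in taps) * fs)
    y = np.zeros(len(x) + d_max)
    for t_sec, g in taps:
        d = int(t_sec * fs)
        y[d : d + len(x)] += g * x
    return y

def loop_echo(x, fs, delay_s=0.44, feedback=0.5, repeats=6):
    """Regenerative echo: each pass around the loop is delayed by
    delay_s and scaled by the feedback amount, like a tape loop."""
    d = int(delay_s * fs)
    y = np.zeros(len(x) + d * repeats)
    for k in range(repeats + 1):
        y[k * d : k * d + len(x)] += (feedback ** k) * x
    return y
```

Feeding an impulse through `loop_echo` shows the geometric decay that the 252's adjustable feedback amplitude controls.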
The processing system is housed in a 19-inch rack mounting enclosure, with control of all functions and display of all settings appearing on a separate eight- by 12-inch remote, which can be placed up to 300 feet from the processor.
The remote provides 128 memory presets covering the reverberation and effects programs: half are preset at the factory, and half are programmable by the user.
Presets are stored in battery-backed memory: 16 fixed and 16 user presets for the main reverberation program, and eight fixed and eight user presets for each of the six other reverberation and effects programs. Single-button recall of random or sequential presets is provided, as well as means to copy settings from one preset to another with desired modifications. Factory-fixed settings can be converted to user-modifiable settings, optionally allowing all 128 memory locations for user presets.
The EMT 252 digital reverb system has a professional net user price of $16,500.
GOTHAM AUDIO CORPORATION
For additional information circle #287
SYMETRIX MODEL 544
FOUR-CHANNEL NOISE GATE AND EXPANDER
Designed specifically for professional studio and live-performance applications where distortion-free gating is mandatory, the 544 is said to offer a maximum amount of processing power. The new unit encloses four channels in 1½ inches of rack space while, at the same time, providing a full complement of user-variable expander/gate controls including attack time, release time, range/ratio, and threshold.
Each channel can be set to trigger internally, or keyed from external input signals for special effects. Gate mode response has been optimized for highly transient material such as drums and percussion. The downward expander is described as being exceptionally linear, and doubles as an expander and noise reducer. Intelligent automatic time control circuitry works in conjunction with the manual release time control to eliminate low-frequency distortion in both gate and expand modes.
In addition to its variable parameter and mode-select controls, each channel provides the user with a five-segment LED gain-reduction display for visual indication of the unit's performance.
Suggested retail price of the Model 544 is $549.
SYMETRIX
For additional information circle #288
NEW STUDIO DOMINATOR THREE-BAND LIMITER FROM APHEX
The Studio Dominator™ is an intelligent three-band limiter with a proprietary circuit that varies the threshold for limiting, unlike traditional "dumb-over-threshold" devices. A unique Transient Enhancement Circuit is said to actually increase the perception of transients, while maintaining absolute peak limiting.
Tuneable crossover frequencies, plus high- and low-frequency drive controls allow the
"We're Committed To Building A Quality Product That Really Works..."
"We're Committed To Being Number One."
Patrick Quilter
Vice President/Engineering
QSC Audio
Barry Andrews
President
QSC Audio
Commitment runs deep at QSC. We're dedicated to continually improving our products and our company. For us, building a better product and backing it up with top-notch customer support is the key to success. It's as simple as that.

The QSC linear output circuit is one outcome of our commitment to design excellence. Its three-stage signal path optimizes the sonic advantages of traditional push-pull amplifier circuits. By combining a multiple-level DC power supply with conventional power transformers and rectifiers, we've improved on previous efforts at increasing heat efficiency—anticipating the benefits of "Class D" and "smart power supply" amplifiers, without relying on unproven technology. This has enabled us to build a power amp that is more compact and reliable, and which delivers unmatched audio performance.

The diligent research that went into our Series Three paid extra dividends in the development of our economical Series One amplifiers. Both series feature our patented Output Averaging™ short circuit protection, dual isolated power supplies, calibrated gain controls, premium components throughout, and complete rear panel connection facilities that include balanced XLR and 1/4" jacks, octal sockets for active and passive input modules and a full selection of output connectors.

Our dedication to design excellence goes hand-in-hand with our commitment to providing full-service support on all our products. When you put it all together, QSC amplifiers reflect the commitment to leadership, service and design innovation that has guided us since we were established in 1968. For more information contact: QSC Audio Products, 1926 Placentia Avenue, Costa Mesa, CA 92627, (714) 645-2540.
For additional information circle #289
user to create different effects. Limiting can be preshaped to match the medium's saturation characteristics for maximum signal-to-noise performance.
Because of its unique design, the new unit is described as being ideal for use in any situation where clipping is a problem, such as digital audio, disk mastering, video post production, and optical film. According to Marvin Caesar, Aphex President, the device could effectively eliminate watching the clash lights when mixing in Dolby Stereo for optical release, and will still allow a "transparent sound with the perception of natural transients."
Caesar also stated that the Studio Dominator is the perfect companion to the company's Compellor compressor/leveler/limiter, which maintains steady average level, without coloration. The Dominator is a peak processor that can be used to achieve different sounds and effects. The combination of the two units "provides the ultimate flexibility in dynamics control." Caesar offers
**APHEX SYSTEMS LIMITED**
For additional information circle #292
---
**MEYER DISTRIBUTING JAPANESE ATL STAGE MONITORING CONSOLE**
Meyer Sound Laboratories, Inc. will be distributing a mid-sized stage monitor console that is a combined effort of Meyer and its Japanese distributor, Acoustic Technical Laboratory (ATL). The console will be available in limited quantities for users interested in high fidelity stage sound.
The console configuration is 24 by 8, with an additional four auxiliary mixes. All 12 outputs have large LED metering that may be switched to VU or peak reading. Any of the 12 mixes may be re-assigned in any order to the eight main outputs via a fast electronic matrix assignment system.
Each transformerless input channel has switchable phantom power, a highpass filter, and four-band "true complementary" EQ. A switching system allows a master fader to control the send to the matrix. Insert points and direct outs are furnished for each input.
Monitoring solo points are provided at the input, summing, and output stages, and peak indicators at each of these stages are said to ensure distortion-free operation. Talkback can be assigned to individual outputs for improved musician-mixer communication. Two auxiliary inputs can be used to route effects to any output.
**MEYER SOUNDS LABORATORIES**
For additional information circle #293
---
**BLACK AUDIO DEVICES LAUNCHES MICROPHONE STAND REPAIR AND REPLACEMENT PARTS**
Swivel Levers replace the "dumbbells" on AKG-type booms. Original parts can be easily lost, leaving the boom virtually useless; yet factory replacements are virtually impossible to find. The Swivel Levers are a precision-machined replacement that, according to Black Audio, will neither fall off nor bend, like the factory parts.
Thread Strips enable the loose fit to be taken out of the threads on mike stands, booms, and accessories, and restore a like-new snug fit to threads that have become loose or stripped from age, long-term use, or damage. They come in packages of 12, and are available in four precision thicknesses.
Suggested list price of the Swivel Lever is
---
**IS YOUR EDUCATION COMPLETE?**
**C~duce** (c-diū-s), v. To lead sound engineers astray from habitual use of microphones, stands and isolation booths. To induce commitment to studio quality sound with maximum separation at a cost effective price. To persuade abandonment of setting-up problems and clutter in the studio or on stage, by attractive thing or quality.
**C~duceable** (c-diū-sāb'l). a. Drums. Congas. Bongos. Timbales etc.. Acoustic Guitar, Mandolin. Lute. Balalaika. Violin. Cello. Double Bass. Harp. Banjo. Piano. Harpsichord. Celeste. Dulcimer. Zither. Speaker Enclosures. Solid Electric Guitars et cetera.
**C~ducees** (c-diūsi-s), n. Many prominent musicians in all aspects of the music industry (i.e. jazz, folk, country, classical or rock). As in Chick Corea, The Gatlin Brothers, Crystal Gayle, Mahavishnu Orchestra, Toto, Mobile Studio, Abbey Road Studio, Sydney Opera House, Resorts International, Texas Hall Of Fame, Oberlin College of Music, English, Dutch, German, Swiss and Danish Radio, B.B.C. T.V., et al.
**C~ducer** (c-diū-sar). n. Studio quality contact microphones.
---
**C-TAPE DEVELOPMENTS LIMITED**
P.O. Box 1069, Palatine, IL 60078
(800) 562-5872 or (312) 359-9240 Telex: 280502
For additional information circle #290
$6.00, and Thread Strips $2.00 per pack.
**BLACK AUDIO DEVICES**
*For additional information circle #294*
---
**"HUM-KILLER" FROM HAASE NOW AVAILABLE IN U.S. FROM ESL**
Until now, it has been customary to use notch filters or equalizers to tackle the annoying hum interference that originates from ground loops, defective equipment, light dimmers, and the like. However, such filters do not reduce the interference harmonics, while phase distortion and insufficiently narrow bandwidth negatively influence the sonic quality.
The Haase Hum Killer uses very narrow bandwidth filters (±3% of center frequency), not only for the fundamental but up to and including the 13th harmonic. Attenuation of the fundamental and of the even and odd harmonics is independently adjustable. The stereo unit is available for 60 Hz or 50 Hz fundamental frequencies.
Attenuation ranges are: 40 dB at 50 Hz to 25 dB at 650 Hz; or 40 dB at 60 Hz to 22 dB at 780 Hz. Phase shift is quoted to be less than 10 degrees between channels and frequency response throughout the passband ± 1.5 dB, 20 Hz to 20 kHz.
Front panel functions include linear/filter selector (for A/B comparison); mute switch for even or odd harmonics filter; and attenuation controls for F1, F-even, and F-odd.
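A simplified software model of such a harmonic notch-filter bank (our sketch, not Haase's analog circuit) cascades biquad notches with ±3% bandwidth at the fundamental and its harmonics up to the 13th:

```python
import math
import numpy as np

def notch(x, fs, f0, rel_bw=0.06):
    """Biquad notch at f0; rel_bw=0.06 gives ±3% bandwidth (Q ≈ 16.7)."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) * rel_bw / 2.0      # sin(w0)/(2Q) with Q = 1/rel_bw
    b0, b1, b2 = 1.0, -2.0 * math.cos(w0), 1.0
    a0, a1, a2 = 1.0 + alpha, b1, 1.0 - alpha
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y[n] = yn
    return y

def hum_filter(x, fs, fundamental=60.0, harmonics=13):
    """Cascade notches at the fundamental and every harmonic up to
    the 13th, skipping any that land above the Nyquist frequency."""
    y = np.asarray(x, dtype=float)
    for k in range(1, harmonics + 1):
        if k * fundamental < fs / 2:
            y = notch(y, fs, k * fundamental)
    return y
```

Because each notch is so narrow, a 60 Hz hum and its harmonics are deeply attenuated while program material away from those frequencies passes nearly untouched.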
**ELECTRONIC SYSTEMS LABORATORIES INC.**
*For additional information circle #295*
---
**RPG DIFFUSOR SYSTEMS UNVEIL QRD-734 MODULAR ACOUSTICAL DIFFUSORS**
The new low-cost, space saving model measures 23½ by 47½ by 8¼ inches, and weighs 30 pounds. The diffusors can be wall-mounted in clusters, providing horizontal and vertical diffusion, or ceiling-mounted in standard suspended grid systems.
The RPG Diffusor System is a reflection phase grating room treatment, previously unavailable commercially, that is said to enhance the acoustics of any critical listening or performing environment by uniformly diffusing sound, without absorption or attenuation.
Each QRD-734 panel diffusor has a suggested pro-user price of $245; prices for a complete control-room installation, requiring 48 square feet or more, begin at $1,470.
**RPG DIFFUSOR SYSTEMS, INC.**
*For additional information circle #296*
---
**SOUND DESIGNER SOFTWARE FOR MACINTOSH CONTROL OF EMULATOR II SYNTHESIZER**
The computer music system includes an interface that allows sampled sounds to be transferred between the Apple Macintosh and the E-mu Systems Emulator II at 500 kbits per second. Sound waveforms are displayed on the Mac’s high-resolution screen. The software provides extensive sound editing capabilities, including Macintosh-style “cut-and-paste” editing.
The waveform display can be magnified to show extremely fine detail, with editing accuracy to 33 microseconds. Calibration scales provide exact readouts of time and amplitude values at any location in the waveform, and the waveform display can be horizontally and vertically scrolled.
Sound Designer also includes Fast Fourier Transform-based frequency analysis and modification of sounds; digital equalization; enveloping; digital mixing and digital compression; as well as a variety of other digital waveform processing functions for modifying sampled sounds and creating unique sounds.
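The FFT-based frequency analysis can be illustrated with a short numpy sketch; the windowing, peak-spacing rule, and function name are our assumptions for the example, not Digidesign's code:

```python
import numpy as np

def dominant_partials(samples, fs, n_peaks=3):
    """Return the frequencies of the n_peaks strongest spectral
    components, the kind of readout an FFT analysis provides."""
    window = np.hanning(len(samples))            # reduce spectral leakage
    mag = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    peaks = []
    for i in np.argsort(mag)[::-1]:              # strongest bins first
        # skip bins adjacent to an already-found peak (same partial)
        if all(abs(int(i) - j) > 2 for j, _ in peaks):
            peaks.append((int(i), float(freqs[i])))
        if len(peaks) == n_peaks:
            break
    return sorted(f for _, f in peaks)
```

Analyzing a sampled sound this way yields the partial frequencies a user might then boost, cut, or resynthesize.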
Direct digital synthesis (including FM and waveshaping) can be performed on the Mac, and the resulting sounds transferred to the Emulator II for playback.
**DIGIDESIGN, INC.**
*For additional information circle #297*
NEW MODEL 450 EIGHT-INPUT MIXER FROM FOSTEX
Designed for both multitrack recording and sound reinforcement/live recording, the Model 450 features eight balanced mike inputs, each with phantom powering, in-line monitoring (so that tape returns do not occupy line inputs), solo on all inputs and four program busses; three-band parametric-type equalization; a dedicated stereo master bus, and accessible patch points on the top panel.
The eight input channels, each with a direct output, feed four main program busses which, in turn, are mixed to stereo on the master bus. The two auxiliary busses (one stereo, the other mono) are independent, and may be used in a number of pre-EQ, post-fader configurations. The monitor/headphone bus is also independent.
Suggested retail price of the Model 450 is $995.
FOSTEX CORP. OF AMERICA
For additional information circle #290
SANKEN UNVEILS FOUR NEW STUDIO CONDENSER MICROPHONES
• The CU31 is a low-impedance, cardioid-pattern condenser model with a quoted dynamic range of 129 dB, frequency range of 20 Hz to 18 kHz, and self-noise of 19 dB or less. It incorporates a "push-pull" DC bias transducer element and a titanium membrane diaphragm. Quoted sensitivity is 0.355 mV/0.1 Pa, and maximum SPL for 1.0% THD is 148 dB. Supply voltage is 48V phantom. Dimensions are 113mm in length by 20.5mm in diameter.
• CU-32 is a right-angle cardioid model with a quoted dynamic range of 129 dB, frequency range of 20 Hz to 18 kHz, and self-noise of 19 dB or less. Apart from its length (117mm) and capsule orientation, the CU-32 is identical in circuitry and performance to the CU31.
• The CMS-6 MS stereo condenser has a quoted dynamic range of 108 dB, frequency range of 20 Hz to 18 kHz, and self-noise of 19 dB or less. With the companion CMS-MBB battery PSU and switchable matrix, the output can be altered from MS to L-R. The mike utilizes "push-pull" DC bias transducers, and a titanium membrane diaphragm. Maximum SPL for 1.0% THD is a quoted 127 dB, and nominal impedance 150 ohms. Dimensions are 170mm in length by 40.5mm in diameter.
• The CMS-2 is a MS-type, single-point stereo condenser microphone that has a quoted dynamic range of 129 dB, frequency range of 20 Hz to 18 kHz, and self-noise of 16 dB or less. Like other Sanken models, the CMS-2 utilizes a push-pull DC Bias condenser element with a titanium membrane diaphragm. Dimensions are 176mm in length by 43mm in diameter.
SANKEN MICROPHONE COMPANY
For additional information circle #302
SOUNDCRAFT TV24 BROADCAST PRODUCTION CONSOLE
Scheduled for unveiling to the U.S. market at the forthcoming New York AES Convention, the TV24 is an "in-line" master recording console that provides live stereo and mono mix with routing to eight stereo audio sub-groups, and is intended for TV and video production. A completely independent 24-track recording and monitoring facility also is provided.
Standard features include fader start on every channel; stereo equalizers for the audio sub-groups; fast status control that reconfigures the whole console at the touch of a button; and a comprehensive monitoring selection.
SOUNDCRAFT ELECTRONICS, INC.
For additional information circle #301
BROOKE SIREN SYSTEMS ADDS SOFT LIMITING TO DPR402 COMPRESSOR-LIMITER
The DPR402 normally provides a hard-knee compressor and de-esser transfer function. To reconfigure the unit for Soft-Knee simply requires the addition of a resistor on the rear barrier strip. This change can be made to one or both channels of the DPR402 independently, and does not affect the unit's other functions. Full technical details are available by contacting Jim Jacobelli at BSS: (516) 249-3660.
Suggested retail price in the U.S. for the DPR402 is now $1,095.
BROOKE SIREN SYSTEMS
For additional information circle #303
SUMMIT AUDIO ANNOUNCES HIGH-RELIABILITY 990 OP-AMP
Better heat dissipation is said to be achieved with an aluminum radiating surface, tapped to accept a standard heat sink or to conduct to an outer surface.
The op-amp comes supplied with gold pins in either ±15 or +24 volt configuration.
SUMMIT AUDIO
For additional information circle #304
NEW PROPHET 2000 DIGITAL SAMPLING KEYBOARD FROM SEQUENTIAL
The new eight-voice sampling instrument enables the user to sample any sound using features that include a variable input-level control, complex sample editing (reverse, mix, truncate), and automated looping functions such as computer-assisted zero crossover and zero slope selection, to help find the best possible loop points. A built-in 3½-inch disk drive provides fast and easy storage of custom sounds and programs. A large selection of pre-recorded sounds is available from the company's library of factory disks.
The Prophet 2000 also features multiple wavetables stored in on-board memory for building "traditional" synthesizer sounds. Such sounds can be played alone, or in conjunction with sampled sounds by splitting the keyboard or layering sounds on top of each other.
A velocity-sensing, five-octave keyboard is said to provide precise control over loudness, modulation amount, timbre, sample start points and crossfading between two separate sounds; its weighted action is described as responding positively to every nuance of the player's individual technique. The keyboard can be split, with different sounds assigned to each half, or with multiple sounds layered on top of each other. Up to 12 keyboard combinations can be created, with instant access of up to 16 sound variations. The Prophet 2000 also features pitch and programmable modulation wheels, as well as arpeggiation capabilities that include programmable up, down, assign, extend, auto-latch, and transpose modes.
MIDI implementation includes full support of Modes 1 (omni) and 3 (poly), reception in Mode 4 (mono), Clock In and Out, pitch and modulation wheels, main volume, switchable MIDI Thru and second MIDI Out, complete wavetable access/programming via an external computer, and the ability to transmit and receive sound samples over MIDI!
The sampling capability is based on 12-bit digital resolution. Samples with a duration of 16 seconds have a bandwidth of 8 kHz, with eight seconds having 15 kHz, and six seconds having 20 kHz. The user may sample at rates of 15.625 kHz, 31.250 kHz, and 41.667 kHz, up to 16.
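The quoted figures hang together arithmetically: each rate/duration pair implies the same total sample count, and each quoted bandwidth sits close to half the sampling rate (the Nyquist limit). A quick check — our inference from the published numbers, not a Sequential specification:

```python
# Each rate/duration pair multiplies out to roughly the same count,
# suggesting a fixed pool of about 250,000 12-bit samples.
rates_hz = [15_625, 31_250, 41_667]
durations_s = [16, 8, 6]
for rate, dur in zip(rates_hz, durations_s):
    print(f"{rate:>6} Hz x {dur:>2} s = {rate * dur:,} samples, "
          f"Nyquist limit {rate / 2000:.1f} kHz")
```

The 7.8, 15.6, and 20.8 kHz Nyquist limits line up with the quoted 8, 15, and 20 kHz bandwidths: halving the duration roughly doubles the usable bandwidth.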
SEQUENTIAL
For additional information circle #305
DENECKE ANNOUNCES DCODE TC-1 TIMECODE READER
The Dcode™ TC-1 is designed as a low-cost timecode reader for general film and video applications. In film, using timecoded film dailies, editors can use the unit to assist them in syncing dailies, making high-speed searches, logging, and keeping accurate time-date records of the actual production.
The TC-1 reads SMPTE or EBU time-code from 0.1 to 15 times speed, in both forward and reverse, from VTRs, VCRs, film editing machines and film synchronizers.
For transferring ¼-inch tape to mag film, the unit reshapes codes for film-to-tape transfers, and simultaneously displays code while generating 60 Hz sync pulse at 24 and 30 fps.
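For non-drop-frame code, the timecode arithmetic such a reader performs is straightforward. A minimal sketch (real SMPTE/EBU readers decode a biphase-mark bitstream, which is omitted here):

```python
def tc_to_frames(tc, fps):
    """Non-drop HH:MM:SS:FF timecode string to an absolute frame count
    at the given rate (24 fps for film, 30 fps for NTSC video)."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames, fps):
    """Inverse of tc_to_frames: frame count back to HH:MM:SS:FF."""
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```

One hour of 24 fps film is 86,400 frames, versus 108,000 at 30 fps, which is why logging and sync work needs the rate made explicit.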
DENECKE, INC.
For additional information circle #306
"PART OF THE OVERALL DIMENSION OF MY COMPACT DISC PROJECTS HAS BEEN THE RESULT OF THE AN-2"
Tom Jung, President
Producer/Engineer
Digital Music Products, Inc.
"Recently, I did a project, Music for Christmas by Keith Foley, with 9 synthesizers all MIDI-interfaced together and fed into the console. The AN-2 really opened up the sound and spread it out...it sounded three dimensional and very interesting. Anybody that has a synthesizer rack should have an AN-2.
I have also used the AN-2 on a lot of guitars—makes them sound great! It's as useful as reverb itself!"
For the name of your local dealer call Studio Technologies, Inc. at 312/676-9177.
STUDIO TECHNOLOGIES INC.
7250 NORTH CICERO AVENUE • LINCOLNWOOD, ILLINOIS 60646
For additional information circle #299
NEW WESTREX FILM RECORDER PRODUCTS
Following Mitsubishi's acquisition of Quad Eight/Westrex, the company is focusing on the technical marriage of film studios with digital technology. The new Westrex ST 6000 six-channel film recorder is now capable of running in slave mode to an X-800 digital 32-track. The film recorder can be driven by film pulses generated by hardware designed to convert timecode to this standard. Such a system enables slaving a large number of dubbing transports, projectors, and film recorders to numerous digital machines, allowing for replacement of analog dubbers with digital tracks, thus improving the overall performance of the standard film chain presently in use.
Currently in development is a new film Synchronizer/Master Controller that will enable lock-up of film transports requiring pulse rates of 2400 Quadrature down — this is reported to be the first time that studios will not have to worry about conversion-rate devices to control the transports of different manufacturers. To simplify the marriage of the film and digital worlds, the new controller will generate timecode from pulse information without having to print it, therefore saving the use of an audio track.
Other features will include RS422 and parallel interfaces, keyboard control of timecode, preset, reset, and freeze-frame control. Autolocator features with Go-To capabilities and Return-to-Zero will be standard.
MITSUBISHI PRO-AUDIO GROUP
For additional information circle #310
TASCAM STUDIO 8 COMBINED MIXER AND EIGHT-TRACK RECORDER
Comprising an eight-track open-reel recorder, an eight-channel fully assignable mixer, a combined SMPTE/EBU interface, and a microprocessor-controlled Load function, the Studio 8 is said to provide a production system of tremendous power and flexibility. Once a seven-inch reel of tape is threaded on the unit, the Load function ensures that the tape will never run off the reels, no matter what transport mode is in use. Thus, the tape can be manipulated with cassette-like ease while the fidelity and editing flexibility of open-reel is retained.
SMPTE/EBU synchronizers and controllers are plug-compatible with an accessory jack fitted to the Studio 8, making it an ideal system for introducing high-quality eight-track audio to the sync-video market at very low cost. Composers working in the film or video industry, and musicians using electronic systems based on MIDI/SMPTE, will also find such a feature essential to their work.
The integral eight-channel mixer features eight program busses and eight-channel monitor capability. The mixer also provides a monitor bus during multitrack work. Any channel of the mixer may be recorded on any or all tracks of the recorder at any time, through a unique combination of Assign and Record Function switches. An auxiliary-bus system can be used as an additional cue or effects mix, and the stereo effects mix system accommodates a wide range of signal processors. The mixer's equalizer is a three-band, sweep-type parametric system, with a range of 50 Hz to 15 kHz.
Return-To-Zero, Search-To-Cue, and a Real-Time Counter are featured on the transport section, which utilizes three motors with full servo control. The unit's dbx noise reduction system has a separate dbx defeat switch, so that SMPTE/EBU or other timecode can be recorded on track #8. Even with timecode on track #8, track #7 may be used for audio without bleed into the adjacent track.
Suggested retail price of the Studio 8 is $3,495.
TASCAM
For additional information circle #311
US AUDIO UNVEILS SINGLE-CHANNEL GATEX NOISE-GATE/EXPANDER
The new single-channel version is designed to be housed in and powered by the dbx
F 900 powered frame. In its gating mode the Model 904 employs program-dependent attack to eliminate turn-on "pop," while maintaining attack times sufficiently short to accommodate all percussion instruments. Program controlled sustain automatically lengthens the release time as dictated by program content, thereby reducing distortion when using shorter release times.
The unit offers two expansion modes, and features an expanded eight segment LED gain-reduction meter.
List price of the Gatex Model 904 is $250.
US AUDIO, INC.
For additional information circle #312
GOLD LINE MAD SERIES OF MULTIWAY DIRECT BOXES
Up to four active direct boxes are provided in a single-space, AC-powered, rack-mountable unit, or in single- or two-channel combinations.
The MAD Series is offered in three models:
- MAD 4 comprises four independent active direct boxes, each with a ¼-inch input, a balanced low impedance output (male XLR), and an unbalanced, actively buffered ¼-inch output. Each channel has its own gain control for matching of system signal levels, and a ground-lift switch.
- MAD 2 is an active two channel direct box, which is either phantom or 9-volt battery powered. A battery status LED indicator is provided for each channel.
- MAD 1 is a single channel active direct box identical in circuitry and features to the MAD 2.
Suggested retail prices of the MAD 1, 2 and 4 are $89.95, $174.95 and $349.95, respectively.
GOLD LINE
For additional information circle #313
MUSICWORKS UNVEILS MACMIDI SERIES OF MIDI-TO-MACINTOSH INTERFACES
Using available music composition programs on the Apple Macintosh, the MacMIDI series of interfaces allows Mac-composed songs to be played on synthesizers and other
SONEX CONTROLS SOUND.
With its patented anechoic foam wedge, SONEX absorbs and diffuses unwanted sound in your studio. And it can effectively replace traditional acoustic materials at a fraction of the cost. SONEX blends with almost any pro audio decor and looks clean, sharp, professional. Check into this attractive alternative for sound control. Call or write us for all the facts and prices.
See Us at AES booth #119-121
SONEX is manufactured by illbruck, and distributed exclusively to the pro sound industry by Alpha Audio.
Alpha Audio
2049 West Broad Street
Richmond, Virginia 23220 (804) 358-3852
Acoustic Products for the Audio Industry
In A/B tests, this tiny condenser microphone equals any world-class professional microphone. Any size, any price.
Compare the Isomax II to any other microphone. Even though it measures only 5/16" x 5/8" and costs just $189.95,* it equals any world-class microphone in signal purity.
And Isomax goes where other microphones cannot: Under guitar strings near the bridge, inside drums, inside pianos, clipped to horns and woodwinds, taped to amplifiers (up to 150 dB sound level!). Isomax opens up a whole new world of miking techniques – far too many to mention here. We've prepared information sheets on this subject which we will be happy to send to you free upon request. We'll also send an Isomax brochure with complete specifications.
Call or write today.
* Pro net price for Omnidirectional, Cardioid, Hypercardioid, and Bidirectional modes.
COUNTRYMAN ASSOCIATES INC.
417 Stanford Ave., Redwood City, CA 94063 • (415) 364-9588
For additional information circle #308
MIDI devices, connecting the Macintosh to audio, video, and musical-instrument equipment. Working with MIDI software also enables the Macintosh to automatically transcribe music played on synthesizers, or composed on the Macintosh, into standard music notation.
The series consists of four devices:
- **MacMIDI Star** can connect a Mac to 16 MIDI channels on a star network of three synthesizers, each of which can connect to other synthesizers via MIDI-thru connections. Suggested retail price is $79.
- **MacMIDI 32** allows one Mac to play up to 32 independent MIDI channels, which can be assigned to an unlimited number of synthesizers or other MIDI devices. MacMIDI 32 also allows for input from two MIDI devices into the Mac allowing for simultaneous recording of two synthesizers or other MIDI devices. SRP is $149.
MacMIDI 32 can be upgraded to a MacMIDI Sync or MacMIDI SMPTE device by plugging in a MacMIDI Sync/Up board, priced at $149, or MacMIDI SMPTE/Up board, priced at $279.
- **MacMIDI Sync** includes all MacMIDI's features, plus it allows the Macintosh to synchronize (in/out) with drum machines, professional FSK studio equipment, and unmodified 35mm Kodak Carousel slide projectors. MacMIDI Sync is now available, SRP is $249.
- **MacMIDI SMPTE** includes all the features of MacMIDI Sync, plus it generates SMPTE timecode, allowing a Mac to be connected to external recorders for laying down music and special effects tracks on videotapes and films. SRP is $329.
**MUSICWORKS**
For additional information circle #317
**TAMA UNVEILS NEW INTERFACES FOR TECHSTAR ELECTRONIC DRUMS**
The TTB1000 Trigger Bank enables any Techstar Voice Module to be triggered from various sources, such as acoustic drums and drum machines. The TSQ1000 is a six-channel programmable drum sequencer that lets the user create and store drum patterns, and play on top of the patterns in real time.
The TSQ1000 is compatible with the Techstar TS305, TS306, and TS200 or other voice modules, and will control up to six different voices. Specifications include: a program memory of eight Banks by four Patches, providing 32 patterns of up to four bars each; a maximum of 64 steps per pattern; six-channel sequence control; Run/Stop, Clear, Mode (Play/Write), and Step keys; LED displays for Bank, Patch, Instrument, and step resolution (12/16/24/32/36/48/64); tempo control (40 to 300); a trigger output level switch (15V/5V); and a sync connector.
The TTB1000 triggers any Voice Module from acoustic drums, drum machine, keyboard, tape machine, or other audio or trigger source. Each channel accepts inputs from XLR or phone jack sources, and creates a suitable trigger output. Specifications include a six-channel interface; input sensitivity of -50 to +10 dBm, or -20 to +20 dBm; and an output trigger level of 15 V or 5 V.
**CHESBRO MUSIC COMPANY**
For additional information circle #318
**GML ANNOUNCES MIX EDITOR FOR AUTOMATION USERS**
The principal feature of the new version 3.1 software is a program, Mix Editor, which provides the mixing engineer with the capability of manipulating mixes much the same as a word processor enables a writer to manipulate text.
It operates by presenting the mixer with a short menu from which editing operations may be selected. Such operations include:
- **Merge**: Combine portions of any two mixes.
- **Splice**: Copy a portion of one mix to a later or earlier part of the same mix. (A typical application of this would be copying the "perfect mix" for one chorus of a song to the second and third choruses, etc.)
- **Copy**: Copy the data on one channel to any other channel.
- **Clear**: Erase selected data from one or more channels.
- **Swap**: Swap the data between any two channels. (This feature is said to be extremely useful in complex mixing sessions where one module of the console develops sudden audio problems. The swap command allows the user to simply move all of the mix data from the bad module to any other working, unused module.)
- **Trim**: Add or subtract gain (specified in dB) to selected channel faders, for any portion of a mix.
- **Extract**: Extract all desired data from a mix, erasing the rest (the inverse operation of "clear").
- **Mix shift**: Move the entire mix backwards or forwards in time (as related to the SMPTE timecode on tape).
- **Channel shift**: Move the data from one or more channels backwards or forwards in time. (This feature is said to have been requested by a client who frequently mixes certain tracks by "riding" the faders while listening, and desired a method of correcting for the slight time lag caused by mixing "on-the-fly" in this manner.)
All commands (except Mix Shift) operate by asking the user to specify parameters of edit operations in a simple and logical syntax. The parameters are: the list of channels to be processed in the edit operation; SMPTE timecode numbers representing the portion of a mix to be edited; and the automated functions to be processed, such as faders, mutes (and other switches), or others.
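In modern terms, those commands amount to transformations over a list of time-stamped automation events. A minimal sketch of three of them (the event layout and function names here are assumptions for illustration, not GML's actual implementation):

```python
# A mix as a list of automation events: (time_s, channel, function, value).
# Times stand in for SMPTE timecode positions.

def trim(mix, channels, t0, t1, db):
    """Trim: add gain (in dB) to fader events on selected channels in [t0, t1]."""
    return [(t, ch, fn, v + db if fn == "fader" and ch in channels and t0 <= t <= t1 else v)
            for t, ch, fn, v in mix]

def swap(mix, ch_a, ch_b):
    """Swap: exchange all data between two channels (e.g. around a failed module)."""
    remap = {ch_a: ch_b, ch_b: ch_a}
    return [(t, remap.get(ch, ch), fn, v) for t, ch, fn, v in mix]

def clear(mix, channels, t0, t1):
    """Clear: erase selected data from one or more channels."""
    return [(t, ch, fn, v) for t, ch, fn, v in mix
            if not (ch in channels and t0 <= t <= t1)]
```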
The Editor also includes a "help" facility, whereby the user can hit the Help key at any time, and receive concise on-screen information explaining the current operation.
The update to Version 3.1 is provided free of charge to all existing GML installations.
GEORGE MASSENBURG LABS
For additional information circle #319
EMILAR ANNOUNCES MODEL EL-15J AND -12J BASS DRIVERS
The new 15-inch EL-15J will handle 500 watts of continuous program material, and offers an average sensitivity rating of 104.5 dB SPL, 1 watt/1 meter over the frequency band of 200 to 800 Hz; overall frequency range is 20 Hz to 2 kHz.
Intended for use in live-performance sound reinforcement systems and high-level music playback systems, the EL-15J is said to provide sound-system designers and users with a new standard in quality sound reproduction.
The new EL-12J 12-inch driver has a rated power capacity of 400 watts of continuous program material, and a 100.5 dB SPL average sensitivity over the 200 to 800 Hz bandwidth, 1 watt measured at 1 meter. Usable frequency range is 20 Hz to 3 kHz.
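As a rough check on those figures, sensitivity plus 10·log₁₀ of input power gives the theoretical maximum SPL at 1 meter (a textbook estimate that ignores power compression, not an Emilar specification):

```python
import math

def max_spl_db(sensitivity_db, watts):
    """Theoretical peak SPL at 1 m: 1 W/1 m sensitivity + 10*log10(power)."""
    return sensitivity_db + 10 * math.log10(watts)

# EL-15J: 104.5 dB at 1 W/1 m, 500 W program -> about 131.5 dB SPL
# EL-12J: 100.5 dB at 1 W/1 m, 400 W program -> about 126.5 dB SPL
```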
EMILAR
For additional information circle #320
SOUND TECHNOLOGY 3000 SERIES AUDIO TEST EQUIPMENT
The 3000 Series consists of two separate components: the 3100A generator and the 3200A analyzer. Both microprocessor-based instruments feature front-panel programmability that allows storage of extensive automated test sequences. Both also feature the ability to communicate test data through the audio line being tested via an exclusive Frequency Shift Keying (FSK) technique, allowing unmanned, automated remote transmission line testing without the need for external computers and modems.
Test results are achieved in less than 60 seconds, and may be graphed on a standard printer or plotter. If desired, the 3000 Series functions may also be controlled via RS-232C or GPIB (IEEE-488) communications ports.
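The FSK technique mentioned above, in the most general terms, maps data bits onto two audio tones. A toy encoder/decoder illustrates the idea; the tone frequencies, bit rate, and zero-crossing detector here are illustrative choices, not Sound Technology's proprietary scheme:

```python
import math

FS = 48_000                    # sample rate (Hz), assumed for the sketch
BIT_RATE = 300                 # bits per second
F_SPACE, F_MARK = 1200, 2200   # tones for 0 and 1 (Bell-202-like, assumed)
SPB = FS // BIT_RATE           # samples per bit

def fsk_encode(bits):
    """Phase-continuous FSK: one tone burst per bit."""
    phase, out = 0.0, []
    for b in bits:
        step = 2 * math.pi * (F_MARK if b else F_SPACE) / FS
        for _ in range(SPB):
            out.append(math.sin(phase))
            phase += step
    return out

def fsk_decode(samples):
    """Naive decoder: count zero crossings in each bit-length window."""
    bits = []
    for i in range(0, len(samples), SPB):
        win = samples[i:i + SPB]
        crossings = sum(1 for a, b in zip(win, win[1:]) if a * b < 0)
        # ~8 crossings per window at 1200 Hz, ~14 at 2200 Hz
        bits.append(1 if crossings > 11 else 0)
    return bits
```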
The two-channel, electronically balanced and floating 3100A generator outputs sine waves, squarewaves, IMD, toneburst and sine-step waveforms. The two-channel companion 3200A analyzer will measure level, noise, frequency, harmonic distortion, intermodulation distortion, phase error, channel separation and quantizing noise (digital data).
Pro-user prices are: 3000A (single mainframe), $8,950; 3100A audio generator, $3,950; and 3200A audio analyzer, $5,350.
**SOUND TECHNOLOGY**
For additional information circle #324
University of Sound Arts
in Hollywood
Offering 5 wk / 10 wk / 6 month
RECORDING ENGINEERING WORKSHOPS
For the last nine years, The University of Sound Arts has been producing tomorrow's recording engineers and video producers today. We use the finest and most modern state-of-the-art equipment in the world, thus producing engineers who are effective and current in their approach.
Also available: 1 yr. program in Audio-Video Technology
CALL COLLECT in California 213-467-5256 or 1-800-228-2095 or write to University of Sound Arts, 6363 Sunset Bl., RCA Bldg., Hollywood, CA 90028
The GOLDLINE Model 30 Digital, Real-Time Spectrum Analyzer is the affordable and easy-to-use instrument that takes the guesswork out of audio system calibration, including frequency response measurement of consoles and tape machines, as well as monitor system calibration.
Affordable at just $1,895.00. Now available with the Option 020 Printer Interface Board to provide hard copy of all test parameters used during RTA measurements.
The Model 30 is the ultimate studio and audio system "tweaking machine":
- Full 30 Bands
- Six Memories
- Quartz Controlled
- Switched Capacitive Filtering to Eliminate Drift
- Ruggedized for Road Use
- Microprocessor Controlled
- Built-in Pink Noise Source
- "Flat," "A," or "User Defined" Weighted Curves may be employed
- ROM User Curves Available
Learn how easy the Model 30 is to use. Return the coupon below, or circle the reader service number to receive the Goldline catalog of products.
NAME
COMPANY
STREET
CITY
STATE ZIP
P.O. Box 115 • West Redding, CT 06896
(203) 938-2588
For additional information circle #315
October 1985 □ R-e/p 219
**COMTRAN ANALOG FILTER AMPLIFIER DESIGN PROGRAM FROM JENSEN NOW RUNS ON HP-300 AND HP-217 COMPUTERS**
The Software division of Jensen Transformers has announced that its COMTRAN program, which includes advanced AC circuit analysis with optimization, group delay, time domain and integrated measurement capability, has been ported to the Hewlett-Packard 300 Series running BASIC 4.0 and to the Model 217, and also to other HP machines running BASIC 3.01.
The program, which consists of four modules, is intended for circuit modeling with desktop computers, and enables an engineer to quickly and thoroughly simulate and optimize the design of analog circuits. COMTRAN now runs on HP 300, 217, 9836, 9816, 9920, 9845, and 9020 computers, and it is described as being 50 times faster than the HP AC Circuits program.
Features include optimization of active or passive circuits with up to 98 nodes; calculation of group delay and relative phase; calculation of the output waveform given any circuit and any input waveform; study of circuit behavior without building a prototype; combination of measurements with circuit models; and synthesis of active filter circuits using fewer op-amps.
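The kind of AC analysis COMTRAN automates can be illustrated at toy scale: a frequency response falls out of complex-impedance arithmetic. A hand-rolled single-section RC low-pass example (unrelated to COMTRAN's actual solver or node limits):

```python
import math

def rc_lowpass(r_ohms, c_farads, f_hz):
    """Complex transfer function of an RC low-pass, by impedance division."""
    zc = 1 / (2j * math.pi * f_hz * c_farads)
    return zc / (r_ohms + zc)

# At the corner frequency f = 1/(2*pi*R*C), |H| = 1/sqrt(2), i.e. -3 dB.
fc = 1 / (2 * math.pi * 1e3 * 159.155e-9)   # ~1 kHz for R = 1k, C = 159 nF
gain_db = 20 * math.log10(abs(rc_lowpass(1e3, 159.155e-9, fc)))
```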
**JENSEN TRANSFORMERS, INC.**
For additional information circle #325
**VALLEY PEOPLE MODEL 440 COMBINED LIMITER, COMPRESSOR AND DYNAMIC SIBILANCE PROCESSOR**
The Model 440 is a single-channel device offering the convenience of a peak limiter, a high quality compressor/expander package, and a Dynamic Sibilance Processor section, each controlling a common VCA. Inter-coupling of the control circuitry used for each function allows the device to simultaneously limit, compress, expand, and eliminate high-frequency components in sibilance.
The unit's compressor control section features continuously adjustable threshold, attack time, ratio and release time. In addition, an interactive expander control is integrated with the compressor control circuitry to reduce residual noise that otherwise would be "pumped up" or accentuated by the compression process. Special release coupling is said to make the transition from compression to expansion imperceptible, thus eliminating problems associated with the use of separate single function units.
The limiter control section exhibits extremely fast attack characteristics (typically 1 microsecond per dB or less), a continuously variable threshold, a fixed 60:1 ratio, and variable release time.
**VALLEY PEOPLE, INC.**
For additional information circle #326
**AURATONE RT5-S AND RT6-S RACK-MOUNTABLE MONITORS**
Both systems have baffles with components mounted on the center axis, so that one unit can be inverted to form a symmetrical stereo pair. The modified polypropylene woofers are shielded to prevent flux leakage that might distort the image of an adjacent CRT.
The RT5-S features a ¼-inch wide-dispersion polyamide dome tweeter and a 12 dB per octave crossover network. Frequency response is a quoted ±3 dB from 70 Hz to 20 kHz, with power handling of 40 watts RMS. Impedance is 6 ohms. The enclosure occupies 5.25 inches of vertical rack space; width is 16.5 inches and depth 8.5 inches.
The RT6-S acoustic design is derived from the T6 Sub Compact Two Way monitor with the same one inch soft dome tweeter, 12 or 8 dB per octave crossover network and nearly identical Thiele-Small woofer parameters, except that the magnet structure is heavier and shielded against flux leakage. Frequency response is a quoted ±2.5 dB from 60 Hz to 20 kHz, with 50 watts RMS power handling. Impedance is 8 ohms, and enclosure measurements 8.75 by 16.5 by 9.5 inches (H×W×D).
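A "12 dB per octave" crossover, as used in both systems above, is a second-order filter, and the slope can be checked numerically. This uses a generic second-order Butterworth response as a stand-in, not Auratone's actual network:

```python
import math

def lp2_butterworth_db(f, fc):
    """Gain (dB) of a 2nd-order Butterworth low-pass at frequency f."""
    return -10 * math.log10(1 + (f / fc) ** 4)

# One octave above the crossover the response is down ~12 dB,
# two octaves above, down ~24 dB: hence "12 dB per octave".
```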
Suggested pro net pricing is $165 each for the RT5-S, and $180 each for the RT6-S.
**AURATONE CORPORATION**
For additional information circle #327
LAKE EXPANDS CAPABILITIES
What's new at LAKE? Besides the influx of new people... A host of new computer systems. Computers that assist in the design, engineering, drafting, and service of audio/video systems. One of the most exciting new computer systems is the audio department's Tecron TEF System 10. A portable audio spectrum analyzer that can be used in the field and the data brought back to the office for further analysis.
LAKE is involved in the design and building of television stations, recording studios, post production editing systems, and sound reinforcement systems worldwide. A computer system that could quickly analyze the acoustic parameters of any space was very important to the engineering department. They are currently using the TEF 10 to help expedite the engineering requirements of an expanding customer base.
An example of its value was recently discussed at a meeting I attended. It seems that microphones placed at a specific area on stage were experiencing excessive feedback. The client had tried a number of corrective measures to no avail. LAKE's engineers, using the TEF 10, were able to pinpoint the problem, something that at first glance seemed insignificant: a steam pipe located near the speaker cluster was causing a strong reflection into the problem area. Covering the pipe with absorbent material eliminated the problem.
Without a doubt, this type of commitment on the part of LAKE in R&D positions them as the systems company of choice in the audio field. Contact them at (617) 244-6881.
LAKE'S audio systems engineers Dennis Smyers (foreground) and Steve Blake analyze data on the TEF System 10
Another In A Series Of Application Notes On Sound System Design From EAW.
Subject: The EAW MR Series: a complete range of mid-bass horns.
The Eastern Acoustic Works MR Series Mid-Bass Horns are said to be the only systems offering increased output capability without compromising distortion, coverage or response linearity. Since the introduction of the MR102 in 1978, it has become the standard on six continents for high-quality music reproduction systems.
Features:
• Flat frequency response +/- 2 dB on axis over entire operating range (no other mid-bass horn comes near).
• Flat power response less than -6 dB deviation from on axis response up to 45 degrees off axis, over the entire operating range.
• Complex construction using high density polyurethane foam reinforcing cross-grain-laminated birch, for absolute acoustical integrity and freedom from output robbing resonances.
• Kenton Forsythe designed throat displacement plug enables unmatched accuracy.
• Complete range includes units for the following bandwidths: 150 Hz to 1.8 kHz, 250 Hz to 1.8 kHz, and 275 Hz to 2.2 kHz.
• Each system is complete with specially designed mid-bass RCF driver, road enclosure, handles, and hardware.
Lower Midrange Reproducers
Forsythe Series
MR142L $810
Extended low frequency mid-bass system with RCF LAB L12/P11W 300mm driver, 150 Hz to 1.8 kHz, 109 dB SPL 1w @ 1m, 13.5 x 24.6 x 19.75
MR102L $665
High output horn loaded mid-bass system with RCF LAB L12/P11W 300mm driver, 250 Hz to 1.8 kHz, 109 dB SPL 1w @ 1m, 18.5 x 29.75 x 24.6
MR101L NEW $525
Extended high frequency mid-bass system with RCF L10/750 250mm driver, 275 Hz to 2.2 kHz, 108 dB SPL 1w @ 1m, 13.5 x 24.6 x 19.75
For more information on these and other EAW professional audio products, call or write:
Eastern Acoustic Works, Inc., 59 Fountain St. Framingham, MA 01701. Telephone: (617) 620-1478
Forsythe Series
By Eastern Acoustic Works
For additional information circle #322
For additional information circle #323
**CLEAN PATCH BAYS**
**NO DOWN TIME**
**VERTIGO BURNISHER AND VERTIGO INJECTOR RESTORE ORIGINAL PERFORMANCE TO YOUR PATCH BAYS**
**VERTIGO 1/4 TRS AND TT BURNISHERS:**
Each eliminates noise in many contacts under normal patching situations.
**VERTIGO 1/4 TRS AND TT INJECTORS:**
Each injects cleaning solvent to eliminate intermittents in wearing contacts (normals) when patch cord has been removed.
**ONLY $29.95 EA.** Please write for additional information and order form today.
**VERTIGO RECORDING SERVICES**
12115 Magnolia Blvd. #116
North Hollywood, CA 91607
---
**Classified**
**RATES $82 Per Column Inch (2¼" x 1")**
One-inch minimum, payable in advance. Four inches maximum. Space over four inches will be charged for at regular display advertising rates.
---
**EQUIPMENT for SALE**
**EQUIPMENT FOR SALE**
Used & New Mixers, Amps., Effects, Mics., Etc. YAMAHA, JBL, BGW, SHURE, ETC. Low Prices, eg: BGW 750B's, $700. Lexicon 224, $4,950. Quantity discounts. A-1 AUDIO, 6322 DeLongpre Ave., Hollywood, Calif. 90028. (213) 465-1101.
**AUTOLOCATORS**
CM50 full function microprocessor based autolocator and SMPTE reader available for 20 different multitracks, typically: M79, MM1100, A80. You've seen it on the B16 and X80! Call us now if your multitrack needs a little help finding its way around. Prices around $1100.
Applied Microsystems Ltd., (213) 854-5098.
---
**FOR SALE**
Ampex MM1000 2" 16 track tape recorder.
$8,000 — OBO
(213) 684-5005
(818) 791-4004
---
**FOR SALE**
MCI JH-532C Console, Plasma Meters, Automation, Producers Desk, Center Grouping Masters, Reverb Returns with EQ. Excellent condition, asking $48.5K.
Call Alan (312) 822-9127.
---
**DAN ALEXANDER AUDIO**
Used equipment for sale
**MICROPHONES**
All Neumann and AKG tube-type microphones available. Also RCA, Schoeps, Sony, etc.
**TAPE RECORDERS**
MCI JH114 16-track
MCI JH114 24-track
3M M79 24-track
Scully 280 16-track
Many Mono, 2-track and 4-track recorders available
**RECORDING CONSOLES**
Neve 30/16/16 custom — 1977
Neve 38 input — 1972
API 20/8 /16
Trident Series 80 — 1980
Trident Series 80B as new
Helios consoles available
**OUTBOARD GEAR**
Too much to list. Call for info.
**WANTED**
Fairchild, Teletronix, Pultec, ITI, Sontec, API, Lang, Marantz Model 9
DAN ALEXANDER AUDIO
PO Box 9830
Berkeley, CA 94709
Phone: 415-527-1411
---
**THE BEST SPECS COST LESS.**
- Frequency response: 20 Hz to 20 kHz ± .5 dB
- .5mv to 6 volt RMS capacity without clipping or distortion
- .05% THD
Whirlwind TRSP-1 transformer for signal isolation and splitting with uniform response (single secondary).
Whirlwind TRSP-2 transformer for signal isolation and splitting with uniform response (dual secondary).
Whirlwind TRHL-M transformer for Hi to Lo signal conversions and signal isolation.
The best specs in the business... for half the price. From The Interface Specialists
---
**IF YOU'RE NOT USING IT — SELL IT!**
**THE BERTECH ORGANIZATION**
YOUR NATIONAL CLEARINGHOUSE FOR FINE USED AUDIO & VIDEO
Our mailers reach thousands of professionals every month. We'll list your used equipment free of charge— or help you find that rare item you've been looking for.
**THE BERTECH ORGANIZATION**
Distributors, brokers and custom fabricators of quality audio and video equipment.
14447 CALIFA STREET
VAN NUYS, CA 91401
(818) 909-0262
OUTSIDE CA (800) 992-2272
THINK BERTECH FIRST!
**EQUIPMENT FOR SALE**
Ampex MM1000 16-track recorder, $5K; Echoplate II 6ft., $1.5K; 2 Neumann KM 86 mics, .9K/pair; 2 AKG 451 mics 4K/pair; Scamp rack, incl. power supply, 2 noise gates, 2 limiters, $1K.
Call 213-664-8227
---
**EQUIPMENT FOR SALE — Good Cond.**
1 Dolby M24-H ............... $12K
3M 79 16-track head stack ....... $1K
1 Eventide 1745M with pitch .... $2K
1 Eventide Flanger .......... Best Offer
2 White 4000 1/3 Octave ....... $1.5K each
2 3M64 2-trk SAKI hd stacks ... $1K ea.
(213)653-0240 Scott/Eric. Make offer.
---
**FOR SALE: A.P.S.I. 32 - 24 CONSOLE**
Six sends, 4-band semi-para. EQ; LED metering, mic patching, spare modules and power supply, patch bay. Very flexible and only 3 years old in very good cond. Coming out of gold record studio for automated board. Affordably priced.
Call (201) 673-5680
---
**EQUIPMENT FOR SALE**
SPHERE ECLIPSE B console: 20 x 8 with graphic EQ, in good shape. 360 SYSTEMS DIGITAL SAMPLE KEYBOARD, excellent condition and priced to sell. ALLEN & HEATH System 8 console: 16 x 8.
Call Jim R. at (614) 663-2544
---
**STUDIO MAINTENANCE TECH**
Needed for Bay Area Service Co. to repair from system to component level.
CALL STAN: (415) 332-6100
THE PLANT STUDIOS
2200 Bridgeway
Sausalito, CA 94965
---
**P & G FADERS AT DISCOUNT PRICES**
Major manufacturer of audio consoles offers brand new P&G faders at tremendous savings due to surplus stock situation:
| Model | 10 Quantity Price Each | 100 Quantity Price Each |
|------------------------------|------------------------|-------------------------|
| PGF 3220/C/U 5k Audio | $29.00 | $24.00 |
| (U.S. list $45 each) | | |
| PGF 3200 2.7k VCA Track | $35.00 | $29.00 |
| (U.S. list $48.00 each) | | |
| PGF 3222/C/U 5k Audio Stereo | $45.00 | $39.00 |
| (U.S. list $65.00 each) | | |
Minimum quantity — 10
Call Mike Peters at: (818) 898-2341
MITSUBISHI PRO AUDIO GROUP
The R-e/p Library
The following technical books are available from R-e/p. To order any titles listed here, simply check the appropriate box to the left, complete the coupon provided, and send this page (or photocopy), with the correct amount in U.S. currency to the address below. Allow six weeks for delivery.
- Handbook of Multichannel Recording by F. Alton Everest ........................................ $14.00
- Master Handbook of Acoustics by F. Alton Everest .................................................. $15.00
- How to build a Small Budget Recording Studio by F. Alton Everest .............................. $13.50
- The Platinum Rainbow by James Riordan and Bob Monaco ........................................... $11.50
- The Principles of Digital Audio by Ken C. Pohlmann .................................................... $22.50
- Sound Recording by John Eargle ..................................................................................... $26.50
- How to Make and Sell Your Own Record by Diane Rappaport ....................................... $14.50
- Microphones, 2nd Edition by Martin Clifford ................................................................. $11.75
- Practical Techniques for the Recording Engineer by Sherman Keene ............................ $31.75
- The Microphone Handbook by John Eargle ...................................................................... $33.50
- The Recording Studio Handbook by John Woram .......................................................... $41.00
- Building a Recording Studio by Jeff Cooper ....................................................................... $31.50
- Acoustic Techniques for the Home and Studio by F. Alton Everest ................................... $17.00
- Modern Recording Techniques by Robert Runstein ......................................................... $18.45
- Film Sound Today by Larry Blake ...................................................................................... $10.00
Name: ..................................................................................................................................
Address: ..............................................................................................................................
City: .....................................................................................................................................
State: ..................................................................................................................................
Zip: ......................................................................................................................................
Total Amount Enclosed $ ....................................................................................................
All prices include postage for mailing in the United States. For all orders outside the U.S., including Canada and Mexico, please add $5.00 per book for postage.
Mail completed coupon to:
R-e/p, P.O. Box 2449, Hollywood, CA 90078.
From the feedback received during the proceedings, the time and effort expended was well worth it." The company plans to hold a similar meeting some time next year; dates and location will be published in *R-e/p* as soon as they become available.
---
**News**
**OPERA THEATER OF ST. LOUIS BROADCASTS IN TWO-CHANNEL AMBISONIC SURROUND SOUND**
Four operas have been recorded Ambisonically by KWMU-FM, the local NPR affiliate, for national distribution via satellite to the NPR Network throughout the U.S. Taped digitally before a live audience this past June, the programs feature world premiere performances of Minoru Miki's *Joruri* and Stephen Paulus' *The Woodlanders* (based on the novel by Thomas Hardy), as well as *The Barber of Seville* by Rossini, and Mozart's *Idomeneo*.
According to production director, Barry Hufker, "Opera Theatre is very creative in the use of their performing space with action — not only on the stage but also in and around the audience. We wanted to recreate that same experience for our listeners, and Ambisonics is the perfect way to achieve this."
---
**MITSUBISHI X-80 DIGITAL TWO-TRACK NOW AVAILABLE FOR RENT FROM GERR ELECTRO-ACOUSTICS**
The Canadian head office for Digital Entertainment Corporation, the North American representative of the Mitsubishi Pro-Audio Group, has already rented the X-80 to several well-known Canadian bands, including Rush, Triumph, Parachute Club, Rational Youth and New Regime. Recording studios such as Sounds Interchange, Manta, Metalworks, McClear Place, ESP, Round Sound, Capitol Records and CBS Records have also rented the X-80.
Gerry Eschweiler, sales manager for GERR, says: "The Mitsubishi X-80 has become an industry standard in the U.S. and Europe for the digital mixing of master tapes. At GERR, we want to make that standard available to the Canadian recording industry."
---
**NEW COLOSSUS FOUR-TRACK DIGITAL RECORDER FROM BY THE NUMBERS TO BE UNVEILED AT NEW YORK AES**
The new recorder is said to incorporate a significant development in multichannel digital recording technology, one that can be implemented at costs far less than those associated with present digital standards. By The Numbers is a joint venture between Louis Dorren and associates, and Brad Miller. Dorren is the inventor of the Dorren/Quadra-cast four-channel FM broadcasting standard approved by the FCC. Miller is well known as a producer of the Mystic Moods Orchestra series, and of numerous environmental recordings for record and film. The company has secured the services of John Eargle and JME Consulting Corporation in the areas of market planning, development, and product licensing.
The proprietary code developed by Dorren will first be embodied in a product, trade-named Colossus™ — a portable unit capable of recording four channels of 16-bit audio with bandwidth in excess of 25 kHz. Dorren emphasizes that the proprietary PCM code makes no use whatever of data compression.
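Uncompressed PCM at those figures implies a substantial raw data rate. Assuming a sample rate of at least twice the quoted 25 kHz bandwidth (the Nyquist criterion; the actual Colossus rate is not stated), the arithmetic is:

```python
def pcm_bits_per_second(channels, bits_per_sample, sample_rate_hz):
    """Raw PCM data rate, with no compression of any kind."""
    return channels * bits_per_sample * sample_rate_hz

# Four 16-bit channels at an assumed 50 kHz sample rate:
rate = pcm_bits_per_second(4, 16, 50_000)   # 3,200,000 bits per second
```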
---
**JOINT VENTURE ANNOUNCED BY AMEK AND GML**
GML, Inc. of Los Angeles and AMEK, Ltd. of Manchester, England, have formed a new corporation to research, develop, manufacture and market a large architecture, fully automated Virtual Master Recording and Mixdown Console.
The new company will be incorporated in England as AML, Ltd. (AMEK/Massenburg Labs), with design and manufacturing responsibilities performed in England, and design, programming and prototyping responsibilities performed in Los Angeles.
... continued overleaf ...
---
**SPARS**
*Society of Professional Audio Recording Studios*
Announces
**The SPARS National Studio Exam**
*by Professionals, for Professionals*
**WHAT IS IT?**
The SPARS *National Studio Exam* is designed to measure your knowledge in every area of studio operation. The exam has been developed by industry professionals and educators in cooperation with the Educational Testing Service of Princeton, New Jersey, authors of the well-known Scholastic Aptitude Test (SAT).
**WHY TAKE ANOTHER TEST?**
The SPARS *National Studio Exam* will give you a clear picture of your own studio knowledge. What's more, you can elect to have your exam subsection scores reported to the professional studio community to affirm your mastery of specific knowledge and expertise...whether you are being considered for employment or advancement, or just want to share that information with your current employer. And, if you are applying to schools with an audio engineering program, you can request that your test results be sent to them as an aid to appropriate placement in basic or more advanced courses.
Your subsection scores will give you a diagnostic look at just how you compare with your peers in this fiercely competitive industry. In a market flooded with applicants, your results in the new SPARS *National Studio Exam* may give you just the edge you're looking for in advancing your own career.
SPARS manufacturing members have established scholarships to be awarded to individuals who demonstrate need and who demonstrate ability through their score on the SPARS *National Studio Exam*. Your score report will be totally CONFIDENTIAL, released only to those that YOU select.
**WHAT DO I DO?**
Write or call the SPARS National Office and request the SPARS *National Studio Exam* Information Bulletin.
SPARS
P.O. Box 11333
Beverly Hills, CA 90213
(213) 466-1244
Contact us soon, the first national administration of the exam is scheduled for Saturday, December 7, 1985 at over 20 locations throughout the country. Deadline for registration is November 1st, 1985.
The SPARS *National Studio Exam* is sponsored by a grant from the Sony Corporation.
Introduction for the new console is set for the NAB Convention in spring of 1986. First deliveries are scheduled for the following month.
**OCEAN AUDIO PLACES TWO NEW CONSOLE RENTAL PACKAGES; ANNOUNCES SALES**
- Debernedetto Recording, Bridgeport, CT, has rented with an option to purchase a 56-input Neve 8108 console, formerly in use at Abbey Road Studios, London. The new facility offers both scoring and music-recording capabilities.
- Cougar Run Studio, Incline, NV, a resort studio located in the Tahoe basin, has rented with the option to purchase a 48-input Neve 8108 with NECAM I automation.
- Clinton Recording, New York City, has purchased a 40-input Neve Model 8078 with NECAM II, formerly located at Studio 301, Sydney, Australia.
- House of Music, Clifton, NJ, has purchased a 40-input Neve 8078 console formerly at Nova Studio, London.
- Smoketree Ranch, Chatsworth, CA, has purchased a 49-input 8078 formerly at Sound Holland.
- London Bridge Studio, Seattle, WA, recently took delivery of a 32-input Neve 8048 formerly at Polygon Studio, France.
**NEW COMPANY FORMED TO MARKET DISKMIX CONSOLE AUTOMATION**
Michael Tapes, president of Sound Workshop Professional Audio Products, has announced the formation of Digital Creations Corporation (DCC) to develop and market computer-based audio-production products. DISKMIX (Release 2.0) is the first such product, being introduced at the New York AES Convention.
All DCC products will be developed in partnership with Paul Galburt, president of SWI Engineering. The Tapes/Galburt team was responsible for developing the series of Sound Workshop consoles.
The new release of DISKMIX typifies the future product direction for Digital Creations, Tapes says. Built around a custom-designed "computer-on-a-card" that mounts inside an IBM PC or most compatibles, DISKMIX handles only system-specific functions such as time-code and console communications; the more generic user-interface and disk processing are handled by the PC and its industry-standard operating system.
Full product information is being released at the NY AES, with demos at booth #110. Digital Creations Corp. is located at 1324 Motor Parkway, Hauppauge, NY. (516) 582-6229.
---
**People on the Move**
- **LEE POMERANTZ** has been promoted to the position of sales manager at SOUND WORKSHOP PROFESSIONAL AUDIO PRODUCTS, INC. Over the past three years he has held the positions of quality control/customer service manager, and most recently served as technical product manager. Prior to joining the company, he was chief engineer at The Workshoppé Recording studio, Douglaston, New York. In addition, **MICHAEL CUNEO** has been promoted to operations manager, from his previous position as head of material control and purchasing for console production.
- **JIM RONDINELLI** has joined SOUND GENESIS, the San Francisco-based pro-audio dealer and service center, as sales representative for accounts in the corporate studio market. He joins the company from the University of Iowa, where he was awarded a BA in business administration.
- **WALTER J. KELLEY** has been named vice-president of audio/video sales at LAKE SYSTEMS CORPORATION, a builder of audio/video systems based in Newton, MA. Kelley has been with the company since 1972, and most recently served as audio/video sales manager.
Studer 961/962: Small Wonder
It's a wonder how a console so small can do so much ... and sound so good!
The Swiss have a special talent for making great things small. A case in point: the new 961/962 Series mixers from Studer. In video editing suites, EFP vans, remote recording, and radio production, these compact Studers are setting higher standards for quality audio.
Sonic performance is impeccable throughout, with noise and distortion figures well under what you'd need for state-of-the-art digital recording. By refining and miniaturizing circuits developed for our 900 Series production consoles, Studer engineers have squeezed a world-class performance into suitcase size.
The 961/962 Series is fully modular, so you can mix-and-match modules to meet your requirements. The 961/962 features stereo line level input modules with or without 3-band EQ, plus mono mic/line inputs and master module with compressor/limiter. Other choices include a variety of monitor, talkback, auxiliary, and communication functions. The 961 frame holds up to 14 modules, the 962 accepts up to 20.
Other new features in the 961/962 Series include improved extruded guide faders, balanced insert points, FET switching, electronic muting, Littlite® socket, and multifrequency oscillator.
Thanks to its light weight, DC converter option, and sturdy transport cover, you can put a 961/962 mixer on the job anywhere. And, with Studer ruggedness and reliability, you can be sure the job will get done when you get there.
Packed with performance and features, 961/962 consoles will surely make a big splash in audio production circles. Small wonder. Call your nearest Studer representative for more details.
With snap-on cover, mixer is road-ready in seconds.
STUDER REVOX
Studer Revox America, Inc.
1425 Elm Hill Pike/Nashville, TN
37210/(615) 254-5651
New York (212) 255-4462 • Los Angeles (818) 780-4234
Chicago (312) 526-1660 • Dallas (214) 943-2239
San Francisco (415) 930-9866
For additional information circle #338
The world's first and best selling unidirectional surface-mounted condenser mic. Clean and simple.
No carpet strips or plastic baffles needed. Until now, all surface-mounted mics have been omnidirectional. Trying to add directionality has required a lot of busy work. The new SM91 brings the big advantages of unidirectionality to boundary effect microphones by incorporating a condenser cartridge with a half-cardioid pattern that isolates the speaker from surrounding noises.
The new smoothie. The sleek SM91 delivers wideband, smooth response throughout the audio spectrum, while greatly reducing the problems of feedback, low-frequency noise and phase cancellation. Ideal for instruments or vocals. The SM91 does a great job of isolating a vocalist or instrument in musical applications. It's also an excellent mic for meeting and conference rooms. And it's the ideal mic for live theater.
A preamp ahead of its time. The ultra-low noise preamplifier provides switch-selectable flat or low-cut response, excellent signal-to-noise ratio and a high output clipping level. A low-frequency cutoff filter minimizes low-end rumble—especially on large surfaces.
If you're going omni. Our new SM90 is identical in appearance to the SM91 and just as rugged.
For more information or a demonstration, call or write Shure Brothers, Inc.
222 Hartrey Ave.
Evanston, IL 60202
(312) 866-2553.
SHURE®
BREAKING SOUND BARRIERS
For additional information circle #350
VILLAGE NOTES
FEBRUARY WATER/SEWER/GARBAGE BILLS WILL NOT BE MAILED OUT UNTIL THE 12TH OF FEBRUARY, 2013
PLEASE BE ADVISED THAT A VALID BUSINESS LICENSE IS REQUIRED for all businesses operating within the Village of Innisfree. This includes home-based businesses. 2013 Business Licenses are available from the Village Office at a cost of $20.00 per year for residents and $30 FOR PEDDLERS. For more information please contact the Village Office at 780-592-3886.
PLEASE BE ADVISED THE VILLAGE CAN NO LONGER ACCEPT USED OIL OR FILTERS. Watch for further information in regards to alternate options for residents.
PLEASE BE ADVISED THAT, pursuant to Schedule A of Bylaw No. 510-95 – 2013, DOG LICENSES ARE REQUIRED FOR YOUR DOGS and are available at the Village Office for the low fee of $20.00 per pet.
Fines for not picking up after your pet can be up to $250.00 – so please pick up after your pet. PLEASE KEEP YOUR PET ON A LEASH.
PLEASE BE ADVISED the cardboard recycling bin has been moved to the AgriPlex parking lot. Your telephone books can now be taken to the cardboard recycling bin. Your cardboard cannot be picked up with household garbage. Newsprint, glossy paper and catalogues may be left in the bins west of the Village Office. Eyeglasses and cell phones may be left at the Village Office.
NOTICE - INCREASE IN WATER RATES
Please note that the scheduled increase in Water Rates is still under review. There will be no increase on your February Water & Sewer Bill.
The Innisfree Informer can now be seen on our website at www.villageofinnisfree.ca
A REMINDER to all citizens of the Village that garbage put out in bags alone is more difficult to pick up, can make a mess and attracts animals. PLEASE put your garbage out in bins.
Thank you to the following Businesses who have purchased their Business Licenses for 2013. We would like to remind all business owners that your 2013 Business License can be purchased for $20.00 at the Village Office.
Debs Agency – Bank Services
Jen’s Small Engine Repair– Small Engine Repair
Ron’s Auto & Ag. – Auto repairs and Ag Repairs
Floyd’s - Rototilling/Grasscutting/Snow Shoveling
Innisfree Hotel – Tavern
Jard Industrial Supply – Industrial Supplies
Mardar Electric - Electrician
UpperEdge Hair Design – Hairstyling
Joanna Hlushak Catering – Food Services
Innisfree Firewood – Firewood & Odd Jobs
Innisfree Station – Bottle Depot
Petro Canada – Fuel & Convenience Store
Lillian Kostynuk – Food Services
Creative Catering – Food Services
Fred Horyn – Handyman Services
Dolphin Cleaning – Cleaning Services
Innisfree Lumber & Farm Supply
Curling Rink Concession – Food Services
LJ Bookkeeping & Tax Services - Accounting
Message from the Mayor
I have to say we definitely are experiencing an old fashioned winter this year.
Council had decided in early November to change our usual sleigh ride to a community toboggan party at the old golf course on December 29th. This was well received. Cell phones and televisions were turned off and warm clothing put on and everyone had fun, young and old!
At the drop of a hat, Dwayne Fowler and family had a parking area graded and the toboggan slope groomed by pulling tires behind a snowmobile. We want to thank you very much; your family is a great asset to this community.
We want to thank Dean Lindballe and family for providing a snowmobile to help pull up the toboggans. Again, a special thank you.
Thanks to Morris Anderson for finding the square bales to sit on and to Laurie Moody for rounding up hot dogs, marshmallows and hot chocolate. A generator was supplied for the hot water and coffee.
Thanks to everyone who attended. Without your support this event would not have been the resounding success that it was!
Morris Anderson has a saying – “Come up with a good idea. Everyone will pitch in to GET IT DONE!” Innisfree and District should be proud. We are establishing a solid track record of getting things done.
Ron Konieczny
Mayor
RUN YOUR CLASSIFIED AD in the Innisfree Classified Section. We deliver to over 385 clients!
$5.00 per month
$10.00 for a FULL PAGE
Call Laurie at the Village Office by the last Wednesday of the month at (780) 592-3886
*FREE CLASSIFIED ADS*
** To Give Away **
Glassware/wine beer and other sorts – Call Sue at 780-592-3822
WANTED:
Service Lot Wanted for a trailer. Phone Ken at 780-658-3723
LOST
iPad lost at Kindergarten Silent Auction. Please call 780-266-7123
Innisfree Recreation Centre Anyone wanting to book the Rec. Centre, call Karen Anderson at 780-592-2268
JENN’S SMALL ENGINE REPAIR. For all your small engine repairs from weed eaters to 2000 cc motor bikes and skidoos, quads, etc., Innisfree operated.
(780) 801-2625
The Seniors Hall and Seniors Drop In Center are available. Contact Robert at 592-3969.
LAST CALL The last call you will have to make before you DIG. Locate all your private underground lines. Phone MarDar Electric at 780-592-2236
TRENCHING available. No job too big or too small. Phone 780-208-1461
ZUMBA
Come one, come all! Join us for dance and exercise. Tuesday nights 7:30 PM at the School Gym. Contact Natasha (780) 603-0678 for more information.
DITCH THE WORKOUT AND JOIN THE PARTY!
UNITED CHURCH SERVICES
Held on the second and fourth Sunday of the month. 9:00 AM at the Seniors Drop-In Centre
EVERYONE WELCOME!
The Village still has its van available to book for appointments and shopping. Call Laurie at the Village Office to book. 780-592-3886.
Please allow 2 days' notice.
A REMINDER that the school is collecting Campbell’s Soup Labels. They can be dropped off at the school or the Village Office.
Thank You
The family of Alec Paranych would like to extend our heartfelt thank you to everybody who supported us during this difficult time. From food, donations, flowers, cards and kind words, it made dealing with this loss more bearable. Your many acts of kindness and sympathy continue to be a great comfort to us in our time of sorrow. Vicky Paranych and Family
SENIORS NEWS Alice Sydora
Our main function for February is a Crib Tournament on Sunday, February 17th, at 7pm at the Seniors Drop-in Centre. Everyone welcome!
Exercises every Monday and Wednesday morning 10am at the Drop-in Centre.
Fun Bingos every Tuesday at 7pm at the drop-in
Card Parties every Thursday 1pm at the Seniors Drop-in Centre
Innisfree Curling Club
Innisfree Curling Club will be hosting a Ladies Bonspiel February 22-24, 2013.
Come for a ladies weekend of curling and laughter.
You don’t have to play to attend. Come out and show your community spirit!
Please contact Shelly @ 780-603-7412 or Bobbi-Jo @ 780-581-5899 to enter a team or for more information.
Innisfree 4-H Multi Club
Our first meeting of 2013 was held on Jan. 7th. We made plans for the district dance on Jan. 19th in Innisfree. We also set the date for our Public Speaking and Communications which will be held on Feb. 15th at the Senior’s Hall. Most of the members stayed after the meeting to work on diaries. The food members had a workshop on Jan. 17th in Ranfurly and made soft pretzels. Our next meeting will be held on Feb. 4th.
Innisfree-Minburn 4-H Beef Club
This month we had our monthly meeting in Minburn. One of the Key Members, Davin Charron, came and did a presentation on public speaking. After the presentation we had reports on the NE Volleyball tournament in Vermilion, our Christmas party and our community service project, which was a skating party in Innisfree. For our Christmas party we went sledding and later had hotdogs and snacks. It was a blast! Quite a few of our members will be attending “You Be the Judge” in Vermilion. Our Public Speaking is going to be on February 10 at the Minburn Seniors Hall. Our Leaders Tour is going to be on February 18. The next meeting is going to be in Innisfree on February 6.
Yoga! Yoga! Yoga! Come and join us for yoga classes. Thursdays, 7:00 pm at the Millennium Building. Drop-ins welcome. Get fit and have fun!
Village Of Innisfree Library — Marilyn Newton
Winter Reading Program: Starts this month! The theme is “Reading Railroad”. There is no age restriction; anyone from 0-200 can participate! Anyone who reads, or is read to, for a total of 5 hrs in the month of February will be entered to win a kobo eReader. I also have little prizes here in the library for anyone who brings me their reading log to show me their progress! We will present the top readers in each elementary class with a prize at the beginning of March. Stop by the Library or Village Office to pick up your reading logs today!
Mitten Tree: Thanks to your generous donations, our mitten tree was a huge success! We collected 61 winter items to help those in need! We donated some of these items locally. We will also be taking some to the Interval Home in Lloydminster and to the Bissell Center in Edmonton. Next year will be a Pajama tree.
Logo Contest: We would like to thank everyone who took the time to make a logo. We had some really great entries, and the board had a hard time choosing, but we have a winner! Congratulations to Norah Melnyk on designing the winning entry! This kindergartener is a very talented artist and has won herself a mini kobo eReader. We will be revealing our logo soon, so keep an eye out for it!!
Book Club: January had us reading Fried Green Tomatoes at the Whistle Stop Café by Fannie Flagg. Most of us had seen the movie, which was based on the book, and enjoyed it. As there always seems to be, there were some differences between the book and movie, but it was just as entertaining! Our February pick is The Time Traveler’s Wife by Audrey Niffenegger. This book has also been made into a movie. We will be meeting on Feb. 11 in the Millennium Building Kitchen. New members are always welcome and we hope to see you there!
We would like to thank the families of Doris Anderson and Helen Spevakow for their memoriam donations to the library. If you know of any new babies in the area, please let us know so we can get a “Born to Read” bag to them!
Upcoming Events: Introduction to Computers Course – Info below
VegMin Learning Society & The Village of Innisfree Library Present . . .
Introduction to Computers
(a condensed 10 hour version)
Two 5-hour sessions
9-12 and 1-3
Feb 15 & Mar 15, 2013
In the 4H Room at the Millennium Building
Early Registration - $130.00
After Feb. 8—$140.00
Call 780-632-7920 to register
Space is Limited
Library Hours are now on the Informer calendar!
THE COUNCIL OF THE VILLAGE OF INNISFREE
Invites You to Join Us For a TOWN HALL MEETING FEBRUARY 24th at 1:00 pm At the Senior’s Hall
This is an opportunity for Council to share with our citizens our visions and goals for the coming year.
If you have any questions that you would like to have Council address, please submit them in writing to the Village Office before February 20th.
WE LOOK FORWARD TO SEEING YOU THERE!
The Innisfree & District Agricultural Society would like to send out a huge THANK YOU to everyone in the community and surrounding area that came out and supported the 2013 Critters Game!!
Thank you to the volunteers, the players and Land Seed & Agro Services Ltd. (Lisa Anderson) for supplying the jerseys.
Thank you to the following individuals and businesses for donating items to the raffle table and supporting our event!!!!
- WEBB’S MACHINERY LTD
- RON’S AUTO & AG INC.
- VITERRA-INNISFREE
- DEBORAH A. TOVELL ACCOUNTING
- CO-OP BULK FUEL- VERMILION
- ATB FINANCIAL-INNISFREE
- MINCO GAS CO-OP LTD.
- GEORGE & WENDY NOTT
- DEBBIE McMANN
- UFA BULK FUEL-WOWDZIA ENTERPRISES
- PAULA PIDRUCHNEY-AVON
- KAREN ANDERSON
- COURAGE CANADA TRAIL RIDE
- LAND SEED & AGRO SERVICES LTD. (LISA ANDERSON)
- INNISFREE HOTEL
- 790 CFCW
WINNERS FROM THE RAFFLE TABLE @ THE 2013 CRITTERS GAME WERE:
Melissa Cannan, Wendy Fleming, Dylan Fowler, Allan Sharp, Mike Fleming, Darrel Pasieka, Tina Bielesch, Gerry Sullivan, Ed Gushnowski, Lisa Anderson, Bonny Anderson, Thelma Rogers, Ron Konieczny, Randy Cannan, Sophie Kassian, Cameron Gizowski, Brayden Drury
50/50 Winner Jen Hodel
YOUR SUPPORT FOR OUR COMMUNITY IS GREATLY APPRECIATED!!!!!!
A big thank you to the Innisfree Outlaws, the CFCW Critters, the volunteers & everyone who came out to support them!
Photo Courtesy of Karen M. Nedzielski
INNISFREE RECREATION CENTRE
SHROVE PANCAKE SUPPER
TUESDAY FEBRUARY 12, 2013
AT THE INNISFREE REC CENTRE
5PM-7PM
12 YEARS TO ADULT - $7.00
6 YEARS TO 11 YEARS - $5.00
5 AND UNDER - FREE
NOTICE TO ALL CITIZENS
The Village of Innisfree reminds all residents that if you are going to put your garbage out BEFORE the morning of the day that it is being picked up, IT MUST be in a can or bin.
Pursuant to By-law 584-12 (Unsightly Premises), Any garbage that has been torn apart by animals WILL NOT be picked up and the resident responsible could be subject to a fine of $75.00 for each offence.
FEBRUARY 17TH
AT 7 PM
AT THE SENIORS DROP IN CENTRE
CRIB TOURNAMENT!
$5.00 PER PERSON
PRIZES WILL BE GIVEN
BRING A DISH (POTLUCK)
RANFURLY AG SOCIETY STEAK SUPPER
February 26th, 2013
Ranfurly Rec Centre
6:00 pm – 7:30 pm
EVERYONE WELCOME!
INNISFREE
FISH & GAME
26th Annual Supper & Dance
Saturday, February 16, 2013
Innisfree Rec. Centre
Music by: “DJ Brett Maron”
Cocktails: 5:00 p.m.
Supper: 5:30 p.m. – 7:30 p.m.
Tickets: $30/person
12 & under – free
Door Prizes
Innisfree Curling Club
Ladies Bonspiel
February 22-24, 2013
This year’s theme night
“Reality TV”
Come for a ladies weekend of curling and laughter!
Prizes, Games, & FUN!!!!
2012/13 INNISFREE AGRI-PLEX
SKATING/HOCKEY SCHEDULE
MONDAY: 3:30 TO 6:00 SKATING CLUB
7:00 TO 9:00 LOCAL SHINNEY FOR JUNIORS
TUESDAY: 3:30 TO 6:00 PUBLIC SKATING
8:00 TO 10:00 OPEN FOR ICE RENTALS
WEDNESDAY: 3:30 TO 5:00 PUBLIC SKATING
7:00 TO 9:00 LOCAL SHINNEY FOR SENIORS
THURSDAY: 8:00 TO 10:00 OPEN FOR ICE RENTALS
FRIDAY: 3:30 TO 5:00 PUBLIC SKATING
7:00 TO 9:00 LOCAL FAMILY SHINNEY
SATURDAY AND SUNDAY 2:00 TO 3:30 PUBLIC SKATING
3:30 to 5:00 SHINNEY
FOR MORE INFORMATION CALL ART @ 592-3935
SKATING FEES MUST BE PAID BEFORE SKATING WILL BE ALLOWED.
PRIORITY WILL BE GIVEN FOR RENTAL OF THE ICE.
PLEASE CHECK THE BOARD FOR ANY CHANGES.
SEASON SKATING FEES ARE: $10.00 PER PERSON OR
$30.00 PER FAMILY
OR
$2.00 PER PERSON EACH TIME FOR CASUAL SKATING
ARENA RENTAL IS $50.00 PER HOUR or
$30.00 for Agric. Society Members OR $500.00 PER TEAM
INVESTING IN TURBULENT TIMES
With Chris Turchansky
Managing Director, ATBIM
How to structure investments to weather the inevitable storms.
Economic conditions and outlook
Tuesday February 5th, 7 pm
Pomeroy Inn & Suites
Kalyna/Pysanka rooms
Vegreville
Appetizers & refreshments will be served (No charge)
RSVP to: Peter Arnold
Phone 780-632-5092
Email: email@example.com
For more details call Deb McMann
780-592-5092
MARDAR ELECTRIC
592-2236
TRENCHING
FARM & INDUSTRIAL MOTORS
WELL & SEWER PUMPS
APPLIANCE REPAIRS
Big or Small
ON SITE REPAIRS
ELECTRICAL & APPLIANCE PARTS
FOR THE DO-IT-YOURSELFER
FOR ALL YOUR ELECTRICAL NEEDS!
WE ARE YOUR LOCAL REPAIR SPECIALISTS!
JOURNEYMAN ELECTRICIAN AND CERTIFIED APPLIANCE REPAIR TECHNICIAN
ATB Financial™
54% of Albertans invest less than they feel they need to.¹
16% of Albertans are not saving or investing for retirement at all.¹
P.S. The RRSP tax contribution deadline is March 1, 2013—let’s talk before then.
What’s your retirement statistic?
Deb’s Agency Services
592-2083
Meeting Your Banking Needs
ATB Financial
Innisfree & District ECS Raffle
1st Prize: 1 night stay at the Fantasyland Hotel, +2 tickets to Jubilations Dinner Theatre, +2 Continental Breakfasts
2nd Prize: $400.00 Future Shop Gift Card
3rd Prize: $100.00 The Keg Restaurant Gift Card
4th Prize: $50.00 Canadian Tire Gift Card
Please contact a kindergarten parent if you would like to purchase tickets.
Tickets will also be available at the Innisfree ATB Branch and the Innisfree Library
Tickets 1 for $5.00 or 5 for $20.00
Cxene Brooks 780-603-3585 Carmen Kassian 780-592-3900
Melanie Ferguson 780-266-7123 Amberlyn Myshaniuk 780-592-3936
Cassandra Fletcher 780-632-0569 Marilyn Newton 780-853-7250
Naomi Foyster-Melnyk 780-592-2060 Shelly Nott 780-592-2412
April Hauck 780-603-5156 Nola Yacekyo 780-592-3998
Tickets on sale from Dec 21, 2012 to Feb 6, 2013
Draw will take place on February 16, 2013 at the Fish & Game Dance at the Innisfree Rec Centre
Questions regarding the raffle may be directed to Naomi Foyster-Melnyk: 1-780-592-2060
The students, parents and staff of the Innisfree Kindergarten would like to thank all those who supported us with your donations and bids at the Silent Auction.
Thank you!
Come Join our
HAPPY DAYS
Playgroup
Every Thursday
10:30 - 12:30
The Old Ranfurly School
Starting December 6, 2012
For more information please contact Amberlyn Myshaniuk at 780 592-3936
Friends of the Innisfree Library (FILS)
- Promote
- Volunteer
- Fundraise
- Support
Contacts:
Karen Anderson 780-592-2268 or Holly Cependa 780-592-3840
This advertisement has been sponsored by: The Beachside Bed & Breakfast 'on Wapasu Lake'
| SUNDAY | MONDAY | TUESDAY | WEDNESDAY | THURSDAY | FRIDAY | SATURDAY |
|--------|--------|---------|-----------|----------|--------|----------|
| | | | | | 1 | 2 |
| | | | | | | Michella Fortier |
| 3 | 4 | 5 | 6 Leanne Fleming | 7 Shirlly Hennig | 8 | 9 |
| 10 Linda Whitten Maybell Pierce | 11 Harry Hlus | 12 Peter & Violet Nedzielski | 13 Audrey Weder | 14 | 15 Shelley Scott | 16 |
| 17 Leah Bergman | 18 Delores Wowk Trevor Dougan Laramie Hlus | 19 Deb McMann Tim Tillotson | 20 Arnold & Lorna Usenik | 21 Gerald Upton | 22 Larry Grabas | 23 Ashley Saksiw |
| 24 | 25 Leanne Hlus | 26 Lil Carter Jeffrey Nott | 27 | 28 Happy Leap Year Birthday! Kariyee Melnyk Kim Jackson | | |
| SUNDAY | MONDAY | TUESDAY | WEDNESDAY | THURSDAY | FRIDAY | SATURDAY |
|--------|--------|---------|-----------|----------|--------|----------|
| 3 | 4 Bingo 7pm; Seniors Exercise 10am | 5 Fun Bingo 7pm | 6 Seniors Exercise 10am; 4H Beef Club Mtg; Library 2pm-7pm | 7 Card Party 1pm; Yoga 7pm; Library 12pm-5pm | 8 Library 9am-2pm | 9 |
| 10 4H Public Speaking | 11 Bingo 7pm; Book Club Mtg; Seniors Exercise 10am | 12 Fun Bingo 7pm; Shrove Pancake Supper | 13 Seniors Exercise 10am; Library 2pm-7pm | 14 Card Party 1pm; Yoga 7pm; Library 12pm-5pm | 15 Library 9am-2pm; Intro to Computers Class | 16 Fish & Game Supper & Dance |
| 17 Crib Tournament 7pm | 18 Bingo 7pm; Seniors Exercise 10am | 19 Fun Bingo 7pm | 20 Seniors Exercise 10am; Library 2pm-7pm | 21 Card Party 1pm; Yoga 7pm; Library 12pm-5pm | 22 Ladies Bonspiel; Library 9am-2pm | 23 Ladies Bonspiel |
| 24 Ladies Bonspiel; Town Hall Mtg 1pm | 25 Bingo 7pm; Seniors Exercise 10am | 26 Fun Bingo 7pm; Ranfurly Ag Steak Supper | 27 Seniors Exercise 10am; Library 2pm-7pm; Council Mtg 6:30pm | 28 Card Party 1pm; Yoga 7pm; Library 12pm-5pm | | |
Variability in paralimbic dopamine signaling correlates with subjective responses to d-amphetamine
Christopher T. Smith\textsuperscript{a, *}, Linh C. Dang\textsuperscript{a}, Ronald L. Cowan\textsuperscript{a, b}, Robert M. Kessler\textsuperscript{c}, David H. Zald\textsuperscript{a, b}
\textsuperscript{a} Department of Psychology, PMB 407817, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37240-7817, United States
\textsuperscript{b} Department of Psychiatry, Vanderbilt University School of Medicine, 1601 23rd Ave South, Suite 3057, Nashville, TN 37212, United States
\textsuperscript{c} Department of Radiology, UAB School of Medicine, United States
\textbf{Article history:}
Received 18 October 2015
Received in revised form 26 April 2016
Accepted 6 May 2016
Available online 10 May 2016
\textbf{Keywords:}
d-amphetamine
Dopamine
Ventromedial PFC
Insula
Ventral striatum
\textbf{Abstract}
Subjective responses to psychostimulants vary, the basis of which is poorly understood, especially in relation to possible cortical contributions. Here, we tested for relationships between participants’ positive subjective responses to oral d-amphetamine (dAMPH) versus placebo and variability in striatal and extrastriatal dopamine (DA) receptor availability and release, measured via positron emission tomography (PET) with the radiotracer \textsuperscript{18}F-fallypride. Analyses focused on 35 healthy adult participants showing positive subjective effects to dAMPH measured via the Drug Effects Questionnaire (DEQ) Feel, Like, High, and Want More subscales (Responders), and were repeated after inclusion of 11 subjects who lacked subjective responses. Associations between peak DEQ subscale ratings and both baseline \textsuperscript{18}F-fallypride binding potential (BPnd; an index of D2/D3 receptor availability) and the percentage change in BPnd post dAMPH (\%ΔBPnd; a measure of DA release) were assessed. Baseline BPnd in ventromedial prefrontal cortex (vmPFC) predicted the peak level of High reported following dAMPH. Furthermore, \%ΔBPnd in vmPFC positively correlated with DEQ Want More ratings. DEQ Want More was also positively correlated with \%ΔBPnd in right ventral striatum and left insula. This work indicates that characteristics of DA functioning in vmPFC, a cortical area implicated in subjective valuation, are associated with both subjective high and incentive (wanting) responses. The observation that insula \%ΔBPnd was associated with drug wanting converges with evidence suggesting its role in drug craving. These findings highlight the importance of variability in DA signaling in specific paralimbic cortical regions in dAMPH’s subjective response, which may confer risk for abusing psychostimulants.
© 2016 Elsevier Ltd. All rights reserved.
\section*{1. Introduction}
Significant individual variability exists in subjective responses to oral d-amphetamine (dAMPH) in humans (Brauer et al., 1996; Brown et al., 1978; de Wit et al., 1986; Domisse et al., 1984). While some subjects report strong experiences of liking, high, and euphoria, others are unable to discriminate between drug and placebo (Chait et al., 1985, 1989). Understanding individual differences in these positive subjective responses is important as their magnitude after early drug exposure have been linked to drugs' abuse potential (Lambert et al., 2006). Thus, they may serve as risk factors for repeated drug use, leading to addiction (de Wit and Phillips, 2012; Haertzen et al., 1983). Despite their importance, the neural and neurochemical events that contribute to subjective response differences to dAMPH have yet to be fully elucidated.
Given that dAMPH causes the release of the neurotransmitter dopamine (DA) primarily via blockade and reversal of the dopamine transporter (DAT) (Jones et al., 1998) and animal work has linked DA release in nodes of the mesocorticolimbic system with reward processes (Wise and Rompre, 1989), researchers have proposed that DA release in this system may be directly or indirectly responsible for dAMPH's euphoric effect in humans. Indeed, previous work has found that dAMPH-induced DA release measured in the striatum with \textsuperscript{123}I-IBZM SPECT is associated with a positive reinforcement factor (Abi-Dargham et al., 2003). PET studies using \textsuperscript{11}C-raclopride have, more specifically, implicated ventral striatum (VS) dAMPH-induced DA release with self-reported dAMPH-induced euphoria (Drevets et al., 2001) or drug wanting (Leyton et al., 2002).
Furthermore, an analysis using an earlier sample of the participants included in this study found a positive relationship between DA release in striatum measured with $^{18}$F-fallypride PET and “Want More” drug ratings on the Drug Effects Questionnaire, DEQ (de Wit et al., 1986; Morean et al., 2013), after oral dAMPH (Buckholtz et al., 2010). Whether differences in dopaminergic functioning in other nodes of the mesocorticolimbic DA system impact subjective responses to dAMPH is currently unknown. However, data suggest functional connections between the VS and paralimbic cortical areas (Haber and Knutson, 2010; Lee et al., 1999) and there is evidence that paralimbic areas are involved in addiction relevant processes such as value coding in the medial prefrontal cortex (mPFC) and neighboring orbitofrontal cortex (OFC) (Bartra et al., 2013; Clithero and Rangel, 2014; Diekhof et al., 2012; First et al., 1997) and drug craving in the insula (Kilts et al., 2001; Naqvi et al., 2014).
Here, we sought to characterize the relationship between the subjective effects of dAMPH assessed with DEQ High, Like, Feel and Want More ratings and $^{18}$F-fallypride measures of DA D2/3 receptor availability and dAMPH-induced DA release in a sample of healthy young adults. Until now, work focused on assessing a potential relationship between extrastriatal DA characteristics and subjective responses has been limited due to $^{11}$C-raclopride and $^{123}$I-IBZM’s inability to reliably estimate DRD2/3 availability (measured as binding potential, BPnd) outside the striatum. The radiotracer $^{18}$F-fallypride, however, is able to estimate DRD2/3 BPnd in PFC, temporal lobes, and the insula in addition to the striatum (Mukherjee et al., 2002; Riccardi et al., 2008) and can index DA release after dAMPH administration, measured as %ΔBPnd from baseline (Riccardi et al., 2006a; Slifstein et al., 2010). We were particularly interested in using fallypride to test whether DA function in paralimbic cortical areas, specifically the mPFC/OFC, relates to subjective responses to dAMPH, given past evidence that activity in these areas is increased in response to psychostimulants in drug-naive individuals (Vollm et al., 2004) and correlates with their self-reported euphoric effects (Udo de Haes et al., 2007).
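As a concrete illustration, the %ΔBPnd measure discussed above is the percentage reduction in binding potential between the placebo and dAMPH scans. A minimal sketch (the function and variable names are ours, not part of the study's actual voxelwise analysis pipeline):

```python
def percent_delta_bpnd(bpnd_placebo, bpnd_damph):
    """Percent reduction in [18F]fallypride binding potential after dAMPH.

    A larger value means more tracer displacement, i.e. more DA release.
    Illustrative helper only, assuming region-level BPnd estimates.
    """
    return 100.0 * (bpnd_placebo - bpnd_damph) / bpnd_placebo

# e.g., a region with BPnd of 2.0 at placebo and 1.8 after dAMPH
release = percent_delta_bpnd(2.0, 1.8)  # 10% displacement
```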
2. Methods and materials
2.1. Subjects
Forty-six (23 men; ages 18–35, mean = 22 ± 2.86) healthy individuals participated in the study. Participants had no known past or present neurological or psychiatric diagnoses, no history of substance use disorders, and no current use of psychoactive medications or substances as assessed by Structured Clinical Interview for DSM Disorders I (First et al., 1997) administered at screening. On interview, none of the subjects reported having ever used amphetamine or cocaine. In terms of other psychostimulants, three subjects acknowledged past use: Dexatrim for a few days in one case, ephedrine 4 times in one case, and ephedrine once daily for three months in a final case. Data were reanalyzed excluding the case with the more extensive ephedrine exposure, but this did not produce any marked change in the results. Women were tested during the follicular phase of their cycle. Participants gave written informed consent, as approved by the Vanderbilt University Institutional Review Board.
It should be noted that 30 of our subjects were included in an earlier report by Buckholtz et al. (2010) with the other 2 subjects from that study not included here as they lacked full DEQ measures at both placebo and dAMPH sessions. The remaining 16 subjects in our sample were collected after completion of analyses for Buckholtz et al. (2010). Although that report focused on correlations with impulsivity, it did note a relation between striatal DA release and drug wanting as part of a secondary analysis aimed at understanding the functional link between impulsivity and striatal response to amphetamine. However, it did not explore the pattern of correlations with the different DEQ scales, and critically did not test for relations between subjective responses and cortical DA indices.
We performed analyses with both the entire sample and limited to the 35 participants (22 male; age: 21.9 ± 2.72) who demonstrated at least some evidence of a subjective response to dAMPH versus placebo (DEQ Responders). The rationale for performing initial analyses excluding participants who lack a subjective response to dAMPH is that they might be qualitatively different, for instance due to atypical DAT functions. Inclusion of such participants in analyses related to subjective responses may hide real relations. However, we also report the results of our key analyses when the 11 Nonresponders were included in order to capture the full range of subjective responses and for comparability to prior studies of dAMPH that typically include Nonresponders in analyses.
2.2. Drug administration
Participants, themselves blind to drug administration order, received placebo for their first experimental PET session and a target dose of 0.43 mg/kg oral dAMPH during their second PET session (separated by a minimum of 24 h). The actual administered dose of dAMPH was rounded to the nearest 2.5 mg (mean actual dose: 30.5 mg, range: 20–42.5 mg) based on individual participants’ weight to achieve the targeted 0.43 mg/kg dose. We note that because our primary interest in the study was the relation between individual differences in DA measures and subjective responses, we used a standardized administration order instead of a counter-balanced design. This standardized administration order avoids the potential introduction of systematic variance across subjects caused by some subjects receiving dAMPH first, and others receiving it second (order effects). If order effects do exist (which is a reasonable possibility for a study with psychostimulants), the avoidance of this source of systematic variance makes the standardized administration order design more efficient for detecting relations among variables across subjects. Having the placebo occur first also avoided any lingering effect of the dAMPH across sessions.
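The dosing rule described above (a 0.43 mg/kg target, rounded to the nearest 2.5 mg) can be sketched as follows; the helper name and defaults are our own illustration, and the study does not specify how ties at the midpoint were broken:

```python
def damph_dose_mg(weight_kg, target_mg_per_kg=0.43, increment_mg=2.5):
    """Round a weight-based target dose to the nearest 2.5 mg increment.

    Hypothetical sketch of the dosing rule; not the study's dispensing code.
    """
    n_increments = round(target_mg_per_kg * weight_kg / increment_mg)
    return n_increments * increment_mg

# A 70 kg participant targets 30.1 mg, dispensed as 30.0 mg
dose = damph_dose_mg(70)
```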
2.3. Procedure
Participants were tested for pregnancy before each PET session. They were instructed not to eat for 3 h before the sessions to standardize drug absorption. Subjects completed the Drug Effects Questionnaire (DEQ; see below) 60, 120, 180, 270, and 345 min after ingesting the capsule. Plasma samples were obtained 60, 120, 180, and 270 min after capsule ingestion.
2.4. Drug effects questionnaire
Individuals rated each item on a 100 mm labeled magnitude scale (Lishner et al., 2008): 1) feel any substance effect(s) (“Feel”), 2) feel high (“High”), 3) like the effects (“Like”), 4) want more of the substance (“Want More”), from NOT AT ALL (0 mm) to MOST IMAGINABLE (100 mm). The Drug Effects Questionnaire (DEQ) has good psychometric properties (Morean et al., 2013) and is sensitive to the effects of dAMPH (Brauer et al., 1996; de Wit et al., 1986). DEQ values were recorded as proportions of the 100 mm scale (values range from 0 to 1). The placebo rating at each timepoint was subtracted from the corresponding post-dAMPH rating, such that all analyzed ratings reflect responses to dAMPH relative to placebo. We defined Nonresponders as having a maximum average DEQ rating (across all 4 subscales; DEQAll) < 0.10 (> 1 standard deviation below the mean DEQAll across all subjects). Across the dataset as a whole, DEQ
subscale ratings were highly correlated with one another (min \( \rho = 0.64 \) (between High and Like), max \( \rho = 0.84 \) (between Want More and Like); all \( p < 0.001 \)) suggesting consistency in DEQ\(_{\text{All}}\) measure. One Nonresponder subject, however, did show a divergence in DEQ High/Feel (positive) and DEQ Like/Want More (negative) to dAMPH resulting in a low DEQ\(_{\text{All}}\) score despite modestly positive High and Feel ratings (>0.10). As Nonresponders were included only in follow-up analyses this one case of divergence in DEQ ratings did not affect the clusters identified in our DEQ/PET regression analyses in Responders.
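The placebo correction and Nonresponder criterion described above can be sketched as follows (an illustrative example only; the data structures and names are our assumptions, not the authors' code):

```python
# Illustrative sketch: placebo-correct DEQ ratings and flag Nonresponders.
# Ratings are proportions of the 100 mm scale (0-1), one value per timepoint.

SUBSCALES = ("Feel", "High", "Like", "WantMore")

def placebo_corrected(damph: dict, placebo: dict) -> dict:
    """Subtract the same-timepoint placebo rating from each dAMPH rating."""
    return {scale: [d - p for d, p in zip(damph[scale], placebo[scale])]
            for scale in SUBSCALES}

def is_nonresponder(corrected: dict, cutoff: float = 0.10) -> bool:
    """Nonresponder: max over timepoints of the mean across all 4 subscales < cutoff."""
    n_times = len(corrected[SUBSCALES[0]])
    deq_all = [sum(corrected[s][t] for s in SUBSCALES) / len(SUBSCALES)
               for t in range(n_times)]
    return max(deq_all) < cutoff
```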
2.5. Peripheral amphetamine absorption measure
Plasma amphetamine levels were analyzed via a selegiline + metabolites assay conducted by NMS Laboratories.
2.6. Fallypride PET data acquisition
\([^{18}\text{F}]\)-fallypride ((S)-\(N\)-[(1-allyl-2-pyrrolidinyl)methyl]-5-(3-\([^{18}\text{F}]\)fluoropropyl)-2,3-dimethoxybenzamide) was produced in the radiochemistry laboratory attached to the PET unit, following synthesis and quality control procedures described in US Food and Drug Administration IND 47,245. Data were collected on one of two GE Discovery PET scanners located at Vanderbilt University Medical Center, with the first twelve subjects scanned on a Discovery LS model and the remainder (\( n = 34 \)) on a Discovery STE system. Both scanners possess similar in-plane resolution, but the Discovery STE has thinner axial slices (3.27 vs. 4.25 mm). All subjects received both their placebo and dAMPH scans on the same scanner system, and no differences were observed in BPnd measures across scanners (Buckholtz et al., 2010). Approximately 3 h after placebo or dAMPH administration, serial scan acquisition was started simultaneously with a 5.0 mCi slow bolus injection of the DA D2/3 tracer \([^{18}\text{F}]\)-fallypride (specific activity > 3000 Ci/mmol). CT scans were collected for attenuation correction prior to each of the three emission scans, which together lasted approximately 3.5 h with two breaks for subject comfort. With the PET scanner upgrade to the STE system that occurred after the first 12 subjects, the PET acquisition time protocol for the first dynamic run was slightly altered (see Supplemental Table S1). However, including PET scanner/acquisition type as a covariate did not alter any of the BPnd and \%ΔBPnd relationships we report below.
2.7. Fallypride PET data processing
After decay correction and attenuation correction, PET scan frames were corrected for motion using SPM8 (Friston et al., 1995) with the last dynamic image frame of the first series serving as the reference image. The mean PET image created from the realignment was then registered to each subject's high-resolution T1 MRI image (FLIRT, 6 degrees of freedom), which was nonlinearly registered to MNI space (FNIRT) in FSL (Smith et al., 2004). Putamen and cerebellum reference region ROIs were created from the WFU PickAtlas (Maldjian et al., 2003), with the cerebellum modified such that the anterior \( \frac{1}{3} \) of the ROI along with voxels within 5 mm of cortex were excluded to prevent contamination of the PET signal from nearby areas such as midbrain or occipital cortex. These reference region ROIs were then warped to each subject's PET space using the FLIRT and FNIRT FSL transform matrices (MNI → T1 → PET) and used in a simplified reference tissue model (SRTM (Lammertsma and Hume, 1996)) performed in PMOD software (PMOD Technologies, Zurich, Switzerland) to estimate fallypride binding potential (BPnd, a ratio of specifically bound fallypride to its free concentration). Specifically, PMOD's PXMOD tool was used to estimate BPnd voxel-wise using a published basis function fitting approach (Gunn et al., 1997). The cerebellum served as the reference region due to its relative lack of D2/3 receptors (Camps et al., 1989). The resulting BPnd maps for placebo/baseline and dAMPH days were linearly registered to one another (FLIRT, 6 degrees of freedom) and the difference in BPnd maps (\%ΔBPnd) after dAMPH was calculated as:
\[
\% \Delta \text{BPnd} = \frac{(\text{baseline BPnd} - \text{dAMPH BPnd})}{\text{baseline BPnd}} \times 100\%
\]
Thus, an increase in \%ΔBPnd corresponded to an increase in synaptic DA release. Subject-specific baseline BPnd and \%ΔBPnd images were then warped to MNI space using the saved FSL transforms to create MNI-normalized BPnd and \%ΔBPnd images (resampled to 2 mm isotropic voxels). These MNI-normalized images were then analyzed (using an explicit MNI brain mask) in SPM8 to test for their relation to subjective responses to dAMPH.
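The voxelwise calculation above can be sketched in a few lines (a minimal illustration, not the authors' PMOD/FSL pipeline; the zero-BPnd guard is our addition):

```python
import numpy as np

# Illustrative sketch: voxelwise %ΔBPnd from baseline and post-dAMPH BPnd maps,
# following the formula above. Positive values indicate fallypride
# displacement, i.e. DA release.

def percent_delta_bpnd(baseline: np.ndarray, damph: np.ndarray) -> np.ndarray:
    """%ΔBPnd = (baseline BPnd - dAMPH BPnd) / baseline BPnd * 100."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = (baseline - damph) / baseline * 100.0
    # Guard voxels with zero/empty baseline BPnd (e.g., outside the brain mask).
    return np.where(baseline > 0, out, 0.0)
```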
2.8. Data analysis
Taking a whole-brain approach, we regressed MNI-normalized placebo/baseline fallypride BPnd and \%ΔBPnd data against max DEQ ratings from each subscale separately using SPM8. Cluster-level significance was set at \( p < 0.05 \) family-wise error (FWE) corrected. For clusters showing a significant relationship between fallypride measures and max DEQ ratings, we extracted mean BPnd or \%ΔBPnd data using Marsbar (Brett et al., 2002) and subjected these values to a bootstrapped Pearson correlation (to identify 95% confidence intervals, CI) and multiple regression analyses (to test for the impact of potential confounds on BPnd) in SPSS. We covaried for potential confounds of plasma amphetamine levels, effective dAMPH dose, sex (known to impact DA signaling (Pohjalainen et al., 1998; Riccardi et al., 2006b)), and subject age (found to negatively correlate with BPnd (Mukherjee et al., 2002; Narendran et al., 2011)) in these follow-up regression analyses.
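The bootstrapped correlation used to derive the 95% CIs can be sketched as follows (a generic percentile-bootstrap illustration; the resample count and seed are our assumptions, not details taken from the analysis):

```python
import random
from statistics import mean

# Illustrative sketch: Pearson correlation with a 95% percentile-bootstrap CI,
# as applied to cluster-mean BPnd / %ΔBPnd values vs. max DEQ ratings.

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def bootstrap_ci(x, y, n_boot=5000, alpha=0.05, seed=0):
    """Percentile CI for r: resample subject pairs with replacement."""
    rng = random.Random(seed)
    n = len(x)
    rs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        rs.append(pearson_r([x[i] for i in idx], [y[i] for i in idx]))
    rs.sort()
    lo = rs[int((alpha / 2) * n_boot)]
    hi = rs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```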
3. Results
3.1. DEQ ratings, dAMPH responders, and sex distributions
Readers are directed to Smith et al. (2016) and Table S2 for details of DEQ ratings by Responder group. While the proportion of males and females varied across Responder groups (\( \chi^2 = 9.68, p = 0.002 \)), we note our female Responders did not differ from male Responders in their max DEQ ratings or in any of the PET relationships we report below. Also, the addition of sex as a predictor in our fallypride-DEQ regressions did not remove the relationships we observed.
3.2. Baseline DRD2/3 availability and DEQ ratings: relationship between vmPFC BPnd and DEQ\(_{\text{High}}\)
Regressing our placebo fallypride BPnd data on each max DEQ rating in our 35 DEQ Responders, we identified a large cluster (\( k = 388 \)) in vmPFC (MNI coordinates of max T value: −10, 40, −2; \( T = 4.52; p_{\text{FWE}} = 0.039 \)) showing a positive correlation with max DEQ High ratings (DEQ\(_{\text{High}}\), \( r = 0.57, p < 0.001; \text{CI}: 0.34, 0.75; \text{Fig. 1} \)). We observed no areas showing a negative relationship between BPnd and DEQ\(_{\text{High}}\). No area showed a positive or negative relationship between BPnd and the other DEQ ratings (Feel, Like, and Want More). To rule out possible confounds, we tested whether BPnd in the identified vmPFC cluster remained predictive after controlling for PET scanner type and timing differences (length of overall placebo PET acquisition time; see Methods, Table S1), peak plasma-amphetamine level, effective dAMPH dose, sex, and subject age. After controlling for these variables, there was no decline in the strength of this relationship.
3.3. dAMPH-induced DA release
Testing for areas showing significant dAMPH-induced DA displacement of $[^{18}\text{F}]$-fallypride, we performed a one-sample t-test in SPM8 on the %ΔBPnd data. Confirming previous work (Cropley et al., 2008; Riccardi et al., 2006a; Slifstein et al., 2010), we identified a large cluster ($k = 4874$) comprising the bilateral striatum and extending into the midbrain that displayed significant DA release (Fig. S1), in addition to areas in the temporal cortices and bilateral insula (Table S3).
3.4. Relationship between dAMPH-induced DA release and maximum DEQ ratings
Multiple regression analysis on each DEQ subscale revealed 3 clusters showing positive relationships between %ΔBPnd and max DEQ Want More ratings (DEQ$_{\text{Want}}$; Table 1A; Fig. 2) at a cluster-corrected $p_{\text{FWE}} < 0.05$. The 3 clusters localized to the right VS (extending ventrally into the area of the subcallosal gyrus and olfactory tubercle), the left insula, and the vmPFC. Although we did not observe a significant association in the left VS in the voxelwise analysis, we note that a post-hoc ROI analysis using the left VS from the WFU PickAtlas revealed a statistically significant relationship with DEQ$_{\text{Want}}$ in the same direction as the right VS ($r = 0.351$, $p = 0.039$, CI: 0.094, 0.556). Furthermore, we note that the vmPFC itself did not show statistically significant %ΔBPnd at the group level, though some subjects had positive %ΔBPnd here (Fig. S2). This was in contrast to the VS and left insula, where there was evidence of significant %ΔBPnd at the group level (Table S4). We observed no areas showing a negative relationship between %ΔBPnd and DEQ$_{\text{Want}}$. Importantly, as shown in Table 1B (also see Fig. S2), the inclusion of all subjects (including DEQ Nonresponders) in our DEQ$_{\text{Want}}$ ROI analyses did not alter the statistical significance of correlations between %ΔBPnd and DEQ$_{\text{Want}}$ ratings in the clusters from Table 1A. The addition of PET scan acquisition times between dAMPH and placebo sessions (Table S1), sex, age, dAMPH dose, and plasma amphetamine levels as predictors similarly did not alter these relationships.
3.5. Relationships to DEQ feel and like?
Using our a priori cluster-level threshold (see Methods), no areas were identified where either BPnd or %ΔBPnd related to max DEQ Feel and/or Like ratings. However, in a post-hoc follow-up analysis we observed that %ΔBPnd in the DEQ$_{\text{Want}}$ vmPFC cluster correlated positively with all DEQ max ratings (DEQ$_{\text{High}}$: $r = 0.37$, $p = 0.01$, CI: 0.093–0.617; DEQ$_{\text{Like}}$: $r = 0.34$, $p = 0.021$, CI: 0.083–0.561; DEQ$_{\text{Feel}}$: $r = 0.30$, $p = 0.042$, CI: 0.019–0.543), suggesting that this may be a common area at which dAMPH's effects on DA transmission are associated with its subjective effects. The stronger relationship with DEQ$_{\text{Want}}$ is evident, though: vmPFC %ΔBPnd still showed a relation to DEQ$_{\text{Want}}$ ($\beta = 0.59$, $R^2 = 0.28$) even after controlling for the other three DEQ ratings, and no other DEQ rating was significantly associated with vmPFC %ΔBPnd when DEQ$_{\text{Want}}$ was entered as the first predictor. Thus, although the vmPFC showed some modest associations with multiple ratings, %ΔBPnd in vmPFC was more strongly associated with dAMPH wanting than with other subjective effects.
4. Discussion
4.1. vmPFC DA and subjective responses to dAMPH
Higher vmPFC D2/3 BPnd on placebo was associated with higher
Table 1
Brain areas showing significant positive relationships between %ΔBPnd and max DEQ Want More ratings.
| Area (MNI coord at peak T value) | Cluster size (k) | Peak-level T value | p (FWE-corrected) | r (95% CI) |
|---------------------------------|------------------|--------------------|----------------|-------------|
| **A. DEQ Responders** | | | | |
| Right ventral striatum (4, 6, −8)| 275 | 4.39 | <0.001 | 0.59, (0.41, 0.76) |
| vmPFC (−4, 42, −6) | 195 | 4.24 | <0.001 | 0.68, (0.51, 0.81) |
| Left insula (−40, 2, −6) | 215 | 3.95 | <0.001 | 0.62, (0.39, 0.79) |
| Area | ROI Cluster size (k) | r, p (95% CI) |
|-----------------------------------|----------------------|---------------|
| **B. All subjects, ROI analyses**| | |
| Right ventral striatum | 275 | 0.33, 0.024 (0.05, 0.59) |
| vmPFC | 195 | 0.57, <0.001 (0.37, 0.71) |
| Left insula | 215 | 0.50, <0.001 (0.31, 0.68) |
A. Table reports areas identified via a positive regression analysis in SPM8. We report the MNI coordinates of the peak T value from the SPM as well as the cluster size and cluster-level significance for each area. In addition, we report the correlation value and 95% confidence interval between the mean %ΔBPnd in each cluster and max DEQWant ratings.
B. %ΔBPnd from the clusters identified in Responders were tested for relationships with max DEQ Want More ratings (DEQWant) in all subjects and the result of the correlations are reported along with 95% confidence intervals. Cluster size (k) is number of 2 mm isotropic voxels present in the cluster.
Fig. 2. %ΔBPnd is positively correlated with max DEQ Want More ratings in right ventral striatum, vmPFC, and left insula. Figure displays all significant clusters identified from the DEQWant regression run on Responders in SPM8, surviving cluster-level family-wise error correction at p < 0.05, overlaid on an MNI template brain (coordinates are in MNI space). See Table 1A for MNI coordinates, voxel size, and peak T-values of these clusters. Note positive %ΔBPnd reflects DA release.
DEQ High ratings in response to oral dAMPH. To our knowledge, this is the first study suggesting that individual differences in subjective responses to psychostimulants are related to individual differences in dopaminergic functional characteristics in the vmPFC. Of note, the vmPFC area identified here extends into the anterior cingulate and subgenual cingulate cortices, whose activity has been implicated in psychostimulant response (Breiter et al., 1997; Udo de Haes et al., 2007; Vollm et al., 2004) and sympathetic arousal (Beissner et al., 2013), providing further support for its potential importance in mediating variation in dAMPH subjective responses. A potential issue arises in interpreting vmPFC D2/3 BPnd measured in a placebo condition, since it is possible that expectancy alters DA functioning. This is particularly relevant given recent evidence that cocaine cues can cause DA release in the vmPFC/medial orbitofrontal region (Milella et al., 2016). However, given the stability of cortical BPnd estimates (Dunn et al., 2013), the fact that subjects knew there was only a 50% chance of receiving amphetamine, and that none had previously been exposed to dAMPH, it is reasonable to expect that most of the variance in vmPFC BPnd across subjects is driven by stable trait differences in DA function in this region. As such, we strongly suspect that the present findings reflect trait differences that influence sensitivity to experiencing a subjective high in response to dAMPH.
It is notable that we only observed significant placebo BPnd relationships with stimulated DEQHigh and not the other DEQ measures. Although there are significant correlations among the different DEQ variables, they are not identical. Subjective High is a complex phenomenon that can encompass the perception of multiple cognitive, autonomic, and mood experiences. As such, it is not synonymous with euphoria or liking. Indeed, although DEQHigh and DEQLike ratings were correlated across the entire study population, the relationship between the variables is not particularly tight (among Responders, High ratings were related to euphoria/liking only at a trend level: ρ = 0.29, p = 0.093).
4.2. vmPFC DA: individual differences without overall measurable DA release
Further evidence for the importance of the vmPFC to subjective responses comes from the changes in BPnd following dAMPH, where %ΔBPnd in vmPFC correlated with Want More ratings on dAMPH. Thus, high D2/3 receptor availability (BPnd) and greater changes in D2/3 binding (%ΔBPnd) in vmPFC were associated with multiple subjective responses to dAMPH with BPnd more related to drug “high” and %ΔBPnd related to drug “wanting”. We note however that interpretation of the positive relationship between vmPFC %ΔBPnd and DEQWant must be treated with caution as no significant %ΔBPnd was detected here after dAMPH at the group level. Given that $^{18}$F-fallypride is relatively weak at detecting small cortical changes in DA release, these apparent individual differences could reflect error in measurement. However, two pieces of data suggest otherwise. First, the BPnd estimates in the vmPFC showed reasonable stability across scan days: despite the fact that one scan had a drug manipulation, the placebo and dAMPH day BPnd data were as highly correlated in vmPFC ($r = 0.89$, $p < 0.001$) as they were in the right VS ($r = 0.91$, $p < 0.001$) and left insula ($r = 0.90$, $p < 0.001$) clusters across all subjects. Second, among Responders, vmPFC %ΔBPnd was also highly correlated with %ΔBPnd in VS: $r = 0.74$, $p < 0.001$ and insula: $r = 0.72$, $p < 0.001$, suggesting that common functional processes influence the %ΔBPnd response to dAMPH in these three regions (or, alternatively, a common unidentified methodological factor causes similar patterns across these striatal and extrastriatal regions).
One important methodological factor should be mentioned when interpreting the high incidence of positive and negative %ΔBPnd in the vmPFC. PET scanning was conducted during a period of relatively stable plasma amphetamine levels starting 3 h after drug administration, similar to previous oral dAMPH protocols with fallypride (Riccardi et al., 2006a). However, with scans continuing until over 6 h post dAMPH administration, these measurements may reflect not only initial DA release, but also compensatory or autoregulatory changes in DA, which could present as a seemingly paradoxical change in %ΔBPnd in some subjects. Importantly, there is precedent for dAMPH-induced increases and decreases in DA release in the vmPFC, as this has been observed in rodent studies using microdialysis (Hedou et al., 2001). Interestingly, in that work the presence of increases or decreases appears to depend on previous drug exposure, with decreases in DA occurring in drug-naive rats and increases in animals that had undergone drug sensitization (reflected in greater psychomotor responses to the drug). While our exclusion criteria should have resulted in negligible prior sensitization to psychostimulants (only 3 subjects reported any previous dAMPH-like psychostimulant use (2 used ephedrine, 1 Dexatrim), and removing the 1 subject with >4 previous psychostimulant uses did not alter any of our results), this animal work highlights the need to consider both increases and decreases in DA transmission in response to dAMPH, with those showing changes consistent with DA release potentially exhibiting greater behavioral responses to the drug.
4.3. VS DA release and dAMPH wanting
DA release in right VS positively correlated with DEQWant, suggesting that this region may be important in attributing incentive salience to dAMPH. Correlations between greater VS/striatal oral dAMPH-induced DA release and drug wanting have been seen previously (Buckholtz et al., 2010; Leyton et al., 2002), and this observation was expected given that the present sample included subjects from the Buckholtz et al. study (and therefore cannot be considered an independent replication). Nevertheless, these associations contrast with the work of Drevets et al., where euphoria induced by injected dAMPH correlated with VS DA release (Drevets et al., 2001), and highlight an ongoing debate as to the relationship between DA and reward processes – whether DA conveys the hedonic value of rewarding stimuli themselves (“euphoria” (Wise and Rompre, 1989)) or motivates the pursuit of rewards by attributing incentive salience to reward-related stimuli (“wanting” (Berridge, 2007)). However, given the high correlation of our measure of dAMPH Want More with High ($\rho = 0.72$) and Liking ($\rho = 0.84$), and the absence of an assessment of drug wanting in the work by Drevets et al., it is possible that VS DA release in that study could have correlated with wanting as well. Despite the high correlation between DEQ Like and Want More ratings in our data, we found no evidence of a relationship between VS DA release and dAMPH “liking”. Thus, our results are consistent with preclinical work showing that DA in the ventral striatum/nucleus accumbens attributes incentive salience to stimuli to promote “wanting”, not “liking” (Berridge, 2007; Wyvell and Berridge, 2000). An important observation in our VS data, however, is that DA release here was also high in DEQ Nonresponders (Table S4), suggesting that VS DA release occurs in most individuals after acute dAMPH but that the degree to which this release relates to dAMPH wanting may differ across individuals via mechanisms not yet determined.
Indeed the present data suggest that VS DA release and subjective response to dAMPH may be dissociated in the population of individuals who lack a subjective response to the oral administration of the drug. Further work is needed to understand how VS DA release could convey different subjective signals as part of larger functional circuits.
4.4. Insula DA release and dAMPH wanting
Despite a number of studies suggesting the insula’s importance in drug craving (Naqvi et al., 2014) and the perpetuation of addiction (Gaznick et al., 2014; Naqvi et al., 2007), a link between psychostimulant-induced DA release in the insula and subjective drug “wanting” (incentive salience; Robinson and Berridge, 1993) had not been shown previously. This is probably due to the inability of raclopride (the PET ligand predominantly used in dAMPH-DA release studies) to reliably measure DA release outside the striatum. The insula is thought to integrate interoceptive activity with other inputs to form a combined representation of homeostatically salient features of one’s internal and external environment (Craig, 2011) and serves as an important hub of a stimulus salience network in the brain (Uddin, 2015). Given this previous work, DA release in the insula may serve to convey the incentive salience value of dAMPH to the rest of the brain, promoting increased dAMPH wanting.
We acknowledge that our insula finding could result from potential partial volume effects in our PET data. The proximity of this structure to the putamen, an area with high fallypride BPnd, raises the possibility that spillover from the putamen could bias insula fallypride signal. However, we note that we observed no relationship between DEQ Want More and %ΔBPnd in the left putamen in our voxelwise analysis, suggesting anatomical specificity of our left insula finding. Future PET work investigating DA signaling in the insula should be mindful of the possibility of partial volume effects in this structure and take care to address them in their analysis and interpretation of any insula finding.
4.5. Role of VS, insula, and vmPFC DA in drug seeking: a network conveying subjective value and incentive salience
Models of drug seeking behavior propose that the insula responds to interoceptive signals of drug administration and vmPFC
reflects the drug's relative/subjective value, both critical processes in determining the incentive value placed on the drug (Naqvi and Bechara, 2010; Naqvi et al., 2014). Projections from these structures to VS help to motivate continued drug use even in the face of negative consequences (Seif et al., 2013), a key hallmark of drug addiction. Our findings fit with this conceptualization and implicate DA release in these structures in initial wanting responses to dAMPH, supporting a role for these areas in the incentive motivational circuitry that promotes drug seeking. Interestingly, placebo D2-receptor availability in these regions was not predictive of these wanting responses. Only placebo BPnd in the vmPFC predicted subjective experiences of drug-induced high. Thus, there is a partial dissociation between the impact of individual differences in vmPFC D2 receptors at placebo/baseline in predicting drug "High" and the impact of differences in DA release in the vmPFC, VS, and insula on drug wanting. How the nodes we have identified here coordinate their activity to promote pleasurable and drug-seeking effects will require further investigation with techniques that can measure the dynamics of neurochemical and neural activity over time.
4.6. Relationship to the D2/3 deficit model of addiction?
We observed no negative relationships between fallypride BPnd and dAMPH responsivity. The D2 deficit model of addiction posits that lower levels of D2 receptors may lead individuals to self-administer drugs of abuse to compensate for low DA tone (Reward Deficiency Syndrome; Blum et al., 2000). This hypothesis is based on observations that low D2/3 receptor levels are associated with heightened cocaine self-administration in rats and monkeys (Dalley et al., 2007; Nader et al., 2006) and that increasing D2 receptor levels can lower levels of cocaine self-administration (Thanos et al., 2008). While cocaine dependence has been associated with lower levels of D2/3 receptor availability (Martinez et al., 2004; Volkow et al., 1990, 1993) and dAMPH-induced DA release (Martinez et al., 2007) in the human striatum assessed with [11C]-raclopride PET, the relationship between D2/3 levels and human psychostimulant addiction risk remains unknown. In the current study, one might have expected that dAMPH Responder status or the level of subjective response(s) to dAMPH would be associated with lower levels of baseline/placebo striatal BPnd or dAMPH-induced DA release, if these are indeed measurable traits of psychostimulant addiction risk. This was not the case, however, which may be due to the fact that in all but 2 subjects (who had used ephedrine previously, see Methods) this was our participants' first exposure to a psychostimulant. It is possible that repeated dAMPH administrations are needed to alter D2/3 receptor levels to produce the commonly observed deficit in these receptors in human addicts. Alternatively, other aspects of DA signaling (beyond D2/3 receptor availability as measured with the D2/3 antagonists raclopride and fallypride) may be the key processes altered after psychostimulant exposure.
For example, other PET work suggests that D2/3 agonist binding potential (and the post-synaptic D2/3 receptor levels it is thought to represent) is not different in cocaine dependent individuals when compared to controls (Narendran et al., 2011). Further work is needed to determine whether differences in D2 signaling are indeed risk factors for developing stimulant addiction and what particular components in DA signaling are altered with repeated stimulant use in human subjects.
4.7. Subjective dAMPH responders vs nonresponders
To our knowledge, ours is the first PET study of the subjective effects of psychostimulants to analyze Responders separately from Nonresponders. We note that previous work has not focused on such divisions despite substantial heterogeneity in the subjective responses reported (Drevets et al., 2001; Leyton et al., 2002). One important distinction between our DEQ Responders and Nonresponders was that Nonresponders were predominantly female (~91%) while Responders were more frequently male (~63%). We note that the only other study (outside our group) to associate dAMPH wanting with DA release examined males exclusively (Leyton et al., 2002). Thus, if males are more responsive to the positive subjective effects of dAMPH, that study would have been biased toward a high proportion of Responders in its sample. Furthermore, relationships between dAMPH-induced DA release and a variety of behavioral and personality measures have been shown to vary across the sexes (Riccardi et al., 2006b, 2011), suggesting dAMPH's effects may not be consistent across males and females. In our sample, though, the reason why ~43% of females were Nonresponders is not currently clear. Female Responders and Nonresponders did not differ in age, dAMPH dose, or peak plasma amphetamine levels, nor in either placebo BPnd or %ΔBPnd from the clusters we identified in our DEQ regression analyses (max $t(21) = 1.36$, min $p = 0.19$). Furthermore, all females were tested in the early follicular phase of their menstrual cycle on both PET scan days, and no hormone that we measured (plasma estrogen, estradiol, or progesterone) significantly differed between female Responders and Nonresponders on either PET scan day.
Given literature suggesting a potential relationship between female hormones and DA signaling (Bazzett and Becker, 1994; Becker, 1990; Czoty et al., 2009; Di Paolo et al., 1988; McDermott et al., 1994; Nordstrom et al., 1998), though, the role of female hormone effects on PET measures of DA signaling should be investigated in future studies.
5. Conclusion
In conclusion, the data presented here suggest that variation in vmPFC DA signaling (baseline/placebo BPnd and %ΔBPnd) is related to the level of subjective effects reported after oral dAMPH. Specifically, higher vmPFC DA D2/3 receptor availability under placebo conditions is associated with greater self-reported High ratings after dAMPH, and DA release post dAMPH in the vmPFC is related to higher Want More drug ratings. Furthermore, we confirm a role for dAMPH-induced DA release in VS in drug wanting and identify, for the first time, a role for the left insula in this process as well. Taken together, our results suggest that dAMPH-induced DA release in a network of structures associated with value (vmPFC/VS) and interoceptive/affective (vmPFC/insula) processing may work in concert to convey incentive salience to dAMPH in drug-naïve individuals. Furthermore, differences in DA signaling in these regions may confer risk for abusing psychostimulants in the future.
6. Funding and disclosure
This work was supported by Award Numbers R01DA019670 (DHZ) and F32DA041157 (CTS) from the National Institute on Drug Abuse, and Award Numbers R01AG043458 and R01AG044848 from the National Institute on Aging (supporting CTS).
The authors declare no conflicts of interest or competing financial interests in relation to the work described.
Appendix A. Supplementary data
Supplementary data related to this article can be found at http://dx.doi.org/10.1016/j.neuropharm.2016.05.004.
References
Abi-Dargham, A., Kegeles, L.S., Martinez, D., Innis, R.B., Laruelle, M., 2003. Dopamine mediation of positive reinforcing effects of amphetamine in stimulant naive healthy volunteers: results from a large cohort. Eur. Neuropsychopharmacol. 13 (6), 459–468.
Bartra, O., McGuire, J.T., Kable, J.W., 2013. The valuation system: a coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. Neuroimage 76, 412–427.
Bazzett, T.J., Becker, J.B., 1994. Sex differences in the rapid and acute effects of estrogen on striatal D2 dopamine receptor binding. Brain Res. 637 (1–2), 163–172.
Becker, J.B., 1990. Estrogen rapidly potentiates amphetamine-induced striatal dopamine release and rotational behavior during microdialysis. Neurosci. Lett. 118 (2), 169–171.
Beissner, F., Meissner, K., Bar, K.J., Napadow, V., 2013. The autonomic brain: an activation likelihood estimation meta-analysis for central processing of autonomic function. J. Neurosci. 33 (25), 10503–10511.
Berridge, K.C., 2007. The debate over dopamine's role in reward: the case for incentive salience. Psychopharmacology 191 (3), 391–431.
Blum, K., Braverman, E.R., Holder, J.M., Lubar, J.F., Monastra, V.J., Miller, D., et al., 2000. Reward deficiency syndrome: a biogenetic model for the diagnosis and treatment of impulsive, addictive, and compulsive behaviors. J. Psychoact. Drugs 32 (Suppl.), i–iv, 1–112.
Brauer, L.H., Ambre, J., De Wit, H., 1996. Acute tolerance to subjective but not cardiovascular effects of d-amphetamine in normal, healthy men. J. Clin. Psychopharmacol. 16 (1), 72–76.
Breiter, H.C., Gollub, R.L., Weisskoff, R.M., Kennedy, D.N., Makris, N., Berke, J.D., et al., 1997. Acute effects of cocaine on human brain activity and emotion. Neuron 19 (3), 591–611.
Brett, M., Anton, J.L., Valabregue, R., Poline, J.B., 2002. Region of interest analysis using an SPM toolbox. Presented at the 8th International Conference on Functional Mapping of the Human Brain, Sendai, Japan. Neuroimage 16 (2).
Brown, W.A., Corriveau, P.J., Ebert, M.L.H., 1978. Acute psychologic and neuroendocrine effects of dextroamphetamine and methylphenidate. Psychopharmacology 58 (2), 189–195.
Buckholtz, J.W., Treadway, M.T., Cowan, R.L., Woodward, N.D., Li, R., Ansari, M.S., et al., 2010. Dopaminergic network differences in human impulsivity. Science 329 (5981), 532.
Camps, M., Cortes, R., Gueye, B., Probst, A., Palacios, J.M., 1989. Dopamine receptors in human brain: autoradiographic distribution of D2 sites. Neuroscience 28 (2), 275–290.
Chait, L.D., Uhlenhuth, E.H., Johanson, C.E., 1985. The discriminative stimulus and subjective effects of d-amphetamine in humans. Psychopharmacology 86 (3), 307–312.
Chait, L.D., Uhlenhuth, E.H., Johanson, C.E., 1989. Individual differences in the discriminative stimulus effects of d-amphetamine in humans. Drug Dev. Res. 16, 451–460.
Clithero, J.A., Rangel, A., 2014. Informatic parcelation of the network involved in the computation of subjective value. Soc. Cogn. Affect. Neurosci. 9 (9), 1289–1302.
Craig, A.D., 2011. Significance of the insula for the evolution of human awareness of feeling from the body. Ann. N. Y. Acad. Sci. 1225, 72–82.
Cropley, V.L., Smith, B., Nathan, P.J., Brown, A.K., Segal, N.L., Lermer, A., et al., 2008. Smaller effect of dopamine depletion on effect of dopamine depletion on [18F]fallypride binding in healthy humans. Synapse 62 (6), 399–408.
Czoty, P.W., Riddick N.V., Gage H.D., Sandridge, M., Nader, S.H., Garg, S., et al., 2009. Effect of menstrual cycle phase on dopamine D2 receptor availability in female cynomolgus monkeys. Neuropharmacology 34 (3), 548–554.
Dalley, J.W., Fryer, T.D., Brichard, L., Robinson, E.S., Theobald, D.E., Laane, K., et al., 2007. Nucleus accumbens D2/3 receptors predict trait impulsivity and cocaine reinforcement. Science 315 (5816), 1267–1270.
de Wit, H., Phillips, T.J., 2012. Do initial responses to drugs predict future use or abuse? Neurosci. Biobehav. Rev. 36 (6), 1565–1576.
de Wit, H., Schuster, M.H., Johanson, C.E., 1986. Individual differences in the reinforcing and subjective effects of amphetamine and diazepam. Drug Alcohol Depend. 16 (4), 341–360.
Di Paolo, T., Falardeau, P., Morissette, M., 1988. Striatal D-2 dopamine agonist binding sites fluctuate during the rat estrous cycle. Life Sci. 43 (8), 665–672.
Diekhof, E.K., Kaps, L., Falkai, P., Gruber, O., 2012. The role of the human ventral striatum and the medial orbitofrontal cortex in the representation of reward magnitude - an activation likelihood estimation meta-analysis of neuroimaging studies of passive reward expectancy and outcome processing. Neuropsychologia 50 (7), 1252–1266.
Domminse, C.S., Schulz, S.G., Prasimushchari, N., Blackard, W.G., Hamer, R.M., 1984. The acute pharmacological and behavioral response to dextroamphetamine in normal individuals. Biol. Psychiatry 19 (9), 1305–1315.
Drevets, W.C., Gautier, C., Price, J.C., Kupfer, D.J., Kinahan, P.E., Grace, A.A., et al., 2001. Amphetamine-induced dopamine release in human ventral striatum correlates with euphoria. Biol. Psychiatry 49 (2), 81–96.
Dunn, J.T., Clark-Papasavas, C., Marsden, P., Baker, S., Clej, M., Kapur, S., et al., 2013. Establishing test-retest reliability of an adapted ([18F])fallypride imaging protocol in older people. J. Cereb. Blood Flow Metab. 33 (7), 1098–1103.
First, M.B., Gibbon, M., Spitzer, R.L., Williams, J.B.W., Benjamin, L.S., 1997. Structured Clinical Interview for DSM-IV-TR Axis I Personality Disorders, (SCID-II). American Psychiatric Press, Inc., Washington, D.C.
Friston, K., Holmes, A.P., Worsley, K.J., Poline, J.-P., Frith, C.D., Frackowiak, R.S.J., 1995. Statistical parametric maps in functional imaging: a general linear approach. Hum. Brain Mapp. 2, 189–210.
Gunn, R.N., Lammertsma, A.A., Hume, S.P., Cunningham, V.J., 1997. Parametric imaging of ligand-receptor binding in PET using a simplified reference region model. NeuroImage 6 (4), 279–287.
Gaznick, N., Tranell, D.C., McNutt, A., Bechara, A., 2014. Basal ganglia plus insula damage yields stronger disruption of smoking addiction than basal ganglia damage alone. Nicotine Tob. Res. Off. J. Soc. Res. Nicotine Tob. 16 (4), 445–453.
Haber, S.N., Knutson, B., 2010. The reward circuit: linking primate anatomy and human imaging. Neuropharmacology 58 (1), 4–26.
Haertzen, C.A., Koehler, T.R., Miyasato, K., 1983. Reinforcement tests from the first drug experience can predict later drug habits and/or addiction: results with coffee, cigarettes, alcohol, barbiturates, mescal, and major tranquilizers, stimulants, marijuana, hallucinogens, heroin, opiates and cocaine. Drug Alcohol Depend. 11 (2), 147–165.
Hedou, G., Homberg, J., Feldon, J., Heidbreder, C.A., 2001. Expression of sensitization to amphetamine and dynamics of dopamine neurotransmission in different laminae of the rat medial prefrontal cortex. Neuropharmacology 40 (3), 366–382.
Jones, S.R., Gainetdinov, R.R., Wightman, R.M., Caron, M.G., 1998. Mechanisms of amphetamine action revealed in mice lacking the dopamine transporter. J. Neurosci. 18 (6), 1979–1986.
Kilts, C.D., Schweitzer, J.B., Quinn, C.K., Gross, R.E., Faber, T.L., Muhammad, F., et al., 2001. Neural activity related to drug craving in cocaine addiction. Arch. General Psychiatry 58 (4), 334–341.
Lambert, N.M., McLeod, M., Schenk, S., 2006. Subjective responses to initial experience with cocaine: an exploration of the incentive-sensitization theory of drug abuse. Addict. Abingdon Engl. 101 (5), 713–725.
Lammertsma, A.A., Hume, S.P., 1996. Simplified reference tissue model for PET receptor studies. NeuroImage 4 (3 Pt 1), 153–158.
Lee, T.W., Girshman, M., Sejnowski, T.J., 1999. Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources. Neural Comput. 11 (2), 417–441.
Leytou, M., Boileau, I., Benkefal, C., Diksic, M., Baker, G., Dagher, A., 2002. Amphetamine-induced increases in extracellular dopamine, drug wanting, and novelty seeking: a PET/[11C]clopipride study in healthy men. Neuropharmacology 27 (6), 1027–1035.
Lishner, D.A., Cozier, A.B., Zald, D.H., 2008. Addressing measurement limitations in affective rating scales: development of an empirical valence scale. Cogn. Emot. 22 (1), 180–192.
Maldjian, J.A., Laurienti, P.J., Kraft, R.A., Burdette, J.H., 2003. An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. NeuroImage 19 (3), 1233–1239.
Martinez, D., Broft, A., Foltin, R.W., Slifstein, M., Hwang, D.R., Huang, Y., et al., 2004. Cocaine dependence and D2 receptor availability in the functional subdivisions of the striatum: relationship with cocaine-seeking behavior. Neuropharmacology 29 (6), 1190–1202.
Martinez, D., Narendran, R., Foltin, R.W., Slifstein, M., Hwang, D.R., Broft, A., et al., 2007. Amphetamine-induced dopamine release: markedly blunted in cocaine dependence and predictive of the choice to self-administer cocaine. Am. J. Psychiatry 164 (4), 622–629.
McDermott, J., Liu, Y., Mizuno, D.E., 1994. Sex differences and effects of estrogen on dopamine and DOPA release from the striatum of male and female CD-1 mice. Exp. Neurol. 125 (2), 306–311.
Milella, M.S., Fotros, A., Gravel, P., Casey, K.F., Larcher, K., Verhaeghe, J.A., et al., 2016. Cocaine cue-induced dopamine release in the human prefrontal cortex. J. Psychiatry Neurosci. 41 (3), 150207.
Morean, M.E., de Wit, H., King, A.C., Sofuoglu, M., Rueger, S.Y., O’Malley, S.S., 2013. The drug effects questionnaire: psychometric support across three drug types. Psychopharmacology 227 (1), 177–192.
Mukherjee, J., Christian, B.T., Duggan, K.A., Shih, B., Narayanan, T.K., Satter, M., et al., 2002. Brain imaging of [18F]-fallypride in normal volunteers: blood analysis, discrimination, test-retest studies, and preliminary assessment of sensitivity to aging effects on dopamine D-2/D-3 receptors. Synapse 46 (3), 170–188.
Nader, M.A., Morgan, D., Gage, H.D., Nader, S.H., Calhoun, T.B., Buchheimer, N., et al., 2006. PET imaging of dopamine D2 receptors during chronic cocaine self-administration in monkeys. Nat. Neurosci. 9 (8), 1050–1056.
Naqvi, N.H., Bechara, A., 2010. The insula and drug addiction: an interoceptive view of pleasure, urges, and decision-making. Brain Struct. Funct. 214 (5–6), 435–450.
Naqvi, N.H., Gazznick, N., Tranell, D., Bechara, A., 2014. The insula: a critical neural substrate for craving and drug seeking under conflict and risk. Ann. N. Y. Acad. Sci. 1316, 53–70.
Naqvi, N.H., Rudrauf, D., Damasio, H., Bechara, A., 2007. Damage to the insula disrupts addiction to cigarette smoking. Sci. N. Y. 315 (5811), 531–534.
Narendran, R., Martinez, D., Mason, N.S., Lopresti, R.J., Himes, M.L., Chen, C.M., et al., 2011. Imaging of dopamine D2/3 agonist binding in cocaine dependence: a [11C]NPA positron emission tomography study. Synapse 65 (12), 1344–1349.
Nordstrom, A.L., Olsson, H., Halldin, C., 1998. A PET study of D2 dopamine receptor density at different phases of the menstrual cycle. Psychiatry Res. 83 (1), 1–6.
Pohjalainen, T., Rinne, J.O., Nagren, K., Syvalahti, E., Hietala, J., 1998. Sex differences in the striatal dopamine D2 receptor binding characteristics in vivo. Am. J. Psychiatry 155 (6), 768–773.
Riccanti, P., Balster, R., Salmon, R., Anderson, S., Ansari, M.S., Li, R., et al., 2008. Estimation of baseline dopamine D2 receptor occupancy in striatum and extrastriatal regions in humans with positron emission tomography with [18F] fallypride. Biol. Psychiatry 63 (2), 241–244.
Riccardi, P., Li, R., Ansari, M.S., Zald, D., Park, S., Dawant, B., et al., 2006a. Amphetamine-induced displacement of [18F] fallypride in striatum and extrastriatal regions in humans. *Neuropsychopharmacology* 31 (5), 1016–1026.
Riccardi, P., Park, S., Anderson, S., Doop, M., Ansari, M.S., Schmidt, D., et al., 2011. Sex differences in the relationship of regional dopamine release to affect and cognitive function in striatal and extrastriatal regions using positron emission tomography and [(18)F]fallypride. *Synapse* 65 (2), 99–102.
Riccardi, P., Zald, D., Li, R., Park, S., Ansari, M.S., Dawant, B., et al., 2006b. Sex differences in amphetamine-induced displacement of [(18)F]fallypride in striatal and extrastriatal regions: a PET study. *Am. J. Psychiatry* 163 (9), 1639–1641.
Robinson, T.E., Berridge, K.C., 1993. The neural basis of drug craving: an incentive-sensitization theory of addiction. *Brain Res. Rev.* 18 (3), 247–291.
Seif, T., Chang, S.J., Simms, J.A., Gibb, S.L., Dadgar, J., Chen, B.T., et al., 2013. Cortical activation of accumbens hyperpolarization-active NMDARs mediates aversion-resistant alcohol intake. *Nat. Neurosci.* 16 (8), 1094–1100.
Silfstein, M., Kegeles, L.S., Xu, X., Thompson, J.L., Urban, N., Castrillon, J., et al., 2010. Striatal and extrastriatal dopamine release measured with PET and [(18)F] fallypride. *Synapse* 64 (5), 350–362.
Smith, C.T., Weiler, J., Cowen, P.J., Fessler, R.M., Palmer, A.A., de Wit, H., et al., 2016. Individual differences in timing of peak subjective responses to d-amphetamine: relationship to pharmacokinetics and physiology. *J. Psychopharmacol. Oxf. Engl.* 30 (4), 330–343.
Smith, S.M., Jenkinson, M., Woolrich, M.W., Beckmann, C.F., Behrens, T.E., Johansen-Berg, H., et al., 2004. Advances in functional and structural MR image analysis and implementation as FSL. *NeuroImage* 23 (Suppl. 1), S208–S219.
Thanos, P.K., Michaelides, M., Umegaki, H., Volkow, N.D., 2008. D2R DNA transfer into the nucleus accumbens attenuates cocaine self-administration in rats. *Synapse* 62 (7), 481–486.
Uddin, L.Q., 2015. Salience processing and insular cortical function and dysfunction. *Nat. Rev. 16* (1), 55–61.
Udo de Haes, J.L., Maguire, R.P., Jager, P.L., Paans, A.M., den Boer, J.A., 2007. Methylenedioxymethamphetamine-induced activation of the anterior cingulate but not the striatum: a 11C-DASB PET study in healthy volunteers. *Hum. Brain Mapp.* 28 (7), 625–635.
Volkow, N.D., Fowler, J.S., Wang, G.J., Hitzemann, R., Logan, J., Schlyer, D.J., et al., 1993. Decreased dopamine D2 receptor availability is associated with reduced frontal metabolism in cocaine abusers. *Synapse* 14 (2), 169–173.
Volkow, N.D., Fowler, J.S., Wolf, A.P., Schlyer, D., Shiuie, C.Y., Alpert, R., et al., 1990. Effects of chronic cocaine abuse on postsynaptic dopamine receptors. *Am. J. Psychiatry* 147 (6), 719–724.
Volim, B.A., de Araujo, I.E., Cowen, P.J., Rolls, E.T., Kringelbach, M.L., Smith, K.A., et al., 2004. Methamphetamine activates reward circuitry in drug naive human subjects. *Neuropsychopharmacology* 29 (9), 1715–1722.
Wise, R.A., Rompre, P.P., 1989. Brain dopamine and reward. *Annu. Rev. Psychol.* 40, 191–225.
Wyvell, C.L., Berridge, K.C., 2000. Intra-accumbens amphetamine increases the conditioned incentive salience of sucrose reward: enhancement of reward “wanting” without enhanced “liking” or response reinforcement. *J. Neurosci.* 20 (21), 8122–8130.
|
SECTION 3.3 ALCOHOLIC BEVERAGES
AMENDING SECTION 1.12.5 ADMINISTRATIVE AMENDMENTS
AMENDING SECTION 2.5.6 NEIGHBORHOOD COMMERCIAL TO ALLOW OUTDOOR DINING
AMENDING SECTION 9.3 TO ADD OR AMEND DEFINITIONS
UNIFIED DEVELOPMENT CODE
AN ORDINANCE AMENDING PART II OF THE CITY OF IRVING LAND DEVELOPMENT CODE, “UNIFIED DEVELOPMENT CODE (UDC)” AS FOLLOWS: AMENDING SECTION 3.3 “ALCOHOLIC BEVERAGES; SALE, SERVING, OR STORAGE” TO AMEND REGULATIONS FOR THE SALE OF ALCOHOL FOR ON PREMISE CONSUMPTION IN THE CITY OF IRVING; AMENDING SECTION 1.12.5 “ADMINISTRATIVE AMENDMENTS” TO ADD PROVISIONS FOR AMENDMENTS TO APPROVED R-AB SITE PLANS; AMENDING SECTION 2.5.6 “NEIGHBORHOOD COMMERCIAL (C-N)” TO ALLOW FOR OUTDOOR DINING; AND AMENDING SECTION 9.3 “DEFINITIONS” TO ADD OR AMEND DEFINITIONS; PROVIDING FOR CONFLICT RESOLUTION, A SEVERABILITY CLAUSE, A SAVINGS CLAUSE, A PENALTY, AND AN EFFECTIVE DATE.
WHEREAS, food and beverage sales are critical to the financial success of local facilities and promote and enhance the use and enjoyment of such facilities by tourists, convention registrants, and residents; and
WHEREAS, adequate assurances of safe business practices will be obtained through thorough review and contract obligations of any vendors operating in City owned facilities; and
WHEREAS, on August 10, 2021 and September 9, 2021, the Irving Convention and Visitor's Bureau (ICVB) Destination and Development Committee expressed concerns about the barriers to opening and operating a restaurant with alcohol service, including the cost and time required by the existing rezoning process, the differences and inconsistencies among businesses in the city, and the barriers to serving alcohol outside of a food service establishment; and
WHEREAS, on September 27, 2021 the Irving Convention and Visitor’s Bureau (ICVB) Board of Directors adopted a resolution supporting amendments to regulations controlling the sale and serving of alcoholic beverages; and
WHEREAS, on October 1, 2021 the Hotel Association of North Texas provided a letter in support of amendments to the SP-1 (R-AB) zoning process; and
WHEREAS, on October 14, 2021 city staff briefed the City of Irving City Council on the proposed Alcoholic Beverage regulations; the Council expressed support thereof and directed staff to proceed with the adoption process; and
WHEREAS, on November 1, 2021 the Board of Directors of the Irving-Las Colinas Chamber of Commerce adopted a resolution supporting amendments; and
WHEREAS, on November 1, 2021 and January 18, 2022 city staff briefed the Planning and Zoning Commission on the proposed Alcoholic Beverage regulations who provided additional feedback; and
WHEREAS, the City has also received communications from individual businesses affected by the ordinance as written; and
WHEREAS, on August 4 and September 15, 2022 city staff briefed the City of Irving City Council on the proposed Alcoholic Beverage regulations, and the Council expressed support of the revised amendments thereof; and
WHEREAS, on November 10 and December 8, 2022 the Irving City Council reviewed a draft of the proposed alcoholic beverage regulations, expressed support thereof with some revisions, and directed staff to proceed with the adoption process; and
WHEREAS, on January 17, 2023 city staff briefed the Planning and Zoning Commission on the proposed Alcoholic Beverage regulations, who expressed support thereof, and directed staff to proceed with the adoption process; and
WHEREAS, on February 6, 2023, after notice and public hearing, the Planning and Zoning Commission considered the proposed amendments and made their final report; and
WHEREAS, after notice and public hearing, and upon consideration of the recommendation of the Planning and Zoning Commission and of all testimony and information submitted during the public hearing, the City Council has determined that amending Unified Development Code Sections 3.3 “Alcoholic Beverages; Sale, Serving, or Storage,” Section 1.12.5 “Administrative Amendments,” Section 2.5.6 “Neighborhood Commercial (C-N),” and Section 9.3 “Definitions” is in accordance with the comprehensive plan, is in the best interest of the public, and is for the purpose of promoting the health, safety, morals, and general welfare of the citizens and protecting and preserving places and areas of historical, cultural, or architectural importance and significance.
NOW, THEREFORE, BE IT ORDAINED BY THE CITY COUNCIL OF THE CITY OF IRVING, TEXAS:
SECTION 1: That Section 3.3 “Alcoholic Beverages: Sale, Serving, or Storage” of the City of Irving Unified Development Code is amended to read as follows:
3.3 **Alcoholic Beverages**
3.3.1 Notwithstanding any other provision of this ordinance, the storage, possession, sale, serving, or consumption of any alcoholic beverages, when permitted by the laws of the State of Texas, shall be regulated and governed by the use regulations and requirements within this Section. The Texas Alcoholic Beverage Commission may be abbreviated as TABC throughout this Section.
3.3.2 **Uses Permitted.** After compliance with all codes of the City of Irving, compliance with the Texas Alcoholic Beverage Code, compliance with Texas Alcoholic Beverage Commission rules and regulations, and receipt of a Certificate of Occupancy, an Alcohol Beverage Establishment may operate within a zoning district in accordance with this section.
a) Restaurants, hotels, retail, service, or entertainment establishments identified as a permitted use in zoning districts as provided in Section 2.5.2, Nonresidential Land Use Table, shall be permitted to sell alcohol for on premises consumption at a ratio of a maximum of 40% food gross revenue to 60% alcohol gross revenue with R-AB zoning as provided in Sect. 3.3.3 below.
b) Restaurants, retail, service, or entertainment establishments identified as a permitted use within the Urban Business Overlay District (UB), the Heritage Crossing District (HCD) including properties zoned S-P-1/R-AB within the HCD perimeter, and Planned Unit Development District 6 (PUD6) are permitted to sell alcohol for on premises consumption at a ratio of a maximum of 30% food gross revenue to 70% alcohol gross revenue with R-AB zoning as provided in Sect. 3.3.3 below.
c) It shall be unlawful for any person to manufacture, distill, brew, import, transport, or store any alcoholic beverages for purposes of sale or distribution in any residentially zoned district within the City of Irving.
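Read purely as arithmetic, the ratio caps in subsections a) and b) above can be illustrated with a short compliance sketch. This sketch is not part of the ordinance: the district labels and the reading of the caps as a maximum alcohol share of combined food and alcohol gross revenue (60% generally, 70% in UB, HCD, and PUD6) are assumptions for illustration only.

```python
# Illustrative sketch only: interprets the Sect. 3.3.2 ratio caps as a
# maximum alcohol share of combined food + alcohol gross revenue.
# The district grouping below is an assumption for demonstration.

def alcohol_share_ok(food_revenue: float, alcohol_revenue: float,
                     district: str) -> bool:
    """Return True if alcohol's share of combined gross revenue is
    within the cap assumed for the district."""
    # Assumed reading: 70% cap in UB, HCD, and PUD6; 60% cap elsewhere
    cap = 0.70 if district in {"UB", "HCD", "PUD6"} else 0.60
    total = food_revenue + alcohol_revenue
    if total == 0:
        return True  # nothing sold, nothing to violate
    return alcohol_revenue / total <= cap

# $40k food / $60k alcohol sits exactly at the general 60% cap
print(alcohol_share_ok(40_000, 60_000, "C-N"))   # True
print(alcohol_share_ok(25_000, 75_000, "C-N"))   # False (75% alcohol)
print(alcohol_share_ok(30_000, 70_000, "PUD6"))  # True under the 70% cap
```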
3.3.3 Application
a) Restaurant With Attendant Accessory Use Of The Sale Of Alcoholic Beverages For On-Premises Consumption (R-AB) Zoning Required. The storage, possession, sale, serving, or consumption of any alcoholic beverages to be sold or served by the holder of a mixed beverage permit or the holder of a private club permit issued by the State of Texas, in bottles or any other container direct to the customer or person for consumption on the premises of the holder of a mixed beverage permit or in a private club, shall be permitted only in a restaurant as defined in Chapter 9, Definitions, within a S-P-1 site plan district under section 2.7.3 of this ordinance after the applicant has made a written request for a change in zoning under said section 2.7.3 of this ordinance to permit such use.
b) Application.
1) All persons applying for a zoning designation of S-P-1 (R-AB) pursuant to this section shall sign an application that includes all material required to be submitted by this ordinance.
2) A nonrefundable filing fee according to the latest fee schedule approved by the City Council shall accompany each application for S-P-1 (R-AB) zoning.
3) Failure to submit complete plans, data, and information required to accompany a zoning application by this section 3.3 within three (3) months of filing of the case shall result in a presumption that the case has been withdrawn, and city staff may close the file and process same no further.
c) Required Submittals. The site plan to be submitted pursuant to said section 2.7.3 shall satisfy all of the requirements of section 2.7.3 and the following additional requirements:
1) The specifically delineated area to be zoned for restaurant S-P-1 (R-AB) and all areas necessary to provide adequate and necessary ingress-egress and parking. Only within the area specifically delineated (R-AB) may alcoholic beverages be sold for consumption on premises. Provided, however, the holder of a mixed beverage permit operating an accessory use within a hotel that includes the zoning designation of S-P-1 (R-AB) may deliver mixed
beverages, including wine and beer, to individual rooms of the hotel pursuant to section 28.01(b) of the Alcoholic Beverage Code of the State of Texas.
2) Narrative description of the planned activities in the restaurant which includes projected breakdown of revenues between food sales and sales of alcoholic beverages and any use of the restaurant premises for dancing, gaming devices, and/or electronic amusement games.
d) All persons applying for and receiving approval of S-P-1 (R-AB) zoning under this ordinance shall commence construction as evidenced by receipt of a building permit for the restaurant in accordance with the approved site plan within twelve (12) months of the zoning being approved. The city reserves the right and the applicant shall acknowledge the right of the city to rezone subject property in the event construction is not commenced within the stated twelve-month period.
3.3.4 Amendments to S-P-1 R-AB district. Minor amendments and adjustments may be made to a R-AB district as permitted in section 1.12.5. Any change to a R-AB district that does not qualify for an administrative amendment shall complete the rezoning process for City Council consideration of the change.
3.3.5 TABC Permit or License Required. No person shall sell alcoholic beverages within the city without obtaining a city certification to sell alcoholic beverages at a specific address, maintaining a valid TABC license or permit for that location, and paying all appropriate fees to the City. A TABC license or permit does not grant the holder any right to violate the city’s zoning ordinance or any other city regulations.
a) Fees. Upon application for certification from the city, the applicant shall pay the City a fee in the maximum amount permitted by law for the particular license or permit issued by the Texas Alcoholic Beverage Commission, except when said fee is waived according to the provisions of the Texas Alcoholic Beverage Code. Following payment of the fee and certification of compliance with this ordinance, as set forth herein, the City Secretary shall certify the TABC license/permit application for that location. A refund of the fees levied under this section may not be made for any reason.
b) Permit Renewals. Within 30 days of confirmed renewal of a TABC license or permit, the operator shall submit to the city: a) a copy of the license/permit renewal as provided by TABC and b) the appropriate fee due to the city. If TABC requires certification by the city, it will not be considered a renewal and shall be processed as an initial application.
c) Change of business name, location, or ownership. Upon change of business name, location, or ownership, any person selling alcohol in the City of Irving shall provide the city a copy of the completed TABC Location Packet for Reporting Changes or Business Packet for Reporting Changes, and any fee, if applicable. Any change in the operations of an establishment covered by this section that requires a change in the TABC license shall also be submitted to the city to update the record of the permit. If TABC requires certification by the city, the change may require the completion of an initial application as provided in this section.
3.3.6 Sales Near Protected Uses.
a) Religious Facility, School, Hospital or Residential. The sales and serving for on-premises consumption and retail sales for off-premises consumption shall not be permitted within 300 feet of a religious facility, public or private school, or public hospital. The sales of alcohol for on-premises consumption with a mixed beverage permit or on the premises of a private club shall not be permitted within three hundred (300) feet of any property zoned or classified R-40, R-15, R-10, R-7.5, R-6, R-3.5, R-2.5, R-MF, R-MF-1, R-MF-2, R-MF-3, R-TH, R-MH, R-ZL, R-PH, and R-XF and any property actually used for residential purposes irrespective of its zoning category.
b) Exemptions. The regulations contained in this subsection shall not apply when the business for which a permit or license is requested is located on property within the Urban Business Overlay District, Planned Unit Development (PUD) 6, is zoned or has a development plan for Transit Oriented Development District, or is a City-owned property.
c) Measurements.
1) The measurement of the distance between the place of business where alcoholic beverages are sold and a religious facility, public hospital, or R district or residential use as provided in 3.3.6 a) shall be along the property lines of the street fronts and from front door to front door, and in direct lines across intersections.
2) The measurement of the distance between the place of business where alcoholic beverages are sold and a public or private school shall be:
a. in a direct line from the property line of the public or private school to the property line of the place of business, and in a direct line across intersections; or
b. if the permit or license holder is located on or above the fifth story of a multistory building, in a direct line from the property line of the public or private school to the property line of the place of business, in a direct line across intersections, and vertically up the building at the property line to the base of the floor on which the permit or license holder is located.
d) Measurement Exhibits.
Exhibits 1 and 2
Case 1: Alcohol Beverage Establishment and Religious Institution from Front Door to Front Door, exiting Lot by shortest route, following legal path and entering adjacent lot and heading towards front door by shortest route.
Case 2: Alcohol Beverage Establishment and Religious Institution from Front Door to Front Door, exiting Lot by shortest route, following legal pedestrian path to cross the street and bypass unrelated properties, and entering adjacent lot and heading towards front door by shortest route.
Exhibits 3 and 4
Case 3: Alcohol Beverage Establishment and Religious Institution on Same Lot Measure from Door to Door by Shortest Legal Route.
Case 4: Alcohol Beverage Establishment and School Measure from Lot line to Lot line by Shortest Physical Distance.
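The school measurement in subsection c)(2)(a) above calls for a direct line from the school's property line to the business's property line. Setting aside the across-intersections language, a straight-line property-line distance between two disjoint lots can be sketched as below; the lot coordinates and the disjoint-rectangular-lot simplification are assumptions for illustration only, not survey methodology.

```python
# Illustrative sketch only: minimum straight-line distance between two
# disjoint property-line polygons, per the direct-line school rule in
# 3.3.6 c)(2)(a). Lot coordinates (in feet) are hypothetical.
from math import hypot

def _pt_seg_dist(px, py, ax, ay, bx, by):
    """Distance from point P to segment AB."""
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return hypot(px - ax, py - ay)
    # Project P onto AB, clamped to the segment endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

def property_line_distance(poly_a, poly_b):
    """Minimum distance between two disjoint polygons, each a list of
    (x, y) vertices; checks every vertex against every opposing edge."""
    best = float("inf")
    for poly1, poly2 in ((poly_a, poly_b), (poly_b, poly_a)):
        n = len(poly2)
        for px, py in poly1:
            for i in range(n):
                ax, ay = poly2[i]
                bx, by = poly2[(i + 1) % n]
                best = min(best, _pt_seg_dist(px, py, ax, ay, bx, by))
    return best

# Two square lots whose nearest property lines are 250 ft apart:
# fails the 300-ft separation requirement
school = [(0, 0), (100, 0), (100, 100), (0, 100)]
business = [(350, 0), (450, 0), (450, 100), (350, 100)]
d = property_line_distance(school, business)
print(d, d >= 300)  # 250.0 False
```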
e) Variances. The city council may grant a variance to 3.3.6(a) if it determines that enforcement of the regulation in a particular instance is not in the best interest of the public, constitutes waste or inefficient use of land or other resources, creates an undue hardship on an applicant for a license or permit, does not serve its intended purpose, is not effective or necessary, or for any other reason the Council, after consideration of the health, safety, and welfare of the public and the equities of the situation, determines is in the best interest of the community.
1) Applications for an alcohol distance variance request shall be heard as a public hearing before the City Council.
2) Notice of the variance request shall be mailed to all property owners within five hundred (500) feet of the property from which the alcohol distance variance is being requested, according to the latest approved city tax roll.
3.3.7 Reporting Gross Sales.
a) Annual Report. The person operating a restaurant selling alcohol for on premises consumption with a zoning designation of S-P-1 (R-AB) shall, on an annual basis and no later than the thirtieth (30th) day of January, file with the city secretary an affidavit on an officially approved form provided by the city secretary that reflects gross sales for the preceding twelve-month period, or since the restaurant began its operation, whichever is shorter, breaking down the sales between the sale of food and the sale of alcoholic beverages.
1) For purposes of breaking down the sales between food and alcoholic beverages, sales taxes, alcoholic beverage taxes and any other applicable taxes or fees shall not be included in the calculations.
2) The city reserves the right to request persons operating a restaurant with a zoning district designation of S-P-1 (R-AB) to submit an annual audit of the gross sales broken down between food sales and mixed beverages sales at the person's expense. All filings including all sales and beverage tax filings shall remain confidential.
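Because subsection a)(1) above excludes sales taxes and alcoholic beverage taxes from the breakdown, the reported figures must be backed out of tax-inclusive receipts before the food/alcohol split is computed. The following sketch illustrates that arithmetic; the 8.25% sales tax and 6.7% mixed beverage sales tax rates are assumptions for demonstration, not figures from the ordinance.

```python
# Illustrative sketch only: strips assumed tax rates out of gross
# receipts before computing the food/alcohol breakdown required by
# 3.3.7 a)(1). Both tax rates below are hypothetical placeholders.

def report_breakdown(food_receipts: float, alcohol_receipts: float,
                     sales_tax: float = 0.0825, beverage_tax: float = 0.067):
    """Return tax-exclusive food sales, alcohol sales, and alcohol's
    share of the combined tax-exclusive total."""
    food_net = food_receipts / (1 + sales_tax)          # remove sales tax
    alcohol_net = alcohol_receipts / (1 + beverage_tax)  # remove beverage tax
    share = alcohol_net / (food_net + alcohol_net)
    return round(food_net, 2), round(alcohol_net, 2), round(share, 4)

food, alcohol, share = report_breakdown(108_250.00, 64_020.00)
print(food, alcohol, share)  # 100000.0 60000.0 0.375
```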
b) Copies of TABC Reports. The person operating a restaurant with a zoning designation of S-P-1 (R-AB) shall on an annual basis file with the city secretary a copy of the filings supplied to the State of Texas (TABC) for sales tax and mixed beverage (alcoholic beverages) tax purposes.
c) Audit. The city shall retain the right to request an audit of applicable records to determine if a business is violating this chapter or any provision of the Unified Development Code. The person operating a restaurant with a zoning designation of S-P-1 (R-AB) shall permit the city treasurer to view the books, records, and receipts relative to sale of food or nonfood revenue and alcoholic beverages at any time after four (4) hours' notice. The city attorney, city manager, city council, city treasurer, mayor or city secretary may examine said records. Said records may be introduced in
court for the purpose of showing the person operating a restaurant with a zoning designation of S-P-1 (R-AB) is in violation of this ordinance.
d) **Public Entertainment Facility** (PEF).
1) Premises which include restaurants with attendant accessory uses of the sale of alcoholic beverages for on-premises consumption shall be a PEF if they meet all of the following:
a. Located in the Urban Business Overlay District;
b. Comprises a single, undivided tract of at least fifteen (15) acres;
c. Contains a public entertainment facility ("PEF"), as defined by Section 108.73, Texas Alcoholic Beverage Code; and
d. Zoned S-P-1 (R-AB).
2) On a PEF premises, the combined gross sales in Irving from alcoholic beverages for the entire PEF premises on an annual basis may be seventy (70) percent or less of the combined total sales of food and alcoholic beverages for the entire PEF premises. For the purposes of subsection 3.3.7, an owner or operator of a PEF premises shall report a combined total of all food and alcoholic beverage sales for all of the establishments contained within the PEF premises and a breakdown for each establishment within the PEF premises, whether or not there is more than one mixed beverage or private club permit holder.
3) The owner or operator of a proposed PEF premises applying for S-P-1 (R-AB) zoning to allow restaurants with attendant accessory uses of alcoholic beverages for on-premises consumption shall comply with all the requirements of subsection 3.3.3, and shall comply with all applicable requirements of section 2.7.3.
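The PEF reporting rule in subsection d)(2) above applies its 70% threshold to the premises as a whole, so an individual establishment may exceed 70% alcohol sales as long as the combined total does not. A minimal aggregation sketch, with hypothetical establishment names and figures:

```python
# Illustrative sketch only: aggregates per-establishment sales across a
# PEF premises and checks the combined 70% alcohol-share threshold of
# subsection 3.3.7 d)(2). Names and dollar figures are hypothetical.

def pef_combined_share(establishments):
    """establishments: list of (name, food_sales, alcohol_sales).
    Returns (combined alcohol share, True if at or under 70%)."""
    food = sum(e[1] for e in establishments)
    alcohol = sum(e[2] for e in establishments)
    share = alcohol / (food + alcohol)
    return share, share <= 0.70

venues = [("Concert Hall Bar", 10_000, 90_000),  # 90% alcohol on its own
          ("Plaza Grill", 150_000, 50_000)]      # mostly food
share, ok = pef_combined_share(venues)
print(f"{share:.2%} combined -> {'OK' if ok else 'over the 70% cap'}")
```

Note that the bar alone is 90% alcohol, yet the premises-wide share is roughly 46.7%, which is why the ordinance requires both the combined total and a per-establishment breakdown.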
3.3.8 **City-Owned Properties and Facilities Funded by Hotel Occupancy Tax.** The on-premise storage, possession, sale, serving, and consumption of any alcoholic beverage is authorized and a permitted accessory use in any city-owned facility and/or any facility whose construction or operation is funded in whole or in part by Hotel Occupancy Tax revenue. The on-premise storage, possession, sale, serving, and consumption of any alcoholic beverage in any such facility is an exception to the provisions of section 3.3.3.
SECTION 2. That Section 1.12.5 “Administrative Amendments”, of the City of Irving Unified Development Code is amended to add Subsections (d)(6) and (e)(5) as follows:
1.12.5 Administrative Amendments
d) Minor Changes to Site Plans in SP, S-P-1 and S-P-2 Site Plan Districts except those based on mixed use or TOD Transit Oriented District districts. Minor amendments may be accepted to an approved site plan which comply with those general parameters listed in section c) above in addition to the below provisions that:
6) For S-P-1 R-AB site plan zoning districts, allow flexibility in the usage of the premises. Items that may be revised without the completion of the rezoning process include:
i. Changes of the interior design plan, provided the adjustments of areas for uses remain proportional to the approved site plan
ii. Changes in the elevation or exterior features
iii. Changes in the menu, restaurant and operator with a valid Certificate of Occupancy. The administrative amendment process cannot add or revise allowed uses.
e) Minor Changes to S-P-1 and S-P-2 Site Plan Districts based on mixed use districts and TOD Transit Oriented District General Plans or Detail Plans. Minor amendments may be accepted to an approved site plan in the Heritage Crossing District (HCD), TOD and any future mixed-use and/or form-based districts which comply to those general parameters listed in section c) above in addition to the below provisions that:
5) For S-P-1 R-AB site plan zoning districts, allow flexibility in the usage of the premises. Items that may be revised without the completion of the rezoning process include:
i. Changes of the interior design plan, provided the adjustments of areas for uses remain proportional to the approved site plan
ii. Changes in the elevation or exterior features
iii. Changes in the menu, restaurant and operator with a valid Certificate of Occupancy. The administrative amendment process cannot add or revise allowed uses.
SECTION 3. That Section 2.5.6 “Neighborhood Commercial (C-N)” of the City of Irving Unified Development Code is amended to revise subsection (a)(2) to read as follows:
2.5.6 Neighborhood Commercial (C-N)
a) Principal uses. The following uses shall be permitted as principal uses:
2. Café, restaurant, or cafeteria. Outdoor dining shall be permitted. Except outdoor dining shall not be permitted closer than 250 feet and no amplified music shall be operated within 500 feet of a single family zoned lot, both as measured at the closest edge of the patio space of the outdoor dining service to the residential property line. This limitation does not apply when the patio is fully screened from the residential property by a permanent building or to residential zoned properties used for nonresidential purposes.
SECTION 4. That Section 9.3 “Definitions” of the City of Irving Unified Development Code is amended to amend existing definitions and add new definitions as follows. All definitions shall be renumbered to retain their alphabetical order.
Eating establishment shall include, but not be limited to, a restaurant, cafeteria, convention center, hotel, entertainment center or a Public Entertainment Facility as defined in Section
108.73, Texas Alcoholic Beverage Code, wherein alcoholic beverages are sold on the permitted premises.
**Hotel.** For purposes of Section 3.3, *hotel* means the premises of an establishment:
1) Where in consideration of payment, travelers are furnished food and lodging; and
2) In which are located at least ten (10) adequately furnished, completely separate rooms with adequate facilities so comfortably disposed that persons usually apply for and receive overnight accommodations in the establishment, either in the course of usual and regular travel or as a residence.
**Private Club** shall mean an establishment as qualified by Chapter 32 of the Texas Alcoholic Beverage Code for the operation of a social organization to which membership is by invitation only, and its meeting place in which only members and their guests are permitted.
**Private school** means a private school, including a parochial school, that:
1) Offers a course of instruction for students in one (1) or more grades from kindergarten through grade 12; and
2) Has more than one hundred (100) students enrolled and attending courses at a single location.
**Restaurant** shall mean a place of business open to the public for the provision of food and beverages to customers for compensation. A restaurant shall: provide food sales and service as the source of revenue; delineate areas for permanent seating and serving of patrons; and include a full kitchen or otherwise install appropriate kitchen facilities for preparation of a permanent menu which provides an assortment of foods for sale and consumption. Restaurants intending to provide alcoholic beverage service for consumption on the premises shall operate only as a Restaurant with attendant accessory use of the sale of alcoholic beverages for on-premises consumption (RAB) per section 3.3.5, including manufacture of such beverages on the premises, and shall hold an appropriate permit issued by the Texas Alcoholic Beverage Commission in accordance with the Texas Alcoholic Beverage Code, as amended, for the operation. Dancing and entertainment uses may be operated as accessory uses provided the activities do not displace the locations for primary food service activities.
SECTION 5. That this ordinance shall be and is hereby declared to be cumulative of all other ordinances of the City of Irving, and this ordinance shall not operate to repeal or affect any of such other ordinances except insofar as the provisions thereof might be inconsistent or in conflict with the provisions of this ordinance, in which event such conflicting provisions, if any, in such other ordinance or ordinances are hereby repealed.
SECTION 6. Should any paragraph, sentence, clause, phrase, or section of this ordinance be adjudged or held to be unconstitutional, illegal or invalid, the same shall not affect the validity of this ordinance as a whole or any part or provision thereof, other than the part so declared to be invalid, illegal, or unconstitutional, and shall not affect the validity of the comprehensive zoning ordinance as a whole.
SECTION 7. That nothing in this ordinance shall be construed to affect any suit or proceeding pending in any court, or any rights acquired, or liability incurred, or any cause or causes
of action acquired or existing, under any act or prior ordinance; nor shall any legal right or remedy of any character be lost, impaired, or affected by this ordinance.
SECTION 8. That all regulations contained in Unified Development Code Section 3.3 (Alcoholic beverages; Sale, Serving, or Storage) shall be retained in their entirety in the City of Irving Land Development Code, Part V, Repealed Zoning Districts.
SECTION 9. That any person violating or failing to comply with any provisions of this ordinance shall be fined, upon conviction, not less than one dollar ($1.00) nor more than two thousand dollars ($2,000.00), and each day any violation or noncompliance continues shall constitute a separate offense.
SECTION 10. That this ordinance shall take effect upon adoption and shall be published in accordance with the provisions of the Texas Local Government Code and the Irving City Charter.
PASSED AND APPROVED BY THE CITY COUNCIL OF THE CITY OF IRVING, TEXAS, on February 9, 2023.
RICHARD H. STOPFER
MAYOR
ATTEST:
Shanae Jennings
City Secretary/Chief Compliance Officer
APPROVED AS TO FORM:
Kuruvilla Oommen
City Attorney
[1] This appeal involves a boundary dispute between neighbours. The parties (for convenience, “parties” shall include Mr. Majewsky’s mother) acquired neighbouring acreages of land near Markdale, Ontario, during the 1980s. In 2008, the appellants, the Veverises, observed an aerial photograph that led them to believe their neighbour, the respondent, Mr. Majewsky (who had by then acquired title from his mother), was encroaching on their land. The encroachments were confirmed through a survey obtained later that year.
[2] The trial judge found that the respondent had acquired possessory title to an area of the appellants' land on which a portion of the respondent's house, outbuildings and yard were situate ("the house lands"). The trial judge also held that the respondent was entitled to prescriptive easements, based on the doctrine of lost modern grant, over a portion of a laneway and a cedar trail that encroached on the appellants' lands.
[3] The appellants raise multiple issues on appeal: five concerning the house lands, five concerning the cedar trail and one concerning the laneway.\(^1\) We address these arguments in turn.\(^2\)
**The House Lands**
[4] First, the appellants argue that the trial judge erred in finding that the respondent had acquired possessory title to the house lands because, they say, the weight of the evidence demonstrated that the respondent's (or his mother's) possession of such lands was not adverse, but rather was with the permission of the appellants.
---
\(^1\) In oral argument, counsel for the appellants confirmed they were not pursuing the twelfth argument as set out in their factum.
\(^2\) No issue was raised on appeal concerning the trial judge's interpretation of s. 5(4) of the *Real Property Limitations Act*, R.S.O. 1990, c. L.15. As such, nothing in these reasons should be taken as commenting on that interpretation.
[5] We reject this argument. The trial judge's conclusion was premised in large measure on a finding that, until 2008, the parties operated under the mutually mistaken belief that the respondent (or his mother) owned the house lands. Nonetheless, to rebut the claim that the respondent’s (or his mother’s) occupation of the house lands was adverse, the appellants point to permission given to the respondent, his mother or her spouse, to do various things on their (the appellants’) land – for example, to cut grass, walk on it, and park vehicles on it. However, nothing the appellants have identified gives rise to an inference that the appellants gave permission to the respondent or his mother to occupy the house lands as if they owned it and for the purpose of building a permanent residence on it – which the respondent’s mother did in 1993. In the result, we see no error in the trial judge’s finding that possession of the house lands was adverse.
[6] We will address the appellants’ second and third arguments together as they are related. The appellants’ second argument is that the trial judge erred in law at para. 76 of her reasons by holding it was unnecessary that the respondent establish effective exclusion of the appellants from possession of the house lands to establish a possessory title. Their third argument is that the trial judge erred in failing to find they were not effectively excluded from the house lands.
[7] We do not accept these arguments.
[8] At paras. 72 and 73 of her reasons, the trial judge correctly noted the three elements of the test for adverse possession (actual possession, intention to exclude the true owner from possession and effective exclusion of the true owner from possession) and correctly observed that it is unnecessary to prove the second element where, as here, the parties have a mutual misunderstanding of true ownership.
[9] Paragraphs 74 to 77 of the trial judge’s reasons, which include the impugned finding, read as follows:
All of the evidence in this case supports that the Majewskys openly used, maintained, possessed, and occupied the house and yard area including their out-buildings. The Majewskys serviced the trailers and sheds with electricity. They installed a septic tank and septic bed in the yard. They installed a yard overhead light. They acted as the owner and possessors of the land as they thought they were.
The plaintiff and the defendants were under the mistaken belief that it was the Majewskys’ land until 2008.
Mr. Veveris argued that it was not completely fenced in on all sides to establish it was exclusive to the Majewskys. I am not satisfied this was necessary or required when all parties operated under the mistaken belief that it was Majewsky property.
I am satisfied that [Mr. Majewsky] has made out a claim for possessory title of the house and yard including the out-buildings. [Emphasis added.]
[10] Read fairly, the trial judge was not saying in para. 76 of her reasons that it was unnecessary to prove effective exclusion of the true owner of land to establish a possessory title. Rather, she was saying that, in the context of this case, involving mutual mistake as to the ownership of the house lands – on which the Majewskys had established a permanent home – it was unnecessary that those lands be completely fenced to establish effective exclusion of the true owners. The Majewskys had achieved that result through the nature of their occupation and by virtue of the mutual mistake of the parties.
[11] In our view, this finding was well supported by the evidence. The respondent, and before him, his mother, used the house lands as their permanent residence from 1993 onward. As a result of mutual mistake, all parties believed the Majewskys owned the house lands. In our view, it was open to the trial judge to conclude that an inference of effective exclusion arose from the nature of the Majewskys’ use which included the erection of a permanent home and related outbuildings, installation of a septic tank system and customary usage of the adjacent yard; and that, further, that inference was not displaced by the fact of occasional neighbourly visits by the appellants.
[12] The appellants’ fourth and fifth arguments are also related. The appellants’ fourth argument is that the trial judge erred in using the period 1997 to 2007 as sufficient for establishing adverse possession instead of focusing on the ten-year period immediately preceding registration of the lands in Land Titles (May 25, 1999 to May 25, 2009). The appellants say this is significant based on their fifth argument: the trial judge misconstrued or ignored the evidence that, in 2008, which
was within the ten-year statutory period they say applies, the appellants objected to the respondent's encroachments on the appellants' lands.
[13] We do not accept these submissions. The appellants’ arguments on this issue as set out in their factum misinterpret the provisions of the *Real Property Limitations Act*, R.S.O. 1990, c. L.15, under which limitation periods are established for an owner to move to recover possession of land. Contrary to the appellants' submissions, *Sipsas v. 1299781 Ontario Inc.*, 2017 ONCA 265, 85 R.P.R. (5th) 24, at paras. 10 and 18, stands for the proposition that adverse possession can be established with respect to lands registered under *the Land Titles Act*, R.S.O. 1990, c. L.5, by possession meeting the necessary requirements during *any* continuous ten-year period prior to registration in Land Titles.
[14] In oral argument, the appellants asserted that the trial judge erred by misconstruing the starting date of the adverse possession claim. This was because the requirements for an adverse possession claim were never met due to the various errors they had identified. However, for the reasons we have already explained, we do not accept the appellants’ other arguments concerning the adverse possession claim.
[15] Based on the foregoing reasons, we reject the appellants’ grounds of appeal concerning the house lands.
The Cedar Trail
[16] As we have said, the appellants raise five issues in relation to the finding of a prescriptive easement over the cedar trail. In our view, two of those issues are dispositive and it is unnecessary that we address the appellants’ other issues concerning the cedar trail.
[17] As their ninth argument, the appellants contend that the trial judge erred in finding that the “harvesting and transporting of wood” use on the cedar trail was continuous. As their tenth argument, they submit the trial judge erred in finding that the cedar trail conferred a benefit on the dominant tenement. In our view, the trial judge’s reasons concerning accommodation of the dominant tenement and continuous use cannot be sustained.
[18] At para. 49 of her reasons, when addressing the laneway, the trial judge set out the law relating to prescriptive rights of way and correctly recognized, quoting from *Barbour v. Bailey*, 2016 ONCA 98, 66 R.P.R. (5th) 173, leave to appeal refused, [2016] S.C.C.A. No. 139, that one of the essential characteristics of a prescriptive easement is that it must accommodate – that is, be reasonably necessary to the better enjoyment of the dominant tenement.
[19] Further, at paras. 52 and 53 of her reasons, the trial judge correctly relied on *Kaminskas v. Storm*, 2009 ONCA 318, 310 D.L.R. (4th) 549 for the proposition that to acquire a prescriptive easement whether under the doctrine of lost modern
grant or by prescription under the *Real Property Limitations Act*, the claimant must demonstrate use that is continuous, uninterrupted, open, and peaceful for a period of 20 years.
[20] At para. 86 of her reasons, the trial judge turned to the issue of the cedar trail. After reviewing the evidence, she returned to the law and, at para. 108, again quoted from *Barbour v. Bailey*, in which the court said, in relation to the fourth criterion for a prescriptive easement, “what is ‘reasonably necessary’ [for the better enjoyment of the dominant tenement will] depend on the nature of the property and the purpose of the easement.” Further, “[t]here must be a connection between the easement and the normal enjoyment of the dominant tenement, as opposed to a personal right belonging to the dominant tenement owner”: *Barbour*, at para. 58 (citation omitted). Parking spaces or driveways are examples of uses that courts have found fulfill the criteria. The *Barbour* court added at para. 59 that “[t]his is reinforced by the fact that in order to be capable of forming the subject matter of a grant (the third criterion listed above) easement rights *must not be ones of mere recreation and amusement*; the rights in issue must be of utility and benefit to the dominant tenement” (citation omitted, emphasis added).
[21] However, having set out the law correctly, the trial judge erred by failing to properly consider whether the continuous use she found met the criterion of accommodating the dominant tenement.
[22] At para. 109 of the reasons, the trial judge stated:
The use to which the Cedar Trail was put was for both recreation in a country property and wood-gathering. It was not just for amusement but also for utility and benefit to the Majewskys.
[23] At para. 111, the trial judge said she was satisfied that the respondent, through his mother and her partner, had “made out factually an entitlement to a prescriptive easement” for the cedar trail and therefore “the use may continue specifically for the purpose of harvesting and transporting wood from the woodlot” (emphasis added). This holding implies that continuous use for the purpose of harvesting wood had been established during the necessary period and that was the right that provided utility and benefit to the dominant tenement. Yet at para. 101 of the reasons, the trial judge noted that the evidence disclosed that the respondent had used the cedar trail to harvest wood on only two occasions in the past and that he planned and hoped to do so in the future. In fact, the record supports the use of the cedar trail for harvesting wood on only one prior occasion along with occasional use to bring firewood to the house. One or even two prior uses of the cedar trail for harvesting wood and occasional use for gathering firewood was not sufficient to support a conclusion of continuous use.
[24] At para. 102 of the reasons, the trial judge also found continuous use based upon recreational uses (hikes, walks and an annual Legion event). However, as set out in *Barbour v. Bailey*, such uses were personal to the Majewskys rather than of utility and benefit to the dominant tenement.
**The Laneway**
[25] The appellants' eleventh argument is that the trial judge erred in finding a prescriptive right to maintain the laneway encroachment by clearing brush, plowing snow and cutting grass a distance of three metres onto the appellants' lands as there was no evidence of such maintenance on the appellants' lands during the 20-year period. We reject this argument. The appellants do not contest the trial judge's finding of a prescriptive right for the portion of the laneway that encroaches on their land. Their sole objection is the maintenance rights the trial judge attached to the prescriptive easement. The trial judge accepted the respondent's evidence concerning him, his mother and his stepfather taking out dead trees, cutting grass and plowing snow on either side of the laneway. Although the respondent testified that the grass was trimmed three feet on either side of the laneway he also said their snowblower "shoots the snow a good two to three metres." Based on the evidence she accepted, it was open to the trial judge to make the order permitting maintenance of the laneway.
**Disposition**
[26] Based on the foregoing reasons, the appeal is allowed in part by setting aside the trial judge's finding of a prescriptive easement over the cedar trail and
dismissing the respondent’s claim in that respect. The appeal is otherwise dismissed.
[27] Costs of the appeal are to the respondent on a partial indemnity scale fixed in the amount of $10,000 inclusive of disbursements and applicable taxes. If so advised, the appellants may make written submissions, not to exceed three pages, concerning the costs award below within 10 days of the release of these reasons; the respondent may respond within 7 days thereafter.
Analysis of raw material inventory for insecticide packaging bottle with material requirement planning: a case study
Arinda Soraya Putri*, Bagus Imron Rosydi
Department of Industrial Engineering, Universitas Muhammadiyah Surakarta, Jl. A. Yani Tromol Pos 1, Sukoharjo 57162, Indonesia
ARTICLE INFORMATION
Article history:
Received: November 27, 2020
Revised: December 10, 2020
Accepted: December 12, 2020
Keywords:
Bottle packaging
Inventory
Material requirement planning
Pesticide
Scheduling
*Corresponding Author
Arinda Soraya Putri
E-mail: firstname.lastname@example.org
ABSTRACT
PT. Agricon mostly produces pesticides in a 1L package (bottle), marketed as Spontan insecticide. This study was conducted to analyze the stock inventory of 1L bottles and to determine an effective ordering schedule based on the calculation of safety stock and reorder points, in order to optimize planning and inventory. The Material Requirement Planning (MRP) method was used to determine effective scheduling for the 1L bottle. MRP is a scheduling method whose advantages include reducing inventory, reducing set-up costs, and reducing idle time. The proposed scheme for ordering the 1L insecticide packaging bottle reduces the company's ordering frequency from 7 times to 5 times, and inventory becomes more stable and closer to the demand quantity.
1. INTRODUCTION
Raw material planning is closely tied to scheduling; the main purpose of scheduling is to manage inventory in the best way and within the optimal timeframe [1], [2]. PT. Agricon mainly produces a 1L pesticide product and has not implemented safety stock for its inventory. When demand decreases, warehouse stock increases because supplies do not match demand, which is a loss for the company due to inventory cost and tied-up money. Lack of coordination in the planning division leads to problems such as excess inventory, poor service, and suboptimal capacity utilization [3], [4]. To prevent these problems, proper planning and material control are needed using a Material Requirement Planning (MRP) system [5], [6]. MRP is a manufacturing management system that assists manufacturers in production planning, scheduling, and inventory control [7], [8]. The advantages of the MRP method are that it cuts down and optimizes inventory costs within a production period [9], streamlines the production process through purchase planning [10], supports work scheduling [11], and saves time [12].
Material Requirement Planning is a method for carrying out production planning to determine the order date and the quantity of materials to be ordered for each product component [13], [14]. An MRP system also provides accurate information about inventory and production [15], [16]. Material Requirements Planning aims to schedule raw materials, components, and sub-assemblies in the correct quantity so that they are ready at the right time [17], [18]. MRP is therefore needed to keep the 1L insecticide packaging bottle inventory at its reorder point and safety stock levels.
The company produces pesticides, fungicides, insecticides, and herbicides. The demand level is uncertain and varies from day to day, so the company must plan its production process sequences precisely to meet consumer needs. The company has experienced problems such as high shipping and ordering costs, and raw material inventory far below inventory capacity.
The company must maintain the availability of raw materials such as the 1L insecticide packaging bottle to expedite the production process. Scheduling of the 1L insecticide packaging bottle, the company's superior product, needs to be done to determine the inventory amount in a certain period, so that inventory can meet demand, the safe amount of inventory to meet production needs is known, and ordering costs are minimized by maximizing ordering capacity.
2. RESEARCH METHODS
The first step was observation and interviews with operators and staff in order to understand the production process at PT. Agricon. Data collection then continued with schedule receipt data, lead time, lot size, demand, truck capacity, and inventory data [19], [20]. The collected data were processed using MRP. MRP is a method used for planning and controlling production and a tool for managing inventory depending on higher item levels [21], [22].
Using MRP, it is possible to determine the raw materials needed to complete a product for future demand. The company can optimize the required inventory so that it is neither too much nor too little [23]. MRP planning considers time priority to calculate material requirements and schedules supply to meet a product's demand [24].
MRP is also a production planning and computer-based inventory control system related to production scheduling and inventory control [25]. Because MRP is tied to production and inventory scheduling, raw material planning must be precise to avoid excess or shortage of raw material inventory [26]. Table 1 below shows the record format used in the MRP method.
Table 1. The MRP record format

| Item | Period |
|-----------------------------|--------|
| **Lead time** | 1 |
| Gross requirement | |
| Schedule receipt | |
| Project on hand | |
| Net requirement | |
| Planned order receipt | |
| Planned order release | |
MRP formulation and calculation [17]:
\[
\begin{aligned}
POH_t &= \max\{0,\ POH_{t-1} + SR_t - GR_t\} \\
NR_t &= \max\{0,\ GR_t - POH_{t-1} - SR_t\} \\
PoRec_t &= NR_t \\
PoRel_t &= PoRec_t
\end{aligned}
\]
The explanation of the notations in the formula: \(GR_t\) is the Gross Requirement in period \(t\), \(SR_t\) is the Schedule Receipt in period \(t\), \(POH_t\) is the Project On Hand in period \(t\), \(NR_t\) is the Net Requirement in period \(t\), \(PoRec_t\) is the Planned Order Receipt in period \(t\), and \(PoRel_t\) is the Planned Order Release in period \(t\).
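As a minimal sketch (the function name and data layout are hypothetical, not from the paper), the period-by-period recursion above can be written in Python. Planned order release is kept equal to planned order receipt, exactly as in the formulas, with any lead-time offset omitted:

```python
def mrp_record(gross_req, sched_receipt, opening_on_hand):
    """Compute MRP record rows from per-period gross requirements and
    schedule receipts; index 0 corresponds to period 1."""
    poh_prev = opening_on_hand
    rows = []
    for gr, sr in zip(gross_req, sched_receipt):
        poh = max(0, poh_prev + sr - gr)   # Project On Hand, POH_t
        nr = max(0, gr - poh_prev - sr)    # Net Requirement, NR_t
        po_rec = nr                        # Planned Order Receipt = NR_t
        po_rel = po_rec                    # Planned Order Release = PoRec_t
        rows.append({"GR": gr, "SR": sr, "POH": poh,
                     "NR": nr, "PoRec": po_rec, "PoRel": po_rel})
        poh_prev = poh
    return rows

# Figures from the paper's worked example: GR = 8,008, SR = 5,040,
# prior POH = 69,687.
row = mrp_record([8008], [5040], 69687)[0]
print(row["POH"], row["NR"])  # 66719 0
```

Because net requirement is zero whenever the carried-over project on hand covers the gross requirement, no planned order is generated in that period.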
3. RESULTS AND DISCUSSION
PT. Agricon plans total product requirements per month of production. The 1L insecticide packaging bottle demand in July is 72,171 units. The maximum storage capacity for the 1L insecticide packaging bottle is 100,000 units. The total quantity available for the 1L insecticide packaging bottle on 1st July is 71,700 units. The schedule receipt contains information about the quantity and in-house arrival date of raw materials. Table 2 below shows the schedule receipts for several package sizes.
Table 2. The arrival dates of insecticide packaging bottles

| Receipt date | 200 ml (units) | Receipt date | 500 ml (units) | Receipt date | 1000 ml (units) |
|--------------|----------------|--------------|----------------|--------------|-----------------|
| 02/07 | 9,078 | 02/07 | 5,922 | 04/07 | 14,000 |
| 02/07 | 8,122 | 10/07 | 21,000 | 12/07 | 5,040 |
| 22/07 | 11,000 | 10/07 | 9,144 | 18/07 | 8,050 |
| | | 12/07 | 5,856 | 19/07 | 15,050 |
| | | 12/07 | 1,144 | 23/07 | 14,560 |
| | | 12/07 | 3,736 | 26/07 | 7,070 |
| | | | | 30/07 | 110 |
| | | | | 30/07 | 13,590 |
Safety stock calculation for the 1L insecticide packaging bottle:

Z value (95% service level): 1.65

Standard deviation of demand: 3,411 units

Lead time: 11 days

\[ \text{Safety Stock} = Z \times \text{Std Deviation} \times \sqrt{\text{Lead Time}} \]
\[ = 1.65 \times 3{,}411 \times \sqrt{11} \]
\[ = 18{,}667 \text{ units} \]
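This calculation can be reproduced with a short sketch (the function name is hypothetical; σ = 3,411 units is the value consistent with the 18,667-unit result, and the result is rounded up to whole units, as is conventional for safety stock):

```python
import math

def safety_stock(z, sigma_demand, lead_time_days):
    """Safety stock = Z * sigma * sqrt(lead time), rounded up to whole units."""
    return math.ceil(z * sigma_demand * math.sqrt(lead_time_days))

# Z = 1.65 (95% service level), sigma = 3,411 units, lead time = 11 days
print(safety_stock(1.65, 3411, 11))  # 18667
```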
Based on this calculation, the safety stock for the 1L insecticide packaging bottle is 18,667 units. Table 3 below shows the MRP calculation for the 1L insecticide packaging bottle based on the gross requirement in July 2019, with a total quantity of 72,171 units and an opening stock (on hand) of 71,700 units.
Example, for period 12:
\[ GR_{12} = 8{,}008 \]
\[ SR_{12} = 5{,}040 \]
\[ POH_{11} = 69{,}687 \]
\[ POH_{12} = \max\{0,\ POH_{11} + SR_{12} - GR_{12}\} = \max\{0,\ 69{,}687 + 5{,}040 - 8{,}008\} = 66{,}719 \]
\[ NR_{12} = \max\{0,\ GR_{12} - POH_{11} - SR_{12}\} = 0 \quad (\text{because } POH_{11} \text{ covers } GR_{12}) \]
\[ PoRec_{12} = NR_{12} = 0 \]
\[ PoRel_{12} = PoRec_{12} = 0 \]
The proposed MRP calculation for the 1L insecticide packaging bottle considers various factors, such as total demand, storage capacity, safety stock, truck capacity, and reorder point. The schedule receipt calculation is therefore:

Total gross requirement in 1 month = 72,171 units

Storage capacity = 100,000 units

Safety stock = 18,667 units

Truck capacity = 15,050 units

Number of 11-day order periods per month = 3

Reorder point, period 1 (days 1–11) = 34,680 units

Reorder point, period 2 (days 12–23) = 66,824 units

Reorder point, period 3 (days 24–30) = 26,668 units

Example calculation of the reorder point for period 1:
\[ \text{Reorder Point} = \text{Safety Stock} + \text{Demand in period } t \]
\[ = 18{,}667 + 8{,}008 + 8{,}005 = 34{,}680 \text{ units} \]
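The reorder-point rule above can be sketched as follows (the function name is hypothetical; the demand terms are the gross requirements falling inside the order interval):

```python
def reorder_point(safety_stock_units, interval_demands):
    """Reorder point = safety stock + total demand over the order interval."""
    return safety_stock_units + sum(interval_demands)

# Period 1 from the paper: safety stock 18,667 plus interval demands of
# 8,008 and 8,005 units.
print(reorder_point(18667, [8008, 8005]))  # 34680
```

An order is placed whenever projected inventory drops below the reorder point for the current interval.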
Table 4 below shows the proposed MRP calculation for the 1L insecticide packaging bottle.
**Table 3. MRP calculation for the 1L insecticide packaging bottle – initial condition**

| Item: 1L bottle (Lead time: 11) | 1 | 2 | 3 | 4 | … | 27 | 28 | 29 | 30 |
|---|---|---|---|---|---|---|---|---|---|
| Gross requirement | | | | | … | | | | |
| Schedule receipt | | | | 14,000 | … | | | | 13,700 |
| Project on hand | 71,700 | 71,700 | 71,700 | 85,700 | … | 63,299 | 63,299 | 63,299 | 76,999 |
| Net requirement | 0 | 0 | 0 | 0 | … | 0 | 0 | 0 | 0 |
| Planned order receipt | | | | | … | | | | |
| Planned order release | | | | | … | | | | |
**Table 4. MRP calculation for the 1L insecticide packaging bottle – proposed**

| Item: 1L bottle (Lead time: 11) | 1 | 2 | 3 | 4 | … | 27 | 28 | 29 | 30 |
|---|---|---|---|---|---|---|---|---|---|
| Gross requirement | | | | | … | | | | |
| Schedule receipt | | | | | … | | | | |
| Project on hand | 71,700 | 71,700 | 71,700 | 71,700 | … | 74,779 | 74,779 | 74,779 | 74,779 |
| Net requirement | 0 | 0 | 0 | 0 | … | 0 | 0 | 0 | 0 |
| Planned order receipt | | | | | … | | | | |
| Planned order release | | | | | … | | | | |
Based on the MRP calculation for the 1L insecticide packaging bottle in the initial condition, shown in Fig. 1, the net requirement in each period is 0 because the accumulated schedule receipts cover each period's gross requirement. Project on hand faces high demand in the 16th–24th periods. In the 22nd period, the project on hand is unstable and close to its lowest quantity, at 41,669 units. The vertical axis shows the units, and the horizontal axis shows the period.
The proposed order scheme considers various factors such as storage capacity, safety stock, demand, and truck capacity, and serves as a guide for when to place an order. Orders that take inventory and demand into account are therefore more stable, as shown in Fig. 2. Given the company's unstable demand, a three-period reorder point is implemented for the 1L insecticide packaging bottle within a month. The reorder point for the 1st period is 34,680 units; the company places an order when inventory falls below the reorder point. The reorder point for the 2nd period is 66,824 units, and for the 3rd period it is 26,668 units.
Under the proposed scheme the company needs to order the bottle only five times, compared with seven times under the initial condition (a saving of two orders). A further benefit is that orders and inventory are more stable and closer to demand. Fig. 3 compares the initial and proposed data.
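A reorder-point policy of the kind just described can be sketched as follows. The three reorder points are the ones quoted in the text; the demand series, order quantity, and starting inventory are hypothetical placeholders, and lead time is ignored in this sketch.

```python
# Reorder-point sketch: an order is placed whenever inventory falls below
# the reorder point of the current sub-period of the month.
REORDER_POINTS = [34680, 66824, 26668]   # 1st, 2nd, 3rd period (from the text)

def simulate(inventory, demands, order_qty):
    orders = 0
    for day, demand in enumerate(demands):
        # Map the day onto one of the three sub-periods of the month.
        rop = REORDER_POINTS[(day * len(REORDER_POINTS)) // len(demands)]
        inventory -= demand
        if inventory < rop:              # below the reorder point: order
            inventory += order_qty       # lead time ignored in this sketch
            orders += 1
    return inventory, orders

# Hypothetical run: 9 days of flat demand against the three reorder points.
inv_left, n_orders = simulate(71700, [5000] * 9, order_qty=14000)
```

Counting `n_orders` over a planning horizon is how one would compare ordering frequency between the initial and proposed schemes, as the paper does (7 versus 5 orders).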
Research using the MRP method has been applied widely across industries, for example apparel [6], paper [11], cement [12], automobile [13], garment [15], construction [16], motorcycle chains [17], talc powder [18], screen printing ink [19], briquette [20], sewing thread [21], pharmaceuticals [22], alloy cast [23], and coconut sugar [24]. The majority of existing research discusses the raw materials for making finished products; few studies discuss packaging. Packaging supplies are important because they complement the products being marketed, and their quantity must match the quantity of products. Excess packaging inventory increases storage costs, while a shortage of packaging inventory hinders the packaging process. In addition, a proper ordering frequency can reduce the costs incurred by the company.
4. CONCLUSION
The proposed order scheme for the 1L insecticide packaging bottle reduces the company's ordering frequency from 7 times to 5 times, and inventory becomes more stable and closer to the demand quantity. In future research, MRP calculations can be carried out by comparing various methods and conducting a sensitivity analysis of the proposed results. PT. Agricon has not implemented safety stock in its ordering system for the raw material of 1L pesticide bottles, which leads to inventory accumulation in the warehouse (inventory waste). Projected on hand in the 10th–24th periods for the 1L insecticide packaging bottle is unstable because inventory does not precisely track demand. Implementing safety stock and reorder points will improve inventory control and make it more robust.
REFERENCES
[1] R. Guillaume, C. Thierry, and P. Zieliński, “Robust material requirement planning with cumulative demand under uncertainty,” *Int. J. Prod. Res.*, vol. 55, no. 22, pp. 6824–6845, Nov. 2017, doi: 10.1080/00207543.2017.1353157.
[2] A. Afriansyah and A. S. Mohruni, “Production Planning and Control System with Just in Time and Lean Production: A Review,” *J. Mech. Sci. Eng.*, vol. 6, no. 2, pp. 19–27, 2019. Available: https://ejournal.unsri.ac.id/index.php/jmse/article/view/10285.
[3] R. Hencha and D. S. Verma, “Study of Material Requirement Planning Processes and Its Analysis and Implementation (A Case Study of Automobile Industry),” *Int. J. Sci. Technol. Res.*, vol. 8, no. 8, pp. 648–653, 2019. Available: https://www.ijstr.org/final-print/aug2019/Study-Of-Material-Requirement-Planning-Processes-Its-Analysis-And-Implementation-a-Case-Study-Of-Automobile-Industry.pdf.
[4] D. Gradišar and M. Glavan, “Material Requirements Planning Using Variable-Sized Bin-Packing Problem Formulation with Due Date and Grouping Constraints,” *Processes*, vol. 8, no. 10, pp. 1–6, Oct. 2020, doi: 10.3390/pr8101246.
[5] M. Díaz-Madronero, J. Mula, and M. Jiménez, “Material Requirement Planning under Fuzzy Lead Times,” *IFAC-PapersOnLine*, vol. 48, no. 3, pp. 242–247, 2015, doi: 10.1016/j.ifacol.2015.06.088.
[6] A. Iasya and Y. Handayati, “Material requirement planning analysis in micro, small and medium enterprise case study: grooveline-an apparel outsourcing company final project,” *J. Bus. Manag.*, vol. 4, pp. 317–329, 2015. Available: https://jjournal.sbm.itb.ac.id/index.php/jbm/article/view/1628.
[7] R. J. Najy, “MRP (Material Requirement Planning) Applications in Industry-A REVIEW,” *IJRDO - J. Bus. Manag.*, vol. 6, no. 1, pp. 1–13, 2020. Available: http://www.ijrdo.org/index.php/bm/article/view/3442.
[8] C. Furqon, M. A. Sultan, and R. J. Pramudita, “Analysis of Material Requirement Planning (MRP) Implementation on The Company,” in *Proceedings of the 2nd International Conference on Economic Education and Entrepreneurship*, 2017, pp. 140–145, doi: 10.5220/0006882001400145.
[9] D. Więcek, D. Więcek, and L. Dulina, “Materials Requirement Planning with the Use of Activity Based Costing,” *Manag. Syst. Prod. Eng.*, vol. 28, no. 1, pp. 3–8, Mar. 2020, doi: 10.2478/mspe-2020-0001.
[10] B. A. Mtengwa and J. A. Malloo, “Stakeholder’s Perception on Quality of Mergers and Acquisitions in Tanzania,” *Int. J. Acad. Res. Bus. Soc. Sci.*, vol. 8, no. 10, pp. 1216–1227, Nov. 2018, doi: 10.6007/IJARBS/8.v8-i10/4832.
[11] E. Rimawan, D. S. Saroso, and P. E. Rohmah, “Analysis of Inventory Control with Material Requirement Planning (MRP) Method on IT180-55gsm F4 Paper Product at PT. IKPP, TBK,” *Int. J. Innov. Sci. Res. Technol.*, vol. 3, no. 2, pp. 569–581, 2018. Available: https://ijiisrt.com/wp-content/uploads/2018/02/Analysis-of-Inventory-Control-with-Material-Requirement-Planning-MRP-Method.pdf.
[12] W. A. Y. Bunga and D. I. Rinawati, “Perencanaan Persediaan Bahan Baku Semen Dengan Menggunakan Metode Material Requirement Planning (Mrp) Pada PT Indocement Tunggal Prakarsa Tbk. Plant Cirebon,” *Ind. Eng. Online J.*, vol. 7, no. 4, pp. 1–8, 2019. Available: https://ejournal3.undip.ac.id/index.php/ieoj/article/view/22991.
[13] D. Gharakhani, “Optimization of material requirement planning by goal programming model,” *Asian J. Manag. Res.*, vol. 2, no. 1, pp. 297–317, 2011.
[14] T. T. Amachree, E. O. P. Apkan, E. C. Ubani, and K. A. Okorocha, “Validation Of Developed Materials Requirement Planning (MRP) Integrated Flow System Model Of Ims For Piemf,” *Int. J. Sci. Technol. Res.*, vol. 6, no. 8, pp. 355–361, 2017. Available: https://www.ijstr.org/final-print/aug2017/Validation-Of-Developed-Materials-Requirement-Planning-mrp-Integrated-Flow-System-Model-Of-Ims-For-Piemf.pdf.
[15] N. Hasanati, E. Permatasari, N. Nurhasanah, and S. Hidayat, “Implementation of Material Requirement Planning (MRP) on Raw Material Order Planning System for Garment Industry,” *IOP Conf. Ser. Mater. Sci. Eng.*, vol. 528, no. 1, pp. 1–8, Jun. 2019, doi: 10.1088/1757-899X/528/1/012064.
[16] A. Imetieg and M. Lutovac, “Project scheduling method with time using MRP system: A case study: Construction project in Libya,” *Eur. J. Appl. Econ.*, vol. 12, no. 1, pp. 58–66, 2015, doi: 10.5937/ejae12-7815.
[17] S. Kurniawan and S. S. Raphaeli, “Optimizing Production Process through Production Planning and Inventory Management in Motorcycle Chains Manufacturer,” *ComTech Comput. Math. Eng. Appl.*, vol. 9, no. 2, pp. 43–53, Dec. 2018, doi: 10.21512/comtech.v9i2.4723.
[18] F. Lefta, L. Gozali, and I. A. Marie, “Aggregate and Disaggregate Production Planning, Material Requirement, and Capacity Requirement in PT. XYZ,” *IOP Conf. Ser. Mater. Sci. Eng.*, vol. 852, no. 1, pp. 1–6, Jul. 2020, doi: 10.1088/1757-899X/852/1/012123.
[19] P. Theresia and L. L. Salomon, “Usulan Penerapan Material Requirement Planning (MRP) Untuk Pengendalian Persediaan Bahan Baku Produk ANT INK (Studi Kasus: CV. Sinar Mutiara),” *J. Kaji. Teknol.*, vol. 11, no. 1, pp. 43–54, 2015. Available: http://journal.untar.ac.id/index.php/teknologi/article/view/618.
[20] Y. E. Torunoglu, H. K. Akin, and N. Guler, “Material Requirement Planning in a Briquette Factory,” *Int. Adv. Res. Eng. J.*, vol. 1, no. 1, pp. 21–25, 2017. Available: https://dergipark.org.tr/en/pub/iarej/issue/33993/378300.
[21] D. Abrianto and D. Riandadari, “Perencanaan Persediaan Bahan Baku Produksi dengan Metode Material Requirement Planning (MRP) Pada PT. Sejati Jaya,” *J. Pendidik. Tek. Mesin*, vol. 06, no. 01, pp. 77–83, 2016. Available: https://jurnalmahasiswa.unesa.ac.id/index.php/jurnal-pendidikan-teknik-mesin/article/view/21045.
[22] A. Chandradevi and N. B. Puspitasari, “Penerapan Material Requirement Planning (MRP) dengan Mempertimbangkan Lot Sizing dalam Pengendalian Bahan Baku pada PT. Phapros, Tbk.,” *PERFORMA Media Ilm. Tek. Ind.*, vol. 15, no. 1, pp. 77–86, 2016, doi: 10.20961/performa.15.1.13760.
[23] T. Y. T. Kusuma, “Analisis Material Requirement Planning (MRP) di C-Maxi Alloycast,” *Integr. Lab J.*, vol. 5, no. 2, pp. 81–94, 2017. Available: http://ejournal.uinsuka.ac.id/pusat/integratedlab/article/view/1556.
[24] K. A. Martha and P. Y. Setiawan, “Analisis Material Requirement Planning Produk Coconut Sugar Pada Kul-Kul Farm,” *E-Jurnal Manaj. Univ. Udayana*, vol. 7, no. 12, p. 6532, 2018, doi: 10.24843/ejmunud.2018.v07.i12.p06.
[25] A. P. Velasco Acosta, C. Mascle, and P. Baptiste, “Applicability of Demand-Driven MRP in a complex manufacturing environment,” *Int. J. Prod. Res.*, vol. 58, no. 14, pp. 4233–4245, 2020, doi: 10.1080/00207543.2019.1650978.
[26] J. I. Romero-Gelvez, E. A. Delgado-Sierra, J. A. Herrera-Cuartas, and O. Garcia-Bedoya, “Demand Forecasting and Material Requirement Planning Optimization Using Open Source Tools,” *CEUR Workshop Proc.*, vol. 2486, no. November, pp. 94–107, 2019. Available: http://ceur-ws.org/Vol-2486/icaiw_wdea_5.pdf
Immobile complex verbs in Germanic*
STEN VIKNER
Department of English, University of Aarhus, DK-8000 Aarhus C, Denmark;
E-mail: firstname.lastname@example.org
Key words: back-formation, complex verbs, OV-languages, particle verbs, separable particles, V°-to-I° movement, verb second
Abstract. Certain complex verbs in Dutch, German, and Swiss German do not undergo verb movement. The suggestion made in this article is that these “immobile” verbs have to fulfill both the requirements imposed on complex verbs of the V° type (= verbs with non-separable prefixes) and the requirements imposed on complex verbs of the V* type (= verbs with separable prefixes). As a result, such verbs are morphologically unexceptional, i.e., they have a full set of forms, but syntactically peculiar (“immobile”), i.e., they can only occur in their base position: any movement is incompatible with either the V° requirements or the V* requirements.
Haider (1993, p. 62) and Koopman (1995), who also discuss such immobile verbs, only account for verbs with two prefix-like parts (e.g., German *uraufführen* ‘to perform (a play) for the first time’ or Dutch *herinvoeren* ‘to reintroduce’), not for the more frequent type with only one prefix-like part (e.g., German *bauchreden*/Dutch *buikspreken* ‘to ventriloquize’).
This analysis will try to account not only for the data discussed in Haider (1993) and Koopman (1995) but also for the following:
– why immobile verbs include verbs with only one prefix-like part (and why this single prefix-like part may NOT be a particle),
– why immobile verbs even include verbs with two prefix-like parts where each of the two is a separable particle (as in, e.g., German *voranmelden* ‘preregister’),
– why there is such a great amount of speaker variation as to which verbs are immobile,
– why such verbs are not found in Germanic VO-languages such as English and Scandinavian.
* I am very grateful to Norbert Corver, Jürg Fleischer, Christian Fortmann, Jóhannes Gisli Jónsson, Gereon Müller, Hubert Haider, Hans Kamp, Vieri Samek-Lodvici, Manuela Schönenberger, Neil Smith, Arnim von Stechow, Wolfgang Sternefeld, Carl Vikner, Ralf Vogel, Ursula Wegmüller, and two anonymous reviewers for comments, criticism, and support.
Thanks also to participants in classes at the University of Stuttgart and to audiences at talks given at 17th Workshop on Comparative Germanic Syntax (Reykjavik 2002), at the 16th “Focus on Grammar” Workshop (Lund 2002), and at the Universities of Manchester, Tübingen, Stuttgart, and University College London.
This article is based on work in my habilitation dissertation (Vikner 2001), which was supported by the Deutsche Forschungsgemeinschaft as part of the Project *The Optimality-theoretic Syntax of German from a Comparative Germanic Perspective* (Grant MU 1444/2-1, Principal Investigators: Gereon Müller, Sten Vikner).
1. German *uraufführen*
There are three types of complex verbs in Dutch, German, and Swiss German: separable, inseparable, and immobile, of which the first two types are much less controversial than the third type.
Separable complex verbs leave their prefix (or particle) behind when they undergo verb movement, e.g., V2 (verb second). Inseparable complex verbs take their prefix (or particle) along when they undergo verb movement. For examples of these, please refer to Section 4 below.
For the third type, the immobile complex verbs, it cannot be determined whether the prefix (or particle) is separable or not because the verb does not occur in contexts that would allow this to be determined, i.e., the verb does not leave its base position.
The German verb *uraufführen* ‘to put on (a play) for the very first time’ contains two prefixes, *ur-*‘for the first time’ and *auf-*‘on’:
(1) a. Sollten sie das Stück uraufführen?
should they the play original-on-put.INF
b. Haben sie das Stück uraufgeführt?
have they the play original-on-put.PPLE
These two (main) clauses contain non-finite versions of *uraufführen*. The following (embedded) clause on the other hand contains a finite form (simple past, 3rd person plural) of *uraufführen*:
(2) … ob sie das Stück uraufführten
… if they the play original-on-put.3.PL.PAST
It would thus seem that this is a more or less ordinary verb. That this is not so becomes clear when native speakers are asked to construct main clauses containing a finite form of *uraufführen*. In German main clauses, the finite verb undergoes V2, but *uraufführen* refuses to undergo V2:
(3) a. *Uraufführten sie das Stück _______ ? (German)
b. *Aufführten sie das Stück ur _______ ?
c. *Ur-führten sie das Stück ___ auf ___ ?
d. *Führten sie das Stück urauf ___ ?
(original)(on)put.3.PL.PAST they the play (original)(on)
Following Höhle (1991), Haider (1993, p. 62) suggests the following analysis (an English summary of the argument may be found in Haider 2001, pp. 70–73): *ur*- is a non-separable particle, and *auf*- is a separable one. Given that the separable particle, *auf*- is closer to the stem *-führen* than the inseparable particle *ur*- is, one of three possible situations must obtain, each of which leads to ungrammaticality: either *auf*- is carried along under V2, violating its requirements ((3a and b)); or *ur*- is left behind, violating its requirements ((3b and d)); or what is moved is not a constituent ((3c)).
Consequently, the only well-formed sentences with this verb are ones where the verb is not moved at all: (1a and b) and (2).
The fact that *uraufführen* may take on a finite form if and only if it occurs sentence-finally ((2)) leads to the conclusion that the clause-final position of finite verbs in embedded clauses is a non-moved position (i.e., that it is of the same kind as the position that the underlined non-finite verb has in (1a and b)) since it violates neither the requirements of the non-separable *ur-* nor the requirements of the separable *auf-*.
According to Höhle (1991) and Haider (1993, p. 62), this means that German finite verbs in clause-final position in embedded clauses have not undergone any movement. This in turn means that German does not have V°-to-I° movement, assuming that a characteristic of V°-to-I° movement (as opposed to other kinds of movement, including V2) is that all finite verbs obligatorily undergo this movement (which is generally assumed to be the case, e.g., in the Romance languages and in those Germanic VO-languages that have V°-to-I° movement, cf., e.g., Vikner 1997, 2001, Rohrbacher 1999, and references therein). Whether German does or does not have V°-to-I° movement has been an on-going debate for at least 15 years. The point is that, as opposed to V°-to-I° movement in a VO-language, V°-to-I° movement in an OV-language is often assumed to be string vacuous.
While I agree with the conclusions in the preceding paragraph (that the data concerning immobile verbs show that German does not have V°-to-I° movement), I do not find the Höhle/Haider analysis itself satisfactory as it does not apply to a large number of immobile verbs (to be discussed in more detail in Sections 3 and 5 below). The verbs not accounted for are immobile verbs that do not have two “conflicting” prefixes/particles, either because there is only one prefix-like part, e.g., *schutzimpfen* ‘inoculate’, or because the two prefixes/particles do not impose conflicting requirements, e.g., *voranmelden* ‘preregister’ (in which both prefixes/particles are separable) or *strafversetzen* ‘transfer for disciplinary reasons’ (in which both prefixes/particles are non-separable).
2. Dutch *herinvoeren*
Koopman’s (1995, p. 139, (2b)) examples from Dutch are all parallel to *uraufführen*, i.e., they contain two prefixes/particles with conflicting requirements:
(4) herindelen ‘to re-in-split’, i.e., ‘to redivide’
herindijken ‘to re-in-dike’, i.e., ‘to put within dikes again’
herinvoeren ‘to re-in-lead’, i.e., ‘to reintroduce’
heruitgeven ‘to re-out-put’, i.e., ‘to republish’
heruitzenden ‘to re-out-send’, i.e., ‘to rebroadcast’
(5) a. … omdat ze vorig jaar deze wet hebben heringevoerd (Dutch)
… because they last year this law have re-intro-duced. PPLE
b. … omdat ze vorig jaar deze wet herinvoeren
… because they last year this law re-introduced.3.PL.PAST
(Koopman 1995, p. 141, (6d) and (5d))
(6) a. *Deze wet herinvoeren
b. *Deze wet invoeren
c. *Deze wet her-voeren
d. *Deze wet voeren
This law (re)(intro)duced.3.PL.PAST they last year (re)(intro)
((6a) from Koopman 1995, p. 141, (4d))
Koopman (1995, p. 143) adapts and elaborates Haider’s (1993, p. 62) analysis of (1)–(3). This analysis rests on an insoluble conflict between the two particles/prefixes: *ur-* cannot be left behind if the verb moves, and *auf-* must be left behind if the verb moves, and so the only way to avoid conflict is to avoid verb movement. Koopman (1995, pp. 156–159) accounts for the data by assuming that *ur-* in (1)–(3) and *her-* in (4)–(6) (i.e., the leftmost or outermost of the two particles/prefixes) blocks overt checking of finiteness features. This in turn means that only LF-checking is an option, which again means that there can be no overt movement to a checking head (i.e., no V°-to-I° movement) in examples like (2) and (5b).
It seems to me that the conclusions concerning verb movement drawn by Haider (1993, p. 62) and Koopman (1995) must be on the right track even if I disagree with the analyses themselves.
To see that the data concerning immobile verbs are incompatible with the assumption of obligatory V°-to-I° movement, let us consider what a potential analysis of immobile verbs would look like under the assumption that V°-to-I° movement obligatorily applies in German (and Dutch).
The only possible analysis of immobile verbs which is compatible with V°-to-I° movement having applied in (2) and (5b) (i.e., an analysis which is compatible with German and Dutch having V°-to-I° movement) would be that the ungrammaticality of (3) and (6) results from the blocking of an obligatory checking procedure (e.g., Rizzi’s 1996, p. 64, wh-criterion) that takes place at some point after V°-to-I° movement has taken place (i.e., in C°). This would seem to be the only way to explain why *uraufführt*en and *herinvoerden* cannot undergo V2 ((3) and (6)) when they may undergo V°-to-I° movement, as shown by (2) and (5b), still under the assumption that German and Dutch have (obligatory) V°-to-I° movement.
The problem with such an account is that the particles/prefixes that would have to block checking in C° could not possibly block checking in I° (or Agr° or T° or whatever the landing site of V°-to-I° movement is), which makes working out exactly what it is that is checked in C° and in I° very difficult. If I° can check a finite verb across such particles/prefixes, why should C° not be able to do the same thing? Even though there may be a difference in that C° may have to check the verb (stem) itself, and I° may only have to check the finite verb ending, it does not seem very likely that C° should not be able to check the verb across the finite verbal ending, given that in other cases C° has no trouble doing such checking across the finite verbal ending: all well-formed verbs in C° (i.e., the finite verb in all well-formed main clauses) not only may but actually must have a finite ending.
As I stated above, both Haider’s and Koopman’s analyses only work for verbs with two prefixes: Haider’s (1993, p. 62) analysis needs the prefixes to conflict, and Koopman’s (1995, p. 159) analysis needs a second prefix to violate the strict c-command requirement. In Section 5 below, I will suggest a different analysis, also based on conflicting requirements but not requiring such verbs to have two prefix-like parts. Before that, in Section 3 below, I will discuss immobile verbs that only have one prefix-like part.
3. Other immobile verbs
Haider (1993, p. 62) lists some additional German verbs (originally from Höhle 1991) which behave exactly like *uraufführen* in (1)–(3); cf. (8) below. All these verbs have only one prefix-like part, and it is thus not clear what predictions Haider’s (1993, p. 62) account or Koopman’s (1995, pp. 156–159) account would make for them:
(7) *bauchreden* ‘to stomach-speak’, i.e., ‘to ventriloquize’
*bausparen* ‘to building-save’, i.e., ‘to save with a building society’
*rückfragen* ‘to back-question’, i.e., ‘to query’
*wettrudern* ‘to contest-row’, i.e., ‘to row in a competition’
(8) a. Sie will bausparen
she wants (to) building-save
She wants to save with a building society.
b. … weil er bauspart
… because he building-saves
… because he saves with a building society
((8a and b) adapted from Eisenberg 1998, pp. 226, 324, (16a))
c. *Spart er bau?
d. *Bauspart er?
building-)saves he (building
Intended: Does he save with a building society?
Eisenberg (1998, p. 324, (14)) adds the following verbs:
(9) *bauchlanden* ‘to stomach-land’, i.e., ‘to land on one’s stomach’
*bergsteigen* ‘to mountain-rise’, i.e., ‘to climb mountains’
*bruchlanden* ‘to break-land’, i.e., ‘to make a crash landing’
*ehebrechen* ‘to marriage-break’, i.e., ‘to commit adultery’
*kopfrechnen* ‘to head-reckon’, i.e., ‘to do mental arithmetic’
*kunststopfen* ‘to art-mend’, i.e., ‘to mend textiles so well that you cannot tell that they have been mended’
*manndecken* ‘to man-cover’, i.e., ‘to mark someone in soccer (man-to-man marking)’
*preiskegeln* ‘to prize-bowl’, i.e., ‘to play skittles in order to win a prize’
*punktschweißen* ‘to spot-weld’
*schutzimpfen* ‘to protection-inoculate’, i.e., ‘to inoculate’
*strafversetzen* ‘to punishment-transfer’, i.e., ‘to transfer for disciplinary reasons’
*teilzahlen* ‘to part-pay’, i.e., ‘to pay by instalments’
*wettturnen* ‘to contest-exercise’, i.e., ‘to do gymnastics in a competition’
A search through the electronic versions of two Duden dictionaries of German, the 1993 *Duden Universal Wörterbuch* and the 2000 *Duden Rechtschreibung*¹ and subsequent checks with native speakers turned up the following further examples of the same kind, i.e., of verbs which may occur in finite form clause-finally in embedded clauses but not in the first or second position in main clauses:
(10) auferstehen ‘to up-rise’, i.e., ‘to rise from the dead’
auferwecken ‘to up-wake’, i.e., ‘to raise from the dead’
erstaufführen ‘to first-on-put’, i.e., ‘to perform a play for the first time’
feuerverzinken ‘to fire-zinc’, i.e., ‘to rustproof something by immersion in liquid zinc’
gefriertrocknen ‘to freeze-dry’
gegensprechen ‘to counter-speak’, i.e., ‘to speak on a two-way intercom’
generalüberholen ‘to general-overhaul’, i.e., ‘to give something a general overhaul’
hartlöten ‘to hard-solder’, i.e., ‘to solder at more than 450 °C’
hohnlächeln ‘to scorn-smile’, i.e., ‘to smile scornfully’
hohnsprechen ‘to scorn-speak’, i.e., ‘to fly in the face of something’
prämiensparen ‘to prize-save’, i.e., ‘to save in such a way that a prize may be won’
sonnenbaden ‘to sun-bathe’
voranmelden ‘to pre-at-report’, i.e., ‘to preregister, to book, e.g., a ticket’
vorglühen ‘to pre-glow’, i.e., ‘to preheat a diesel engine’
zweckentfremden ‘to purpose-alienate’, i.e., ‘to use for a different purpose’
zwischenlanden ‘to between-land’, i.e., ‘to stop over in X on the way to Y’
A brief check of Dutch (Norbert Corver, p.c.) shows that at least the following Dutch verbs behave the same way:
(11) bergklimmen ‘to mountain-climb’, i.e., ‘to climb mountains’
bouwsparen ‘to building-save’, i.e., ‘to save with a building society’
buikspreken ‘to stomach-speak’, i.e., ‘to ventriloquize’
echtbreken ‘to marriage-break’, i.e., ‘to commit adultery’
diepvriezen ‘to deep-freeze’
hardsolderen ‘to hard-solder’, i.e., ‘to solder at more than 450 °C’
hoofdrekenen ‘to head-reckon’, i.e., ‘to do mental arithmetic’
mandekken ‘to man-cover’, i.e., ‘to mark someone in soccer (man-to-man marking)’
prijsschieten ‘to prize-shoot’, i.e., ‘to shoot a rifle for a prize’
Strictly speaking, some of these verbs have a structure similar to *uraufführen* and the other verbs discussed in Section 1 above: in *auferstehen*, *auferwecken*, *feuerverzinken*, *generalüberholen*, *strafversetzen*, *zweckentfremden*, and also in *erstaufführen* and *voranmelden*, there is not one but two prefix-like parts. However, only *erstaufführen* and *voranmelden* are really parallel to *uraufführen* because these are the only two in which the second of the prefix-like parts, i.e., *auf-* and *an-*, is a separable particle (see Section 5 below).
Another question is of course whether examples of either kind exist in the other Germanic OV-languages. In (4) and (11) above, we saw that both kinds are attested in Dutch.
According to Cooper (1994, p. 47), the Zürich Swiss German versions of *uraufführen* (*uruffüere*) and the verbs in (7) may move to C°. However, according to my informants, in so far as verbs that correspond to the immobile German verbs exist at all, most of them are also immobile, e.g., in Stuttgart (Swabian) and in Bern, Zürich, and Sankt Gallen (i.e., in Swiss German, where the conflict may be avoided by insertion of *tun* ‘do’). Consider the following example from Swiss German as spoken in Bern (Ursula Wegmüller, p.c.):
(12) a. Uf em Wääg vo Züri uf New York müesse mer in Paris zwüschselande
*on the way from Zürich to New York must we in Paris between-land*
On the way from Zürich to New York, we have to have a stopover in Paris.
b. Uf em Wääg vo Züri uf New York si mer in Paris zwüschegglandet
*on the way from Zürich to New York are we in Paris between-landed*
On the way from Zürich to New York, we had a stopover in Paris.
c. … öb mer äch uf em Wääg vo Züri uf New York in Paris zwüschelande
*… if we really on the way from Zürich to New York in Paris between-land*
whether we will really have a stopover in Paris
on the way from Zürich to New York
(13) a. * Zwüschelande mer eigentlech in Paris?
b. * Lande mer eigentlech in Paris zwüsche?
(between-) land we actually in Paris (between)
Intended: Will we actually have a stopover in Paris?
Summing up this section, we have seen that the immobile verbs include not only verbs with two (conflicting) prefix-like parts but also verbs with only one prefix-like part. Furthermore we have seen that the languages are less different than might appear from the literature: the data seem to be quite parallel in at least Dutch, German, and Swiss German.
4. Complex verbs: V° or V*
The crucial property common to all the immobile verbs as discussed above is that they are complex verbs with two (or more) internal parts, the last of which is itself a verb. Before continuing the discussion of immobile verbs in Section 5 below, I would like to discuss complex verbs in German more generally.
I assume that for a complex verb, whether it consists of a noun and a verb or of a particle and a verb, there are two relevant possibilities: V° and V*. One option is that the complex verb is of exactly the same status as a simplex verb, V° (result: verbs with non-separable prefixes). The other option, using the notation of, e.g., Booij (1990), is that the complex verb constitutes a V* (result: verbs with separable prefixes, V* being interpreted as more than V° but possibly less than V'). This follows a suggestion for German made by Haiden (1997, p. 105), Wurmbrand (1998, p. 271), and many others, namely that verb and separable particle form a lexical unit but not necessarily also a syntactic X°-constituent. For more discussion of particle verbs in Danish, German, and Yiddish along such lines, see, e.g., Vikner (2001, pp. 33–49).
Examples of V° (inseparable) are [V° [Prt ver] [V° stehen]] ‘to understand’ among the Prt-V° complex verbs, and [V° [N° brand] [V° marken]] ‘to fire-mark’, i.e., ‘to brand, to denounce’, among the N°–V° complex verbs.
Examples of V* (separable) are [V*[Prt ab][V° schicken]] ‘to send off’ among the Prt-V° complex verbs, and [V*[N° statt][V° finden]] ‘to place-find’, i.e., ‘to take place’, among the N°–V° complex verbs:
(14)

| | inseparable | separable |
|-----------------|-------------|-----------|
| **Prt+V compounds** | a. [V° [Prt ver] [V° stehen]] | b. [V* [Prt ab] [V° schicken]] |
| **N+V compounds** | c. [V° [N° brand] [V° marken]] | d. [V* [N° statt] [V° finden]] |
Let us first consider those complex verbs in which the first part is a particle.
Particle verbs that are V° are mobile but inseparable. (16a) is ruled out as a case of excorporation, i.e., there is a trace inside V°, which is impossible, according to Baker (1988, p. 73):
(15) Den Brief wird er nicht verstehen (German)
the letter will he not understand
(16) a. *Den Brief steht er nicht [V° ver [V° t]] (German)
b. Den Brief versteht er nicht [V° t]
the letter (under)stands he not (under)
Particle verbs that are V* are mobile but separable (actually, it is the V° contained within the V* that is mobile). In (18b), C° contains a V*, not a V°, but as a V* is larger than an X°, V* cannot occur in C°:
(17) Den Brief wird er nicht abschicken (German)
the letter will he not offsend
(18) a. Den Brief schickt er nicht [V* ab [V° t]] (German)
b. *Den Brief ab schickt er nicht [V* t]
the letter (off)sends he not (off)
Particle verbs of the V° and V* types also differ when it comes to the placement of the past participle prefix ge- and of the infinitival marker zu. If the whole particle verb is a V°, it does not allow the past participle prefix ge- at all (19c), and all of it is preceded by the infinitival marker zu (21b) whereas if the whole particle verb is a V*, only its second half (which is a V°) is preceded by ge- or by zu (20a)/(22a):
(19) a. *Er hat den Brief nicht vergestanden (German)
b. *Er hat den Brief nicht geverstanden
c. Er hat den Brief nicht verstanden
he has the letter not understood
(20) a. Er hat den Brief nicht abge[V° schickt] (German)
b. *Er hat den Brief nicht ge[V* abschickt]
he has the letter not off-sent
(21) a. *Er hat versucht, den Brief ver-zu-[V° stehen] (German)
b. Er hat versucht, den Brief zu [V° verstehen]
he has tried the letter (under) to (under)stand
He has tried to understand the letter.
(22) a. Er hat versucht, den Brief ab-zu-[V° schicken]
(German)
b. *Er hat versucht, den Brief zu [V* abschicken]
he has tried the letter (off) to (off)send
He has tried to send off the letter.
For more detail on the placement of the ge- prefix, see, e.g., Geilfuß (1998) and Rathert (2002).
Let us now turn to those complex verbs in which the first part is nominal.
The following verbs, taken from the lists in Eisenberg (1998, p. 323, (10) and p. 324, (15)) and Wellmann (1998, p. 449) are further examples of the two types:²
(23) V°, like brandmarken : (mobile but inseparable)
gewährleisten ‘to guarantee-achieve’, i.e., ‘to guarantee, to ensure’
handhaben ‘to hand-have’, i.e., ‘to handle, to implement’
lobpreisen ‘to praise.N-praise.V’, i.e., ‘to praise’
lustwandeln ‘to joy-stroll’, i.e., ‘to stroll’
maßregeln ‘to measure-rule’, i.e., ‘to reprimand’
nachtwandeln ‘to night-stroll’, i.e., ‘to sleepwalk’
sandstrahlen ‘to sand-radiate’, i.e., ‘to sandblast’
schlussfolgern ‘to conclusion-conclude’, i.e., ‘to conclude’
wetteifern ‘to contest-strive’, i.e., ‘to compete’
wetterleuchten ‘to weather-light’, i.e., ‘for lightning to flash in the distance’
(24) V*, like stattfinden: (mobile but separable)
Acht geben ‘to attention give’, i.e., ‘to pay attention’
Amok laufen ‘to amok run’, i.e., ‘to run amok’
Eis laufen ‘to ice run’, i.e., ‘to ice-skate’
Halt machen ‘to stop make’, i.e., ‘to stop’
Hof halten ‘to court hold’, i.e., ‘to hold court’
Kopf stehen ‘to head stand’, i.e., ‘to stand on one’s head’
Maß halten ‘to measure hold’, i.e., ‘to exercise moderation’
preisgeben ‘to prize-give’, i.e., ‘to relinquish, to surrender something’
Probe singen ‘to sample sing’, i.e., ‘to show how well one sings’
Schlange stehen ‘to queue stand’, i.e., ‘to queue, to stand in line’
standhalten ‘to stand-hold’, i.e., ‘to stand firm’
teilnehmen ‘to part-take’, i.e., ‘to take part’
Wort halten ‘to word hold’, i.e., ‘to keep one’s word’
The two types behave differently both syntactically and morphologically. If the whole complex verb is a V°, all of it may undergo verb movement (25), cf. the inseparable (16) above. If it is a V*, only its second half (which is a V°) may undergo verb movement³ (26), cf. the separable (18) above.
(25) a. *Er markte die Missstände \([V^\circ \text{ brand} [V^\circ \text{ t}]]\)
(German)
b. Er brandmarkte die Missstände \([V^\circ \text{ t}]\)
he (fire)marked the irregularities (fire)
He denounced the irregularities.
(adapted from Eisenberg 1998, p. 322)
(26) a. 1999 fand die Tagung in Berlin \([V^* \text{ statt} [V^\circ \text{ t}]]\)
(German)
b. *1999 stattfand die Tagung in Berlin \([V^* \text{ t}]\)
1999 (place)found the conference in Berlin (place)
In 1999, the conference took place in Berlin
However, if the verb occurs clause-finally in an embedded clause, there are no observable differences:
(27) … ob er die Missstände brandmarkte
(German)
… if he the irregularities fire-marked
… whether he denounced the irregularities
(28) … ob die Tagung in Berlin stattfand
(German)
… if the conference in Berlin place-found
… whether the conference took place in Berlin
If the whole complex verb is a \(V^\circ\), all of it is preceded by the past participle prefix \(ge\)- (29b), and all of it is preceded by the infinitival marker \(zu\) (31b) whereas if the whole complex verb is a \(V^*\), only its second half (which is a \(V^\circ\)) is preceded by \(ge\)- or by \(zu\) (30a)/(32a):
(29) a. *Er hat die Missstände brandge\[V^\circ \text{ markt}\]
(German)
b. Er hat die Missstände ge\[V^\circ \text{ brandmarkt}\]
he has the irregularities fire-marked
He has denounced the irregularities
(30) a. 1999 hat die Tagung in Berlin stattge\[V^\circ\text{funden}\]
(German)
b. *1999 hat die Tagung in Berlin ge\[V^*\text{stattfunden}\]
1999 has the conference in Berlin place-found
In 1999, the conference took place in Berlin
(31) a. *Er hat versucht, die Missstände brand-zu-\[V^\circ\text{marken}\]
(German)
b. Er hat versucht, die Missstände zu \[V^\circ\text{brandmarken}\]
he has tried the irregularities (fire-) to (fire-) mark
He has tried to denounce the irregularities.
(32) a. 2001 scheint die Tagung in Berlin statt-zu-\[V^\circ\text{finden}\]
(German)
b. *2001 scheint die Tagung in Berlin zu \[V^*\text{stattfinden}\]
2001 seems the conference in Berlin (place) to (place) find
In 2001, the conference seems to be taking place in Berlin
Also here, the parallels between complex particle verbs and complex N° + V° verbs are striking, with the exception that the past participle of inseparable particle verbs contains no ge- prefix at all (19c) whereas the past participle of inseparable N° + V° verbs has a ge- prefix in front of the whole complex verb (29b).
Neither of the two classes of complex verbs discussed in this section, V° and V*, is immobile, in that V2 is possible in both cases; see, e.g., (25b) and (26a).
Summing up, there are nine logical possibilities (the underlining shows which parts may undergo V2):
| (33) | Inseparable verbs (V2 includes the prefix) | Separable verbs (V2 excludes the prefix) | Immobile verbs (V2 impossible) |
|------|------------------------------------------|----------------------------------------|-------------------------------|
| N° + V° | a. brandmarken | b. statt-finden | c. schutz-impfen |
| Prt + V° | d. ver-stehen | e. ab-schicken | f. — |
| Prt + [Prt + V°] | g. be-[ein-flussen], ver-[aus-gaben] | h. an-[er-kennen], [vor-an]-gehen | i. ur-[auf-führen], vor-[an-melden] |
The non-existence of the type (33f) is discussed at the end of Section 5.3 below. The differences between types (33g) and (33i) are discussed at the end of Section 5.5 below, where it is argued that the verbs under (33g) here should actually be under (33d).
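The distribution of the two verb classes across the contexts discussed so far can be summed up in a small model. The following Python sketch is my own encoding (the class and context labels are not the paper's); only the pattern of grammaticality judgments follows the examples (19)–(32) above.

```python
# Illustrative sketch: which surface contexts license German complex verbs
# of the inseparable (V°) and separable (V*) types, per examples (19)-(32).
# The labels are my own; only the True/False pattern follows the text.

V0, VSTAR = "V°", "V*"  # inseparable vs. separable complex verbs

def licensed(verb_class, context):
    """Return True if the verb class allows the given surface context."""
    table = {
        # V2 with the whole complex verb fronted, e.g. 'Er brandmarkte ...'
        ("whole-verb V2", V0): True,   ("whole-verb V2", VSTAR): False,
        # V2 with only the verbal half fronted, e.g. '... fand ... statt'
        ("split V2", V0): False,       ("split V2", VSTAR): True,
        # clause-final position in an embedded clause: fine for both classes
        ("clause-final", V0): True,    ("clause-final", VSTAR): True,
        # ge-/zu- in front of the whole verb, e.g. 'gebrandmarkt'
        ("prefixed ge/zu", V0): True,  ("prefixed ge/zu", VSTAR): False,
        # ge-/zu- in front of the second half only, e.g. 'stattgefunden';
        # rare but attested for V° verbs too, e.g. 'sandgestrahlt'
        ("infixed ge/zu", V0): True,   ("infixed ge/zu", VSTAR): True,
    }
    return table[(context, verb_class)]

# The two classes differ exactly in V2 behavior and whole-verb prefixation:
assert licensed(V0, "whole-verb V2") and not licensed(VSTAR, "whole-verb V2")
assert licensed(VSTAR, "split V2") and not licensed(V0, "split V2")
```

The table mirrors rows (37)–(40) of the overview in Section 5.3 below, with clause-final position and "infixation" of ge-/zu as the only contexts shared by both classes.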
5. Immobile verbs respect the requirements for both $V^\circ$ and $V^*$
I would now like to return to the immobile verbs discussed previously, i.e., the ones which did not allow verb movement. Take as an example the complex verb *schutzimpfen* ‘to inoculate’, which behaves syntactically exactly like *uraufführen* in (1)–(3) and *bausparen* in (8) above. It is derived from the compound noun *Schutzimpfung* (*Schutz* ‘protection’, *Impfung* ‘inoculation’) by means of back-formation, which undoes the nominalization of the second part of the compound by removing the nominalizing suffix *-ung*. The result is a so-called pseudo-compound (e.g., Wellmann 1998, p. 449) as *schutzimpfen* is not derived by composition although it appears to be a compound, i.e., *schutz-impfen*. In other words, it is derived exactly like the English verbs *to back-stab* and *to case-mark*:
(34) Back-formation removes the nominalizing suffix:
a. English: [N case-mark-ing] → [V case-mark]
b. German: [N Schutz-impf-ung] → [V schutz-impfen]
It is clear that the second half of *schutzimpfen*, i.e., *impfen*, is a verb, cf. the infinitival morphology, but the categorial status of the whole complex verb has not been resolved, i.e., it has not been resolved whether it is a $V^\circ$ or a $V^*$. I would like to make the rather controversial suggestion that *schutzimpfen* and the other immobile complex verbs above have to fulfill BOTH the requirements imposed on complex verbs of the $V^\circ$ type AND the requirements imposed on complex verbs of the $V^*$ type. I would furthermore like to tentatively suggest that maybe the reason that immobile verbs have to fulfill the requirements imposed on $V^\circ$ is that they are seen as verbal elements that can receive a suffix (because they are derived by removing a nominalizing affix), and maybe the reason that immobile verbs have to fulfill the requirements imposed on $V^*$ is that they clearly consist of two parts ($N^\circ + V^\circ$), each of which is interpretable on its own. Whatever the exact reasons may be, I submit
that the result is that immobile verbs are not specified as being only V* or as being only V°, and therefore they may only occur in contexts which are compatible with both analyses.
5.1. Syntactic consequences
Syntactically, this means that V2 contexts are impossible. It is not the case that the last half of the complex verb can undergo V2 in both the V* and the V° analysis (as this is only possible in the V* analysis), nor is it the case that the whole complex verb can undergo V2 in both of the two analyses (as this is only possible in the V° analysis):
In the V° case, the whole complex verb can undergo V2 (25b) as it is a V°, but the last half of the complex verb cannot (25a) as this would cause the existence of a trace inside a V° (which is impossible, according to Baker 1988, p. 73).
In the V* case, the whole complex verb cannot undergo verb movement (26b) as it is not a V° but a V*, but the last half of the complex verb can (26a) as it is a V° and as such a movement would not cause a V° -internal trace (but only a trace internal to V*).
However, as both V° and V* complex verbs may occur clause-finally in an embedded clause, so may immobile verbs like *schutzimpfen*.
5.2. Morphological consequences
Morphologically, the requirement that the complex verb may only occur in contexts which are compatible with both analyses means that ge-prefixation of the whole complex verb is impossible as this is incompatible with the V* analysis: in *gestattfunden (30b), ge- can only be prefixed on a V°, not on a V*. Exactly the same goes for the infinitival marker zu. It cannot occur in front of the whole complex verb as this is incompatible with the V* analysis: in *zu stattfinden (32b), zu only occurs in front of a V°, not in front of a V*.
However, the second half of the complex verb is itself a V° under both analyses. I would like to suggest that it may be prefixed by ge- or by zu, not only when the V° half is part of a V* (cf. stattgefunden and stattzufinden in (30a) and (32a)) but also when this V° is part of a larger V°. The latter cannot be directly observed, cf. the ungrammaticality of *brandgemarkt (29a) and *brandzumarken (31a), but this ungrammaticality might only be caused by a preference for prefixation to apply to domains that are as large as possible. So *brandgemarkt (29a) and *brandzumarken (31a) are dispreferred only because the options gebrandmarkt (29b) and zu brandmarken (31b) are available.
That “infixation” of ge- and zu is an option with all types of complex verbs, even with the complex verbs of the V° type, is supported by the following facts.
According to the German orthographical dictionary, Duden Rechtschreibung (Scholze-Stubenrecht, ed., 2000), two of the V° verbs in (23) may have either prefixation or infixation of ge-: gelobpreist and lobgepriesen are both possible past participles of lobpreisen ‘praise’; gesandstrahlt and sandgestrahlt are both possible participles of sandstrahlen ‘sandblast’.
A search of the corpus of written German available at the Institut für deutsche Sprache in Mannheim turned up the following infixed forms among the complex verbs of the V° type in (23) above:
(35) a. 7 cases of handzuhaben vs. 528 of zu handhaben
b. 1 case of lobzupreisen vs. 13 of zu lobpreisen
c. 2 cases of lustzuwandeln vs. 17 of zu lustwandeln
d. 1 case of maßzuregeln vs. 40 of zu maßregeln/zu massregeln
e. 21 cases of sandgestrahlt vs. 3 of gesandstrahlt
f. 2 cases of wettzueifern vs. 37 of zu wetteifern
(All the other forms of the verbs mentioned here and the other verbs in (23) above were found only with prefixed ge- and zu.)
In contrast to this somewhat mixed picture, all the verbs in the V* group (24) show only one type of form, namely with infixation of ge- and zu.
As for the group of immobile verbs, I would also expect them to have only infixed ge- and zu, but I have to admit that the corpus search turned up two “prefixed” verb forms that go against this:
(36) a. 20 cases of aufzuerstehen vs. 1 of zu auferstehen
b. 6 cases of zweckzuentfremden vs. 1 of zu zweckentfremden
I conclude that although prefixation of ge- and zu (zu handhaben) is much more frequent than infixation (handzuhaben) with the
complex verbs of the $V^\circ$ (inseparable) type, infixation remains an option. This means that infixation is an option both for complex verbs of the $V^*$ (separable) type and of the $V^\circ$ (inseparable) type, and this again explains why infixation is also possible with immobile verbs, even given the assumption from above that immobile verbs have to respect the requirements for both $V^\circ$ (inseparable) and $V^*$ (separable) verbs.
5.3. Overview
The various options for complex verbs of the $V^\circ$ type (left column) and for complex verbs of the $V^*$ type (right column) can be schematized as follows:
| | $[V^\circ\ N^\circ\ V^\circ]$, e.g. brandmarken | $[V^*\ N^\circ\ V^\circ]$, e.g. stattfinden | Examples |
|---|---------------------------------------------|-------------------------------------|----------|
| (37) | a. *$[_{C^\circ} V^\circ] \ldots [V^\circ\ N^\circ\ t]$ | a'. $[_{C^\circ} V^\circ] \ldots [V^*\ N^\circ\ t]$ | (25a), (26a) |
| | b. $[_{C^\circ} V^\circ] \ldots t$ | b'. *$[_{C^\circ} V^*] \ldots t$ | (25b), (26b) |
| (38) | a. $C^\circ [_{IP} \ldots [V^\circ\ N^\circ\ V^\circ]]$ | a'. $C^\circ [_{IP} \ldots [V^*\ N^\circ\ V^\circ]]$ | (27), (28) |
| (39) | a. $[V^\circ\ N^\circ\ ge\text{-}V^\circ]$ | a'. $[V^*\ N^\circ\ ge\text{-}V^\circ]$ | (35e), (30a) |
| | b. $ge\text{-}[V^\circ\ N^\circ\ V^\circ]$ | b'. *$ge\text{-}[V^*\ N^\circ\ V^\circ]$ | (29b), (30b) |
| (40) | a. $[V^\circ\ N^\circ\ zu\ V^\circ]$ | a'. $[V^*\ N^\circ\ zu\ V^\circ]$ | (35a), (32a) |
| | b. $zu\ [V^\circ\ N^\circ\ V^\circ]$ | b'. *$zu\ [V^*\ N^\circ\ V^\circ]$ | (31b), (32b) |
Under the assumption that the immobile verbs like schutzimpfen have to fulfill both the requirements imposed on complex verbs of the $V^\circ$ type (inseparable) and the requirements imposed on complex verbs of the $V^*$ type (separable), we expect to find them only in structures which are possible in both columns. These cases are (38a/a'), clause-final finite verbs in embedded clauses, and (39a/a') and (40a/a'), “infixation” of $ge$- and of $zu$ (even though (39a) and (40a) would seem to be very infrequent, as discussed in Section 5.2 above).
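This "double requirement" can be sketched as an intersection of licensed contexts. The following Python fragment is my own encoding of the judgment pattern summarized in (37)–(40), not a formalism from the paper; the context labels are illustrative.

```python
# Illustrative sketch: an immobile verb like 'schutzimpfen' is treated as
# unspecified between V° and V*, so it is licensed only in the contexts
# that BOTH analyses allow (the labels are my own, per (37)-(40)).

V0_CONTEXTS = {"whole-verb V2", "clause-final",
               "prefixed ge/zu", "infixed ge/zu"}   # inseparable verbs
VSTAR_CONTEXTS = {"split V2", "clause-final",
                  "infixed ge/zu"}                  # separable verbs

# Contexts available to a verb that must satisfy both sets of requirements:
IMMOBILE_CONTEXTS = V0_CONTEXTS & VSTAR_CONTEXTS

print(sorted(IMMOBILE_CONTEXTS))
# Every V2 option and whole-verb prefixation is filtered out; only the
# clause-final position and infixed ge-/zu- survive, matching (38)-(40).
```

The set intersection leaves exactly clause-final placement and "infixation" of ge-/zu, which is the attested distribution of the immobile verbs.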
One way of describing this situation is that the immobile verbs are in the intersection of the two sets of verbs, one which comprises complex verbs of the $V^\circ$ type and another which comprises complex verbs of the $V^*$ type:
(41) [set diagram: the $V^\circ$ verbs and the $V^*$ verbs form two overlapping sets, with the immobile verbs in their intersection]
One fact which is striking is that no particle verb (to be exact: no particle verb with only one particle, i.e., type (33f) above) belongs to the intersection of the two sets, i.e., there are no immobile (single) particle verbs. I think that this is due to the fact that the verbs which are immobile are not semantically transparent, i.e., we need real world knowledge to interpret what *bausparen* ‘building-save’ means, and thus semantics can offer no help in determining whether *bausparen* should belong to the V° or V* class. Particle verbs never find themselves in this situation: if they are semantically opaque, then they are also lexicalized and as such established as belonging either to the V° or the V* group (e.g., *umbringen*, *aufhören*, *verstehen*, where it is not obvious what the contribution of *um-*/*auf-*/*ver-* to the semantics of the verb is). If they are not established or lexicalized, then they (or rather their particles) have a transparent semantics/morphology, which will put them clearly into either the V° or the V* class.
5.4. The considerable variation from speaker to speaker
This account (in particular, the fact that the semantics of the verbs in question is non-transparent) is also compatible with the fact that there is considerable variation from speaker to speaker in whether they find a given example well formed or not when confronted with potentially immobile back-formation verbs. This is because it is a property of the individual complex verb in the lexicon whether it is a V° or a V* (or “both”). Which class a given complex verb belongs to depends on many
factors which vary from speaker to speaker, including how frequently it is used.
As Eisenberg (1998, p. 324) and Eschenlohr (1999, p. 156) also note, the judgments on these data are subject to great variation. For example, Eisenberg (1998, p. 324, (15)) classifies *notlanden* ‘to emergency-land’, i.e., ‘to make an emergency landing’, among the V* verbs so that *notlanden* may undergo V2 if *not* stays behind while *landen* moves to C°, like *stattfinden* in (26). Gallmann (1999, p. 298, (90)), on the other hand, classifies *notlanden* among the immobile verbs such as *uraufführen* in (1)–(3) and *bausparen* in (8).
A search of the corpus of written German available at the *Institut für deutsche Sprache* in Mannheim\(^7\) turned up the following figures showing that a few V2 cases do occur even though my informants find them unacceptable:
(42) a. Out of 153 finite cases of *auferstehen*, 15 were V2 (9.8%)
b. Out of 4 finite cases of *auferwecken*, 0 were V2 (0%)
c. Out of 2 finite cases of *bausparen*, 0 were V2 (0%)
d. Out of 1 finite case of *erstaufführen*, 0 were V2 (0%)
e. Out of 2 finite cases of *gefriertrocknen*, 1 was V2 (50.0%)
f. Out of 18 finite cases of *hohnsprechen*, 0 were V2 (0%)
g. Out of 2 finite cases of *manndecken*, 0 were V2 (0%)
h. Out of 26 finite cases of *notlanden*, 1 was V2 (3.8%)
i. Out of 1 finite case of *rückfragen*, 0 were V2 (0%)
j. Out of 4 finite cases of *sonnenbaden*, 0 were V2 (0%)
k. Out of 2 finite cases of *strafversetzen*, 0 were V2 (0%)
l. Out of 66 finite cases of *uraufführen*, 0 were V2 (0%)
m. Out of 5 finite cases of *voranmelden*, 0 were V2 (0%)
n. Out of 42 finite cases of *zweckentfremden*, 10 were V2 (23.8%)
o. Out of 19 finite cases of *zwischenlanden*, 0 were V2 (0%)
(Notice that if two of the fifteen immobile verbs that are found in finite form were excluded from the count, viz. *auferstehen* and *zweckentfremden*, the number of counterexamples would fall from a total of 27 to only 2, or from 7.8% to 1.3%. Notice further that these two verbs were the only two that were found with the infinitival marker *zu* prefixed to the entire complex verb, cf. (36) above.)
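The recount in the parenthetical note above is easy to verify. The following Python snippet is simply my own tabulation of the corpus figures in (42); the variable names are illustrative.

```python
# Re-checking the arithmetic in (42): (total finite occurrences, V2 cases)
# per verb, transcribed from the list above (umlauts ASCII-ized).
finite = {
    "auferstehen": (153, 15), "auferwecken": (4, 0), "bausparen": (2, 0),
    "erstauffuehren": (1, 0), "gefriertrocknen": (2, 1),
    "hohnsprechen": (18, 0), "manndecken": (2, 0), "notlanden": (26, 1),
    "rueckfragen": (1, 0), "sonnenbaden": (4, 0), "strafversetzen": (2, 0),
    "urauffuehren": (66, 0), "voranmelden": (5, 0),
    "zweckentfremden": (42, 10), "zwischenlanden": (19, 0),
}

total = sum(n for n, _ in finite.values())   # all finite occurrences
v2 = sum(v for _, v in finite.values())      # all V2 counterexamples

# Excluding the two verbs that also allowed prefixed zu, cf. (36):
kept = {k: v for k, v in finite.items()
        if k not in ("auferstehen", "zweckentfremden")}
total_kept = sum(n for n, _ in kept.values())
v2_kept = sum(v for _, v in kept.values())

print(total, v2, round(100 * v2 / total, 1))             # 347 27 7.8
print(total_kept, v2_kept, round(100 * v2_kept / total_kept, 1))  # 152 2 1.3
```

The totals confirm the figures in the text: 27 out of 347 finite occurrences (7.8%) are V2, falling to 2 out of 152 (1.3%) once *auferstehen* and *zweckentfremden* are excluded.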
5.5. Verbs with two prefixed particles
This analysis can also be applied to those immobile verbs which have two prefix-like parts, the second of which is a separable prefix, i.e., the Dutch verbs in (4) above and the following German verbs:
(43) uraufführen ‘to original-on-put’, i.e., ‘to put on (a play) for the very first time’
erstaufführen ‘to first-on-put’, i.e., ‘to put on (a play) for the very first time’
voranmelden ‘to pre-to-report’, i.e., ‘to preregister’
vorankündigen ‘to pre-to-announce’, i.e., ‘to announce in advance’
voreinchecken ‘to pre-in-check’, i.e., ‘to check in in advance’
The reason why these verbs are immobile has to do with the verb formed by the inner particle and the final $V^\circ$, i.e., -auf-führen, -an-melden and -ein-checken.
I have already suggested above that complex verbs containing separable prefixes are V*, and this also goes for -auf-führen, -an-melden and -ein-checken, given that auf-, an-, and ein- are separable particles. Now I would like to add the suggestion that the element to which German ur-, erst-, vor-, and Dutch her- are prefixed must be interpretable as a $V^\circ$. We therefore find ourselves in the same double requirement situation as above (presumably due to the fact that also these verbs came into existence through back-formation), where the -aufführen that occurs in uraufführen has to conform to both the requirements imposed by the $V^\circ$ analysis (e.g., auf- cannot be left behind during verb movement), and those imposed by the V* analysis (e.g., auf- cannot be taken along during verb movement), which means both that the -aufführen that occurs in uraufführen cannot occur in V2 at all but only clause-finally and also that ge- and zu can only precede -führen.
This account has a distinct advantage over the one that relies on the two prefix-like parts imposing different requirements, i.e., that in uraufführen, ur- is non-separable and auf- is separable. The point is that such an account could not be applied to, e.g., voranmelden or vorankündigen (cf. Voranmeldung ‘pre-registration’ and Vorankündigung ‘advance announcement’), both of which are immobile. The immobility of these two verbs cannot be linked to either of the two prefixes/particles
being non-separable because both *vor-* and *an-* are actually separable, and yet *voranmelden* and *vorankündigen* belong to the immobile verbs.\(^8\)
That *vor-* and *an-* are both separable can be seen from the fact that when either *vor* or *an* is the only particle, they are always separable, e.g., *annehmen* ‘assume’, *anschauen* ‘look at’, *vornehmen* ‘plan, carry out’, and *vortäuschen* ‘simulate’. That *vor-* and *an-* are both separable can also be seen from the large number of verbs where *voran-* can be left behind during V2, e.g., *voranbringen* ‘advance something’, *vorangehen* ‘go in front’, *vorankommen* ‘make headway’, and *vorantreiben* ‘push ahead’. The relevant difference between *voranmelden* and *vorankündigen*, both immobile, and *vorangehen*, *vorantreiben* etc., which are well formed in V2 clauses, is not that the prefixes/particles impose different requirements but instead that the two types have different structures: [vor[anmelden]] vs. [[voran][treiben]], a difference which is also supported by the differences in interpretation and in accentuation. Only in the former case, [vor[anmelden]], is there a V*, viz. [anmelden], that now also has to fulfill the requirements imposed on a V° because *vor-* cannot be prefixed on a V*. In the case of [[voran][treiben]], there is a complex particle *voran*, which together with the verbal part *treiben*, form a V*. But there is nothing which has to be interpreted both as a V° and as a V*, and so [[voran][treiben]] may undergo V2 like any other separable particle and verb combination.
A different but related question is what the difference is between the verbs in (43), e.g., *ur-[auf-führen]* or *vor-[an-melden]*, which belong to the type (33i) above, and the following verbs, listed at the end of Section 4 as belonging to type (33g), which are inseparable and not immobile:
(44) be-[auf-tragen] ‘to PRT-on-carry’, i.e., ‘to give someone a particular task’
be-[ein-drucken] ‘to PRT-in-press’, i.e., ‘to impress’
be-[ein-flussen] ‘to PRT-in-flow’, i.e., ‘to influence’
ver-[aus-gaben] ‘to pre-out-give’, i.e., ‘to give more than one has, wear oneself out’
Under the analysis suggested here, the verbs in (44) would be predicted to be immobile verbs, just like the verbs in (43). The prefixation by the outer particles (*be-*, *ver-*) requires the inner particle and the verb to form a V° whereas the inner particle itself is a separable one (*auf-*, *ein-*, *aus-*) requiring the inner particle and the verb to form a V*, much as *ur-* and *vor-* require the inner particle and the verb (in *ur-[auf-führen]* and
vor-[an-melden]) to form a V° whereas the inner particle itself is a separable one (auf-, an-) requiring the inner particle and the verb to form a V*.
However, there are reasons to assume that the analysis in (44) is not the correct one. As opposed to the verbs in (43), the verbs in (44) do not arise through back-formation but through conversion. be- and ver- are purely verbal prefixes, which are applied to the nouns Auftrag ‘task’, Eindruck ‘impression’, Einfluß ‘influence’, and Ausgabe ‘expenditure’ (notice also that the vowels in -drucken, -flussen, and -gaben are different from their corresponding verbs, which are drücken ‘press’, fließen ‘flow’, and geben ‘give’). This means that the structures in question are not as given in (44) but rather as follows:
(45) be-auftragen ‘to PRT-task’, i.e., ‘to give someone a particular task’
be-eindrucken ‘to PRT-impress’, i.e., ‘to impress’
be-einflussen ‘to PRT-influence’, i.e., ‘to influence’
ver-ausgaben ‘to pre-expenditure’, i.e., ‘to give more than one has, wear oneself out’
In other words, these are not examples of type (33g) but rather of type (33d), i.e., inseparable verbs with only one particle. This does not necessarily mean that no verbs of type (33g) (inseparable particle verbs with two particles) could possibly exist, only that both particles would have to be inseparable.
6. Why a VO-language like Danish has no immobile verbs
In Danish and presumably in the other Germanic VO-languages, no finite verbs exist that are possible in some positions, e.g., in embedded clauses, but not in others, e.g., in main clauses. It is important that the crucial difference here is OV vs. VO, rather than, e.g., V2 vs. non-V2, as suggested in McIntyre (2002).
As far as potential verbs derived by back-formation are concerned, there are only two groups of these. One consists of verbs that do not exist, even though related nouns that could be the source for back-formation do exist:
(46) *båndoptage ‘to tape-up-take’, which should mean
‘to record on tape’
*bjergbestige ‘to mountain-climb’, which should mean
‘to climb mountains’
*bogbinde ‘to book-bind’, which should mean
‘to bind books’
*boligopspare ‘to home-up-save’, which should mean
‘to save for buying a home’
*bugtale ‘to stomach-speak’, which should mean
‘to ventriloquize’
*hovedregne ‘to head-reckon’, which should mean
‘to do mental arithmetic’
*solbade ‘to sun-bathe’
*vandkøle ‘to water-cool’
The other group consists of back-formed verbs that do exist. I have split this group into two subgroups because I do not find the (47a) group completely well formed (although all the verbs in (47a,b) may be found in two Danish dictionaries from 1996, NuDansk Ordbog and Retskrivningsordbogen).
(47) a. ?databehandle ‘to data-treat’, i.e., ‘to computerize,
to process on a computer’
?gæsteforelæse ‘to guest-lecture’
?kæderyge ‘to chain-smoke’
?maskinskrive ‘to machine-write’, i.e., ‘to type’
?nydanne ‘to new-form’, i.e., ‘to construct, to coin’
?prisgive ‘to prize-give’, i.e., ‘to relinquish,
to surrender something’
?strejkelamme ‘to strike-paralyze’, i.e., ‘to paralyze
through a labor strike’
b. dagdrømme ‘to day-dream’
deltage ‘to part-take’, i.e., ‘to take part’
dybfryse ‘to deep-freeze’
førsteopføre ‘to first-up-put’, i.e., ‘to perform a play for the first time’
hjernevaske ‘to brain-wash’
iscenesætte ‘to in-scene-put’, i.e., ‘to direct, to engineer’
lovprise ‘to praise.N-praise.V’, i.e., ‘to praise’
mandsopdække ‘to man-cover’, i.e., ‘to mark someone in soccer’
mavelande ‘to stomach-land’, i.e., ‘to land on one’s stomach’
mellemlande ‘to between-land’, i.e., ‘to stop over in X on the way to Y’
planlægge ‘to plan-lay’, i.e., ‘to plan’
støvsuge ‘to dust-suck’, i.e., ‘to vacuum-clean’
sygemelde ‘to sick-report’, i.e., ‘to call in sick, to report someone as sick’
uropføre ‘to original-up-put’, i.e., ‘to perform a play for the first time’
It is clear, however, that in so far as the verbs in (47a and b) are well formed, they may occur in finite form in all positions in which finite verbs may occur (see also Hansen 1967, vol. 3, p. 177).
Thus the question arises why Danish (and presumably the other Germanic VO-languages) do not have any verbs like the Dutch, German, and Swiss German immobile verbs. The analysis of such verbs suggested in Section 5 above was that they have to fulfill the requirements imposed on $V^\circ$ complex verbs as well as the requirements imposed on $V^*$ complex verbs.
I would like to suggest that the reason why such verbs do not exist in Danish is that there is no way Danish verbs could possibly satisfy the two sets of requirements, due to the directionality variation. The verbal part of the complex verb is the rightmost one in the $V^\circ$ cases (47a and b), as in $[V^\circ [N^\circ \text{plan}][V^\circ \text{lægge}]]$ ‘to plan’. (48b) shows that $\text{planlægge}$ is a $V^\circ$, and (49) shows that the order is N–V:
(48) a. * Lægger de plan at holde konferencen i Reykjavík?
(Danish)
b. Planlægger de at holde konferencen i Reykjavík?
(plan)lay they (plan) to hold conference-the in Reykjavík
Are they planning to hold the conference in Reykjavík?
(49) a. *Hvorfor kunne de ikke lægge plan at holde konferencen her?
(Danish)
b. Hvorfor kunne de ikke planlægge at holde konferencen her?
why could they not plan to hold conference-the here
In the V* cases (see (53) below for more examples), the verbal part of the complex verb is the leftmost one, as in [V* [V° finde][N° sted]] ‘to take place’. (50a) shows that finde sted is a V*, and (51) shows that the order is V–N:
(50) a. Finder konferencen sted i Reykjavík? (Danish)
b. *Sted finder konferencen i Reykjavík?
(place)find conference-the(place) in Reykjavík
Is the conference taking place in Reykjavík
(51) a. Hvorfor kunne konferencen ikke finde sted her?
(Danish)
b. *Hvorfor kunne konferencen ikke sted-finde her?
why could conference-the not take place here
The difference between the Danish separable complex verb $[V^* [V^\circ \text{finde}][N^\circ \text{sted}]]$ and the German separable complex verb $[V^* [N^\circ \text{statt}][V^\circ \text{finden}]]$ is thus completely parallel to the differences between entire VPs in the two languages, namely that Danish is VO and German OV.
In other words, the intersection between the two sets, illustrated for German in (41), is necessarily empty in Danish:
(52) $V^\circ$: $[V^\circ [N^\circ \text{plan}][V^\circ \text{lægge}]]$, $[V^\circ [\text{Prt}^\circ \text{for}][V^\circ \text{stå}]]$
$V^*$: $[V^* [V^\circ \text{finde}][N^\circ \text{sted}]]$, $[V^* [V^\circ \text{smide}][\text{Prt}^\circ \text{ud}]]$
(two disjoint sets; the intersection is empty)
planlægge ‘to plan-lay’, i.e., ‘to plan’
finde sted ‘to find place’, i.e., ‘to take place’
forstå ‘to fore-stand’, i.e., ‘to understand’
smide ud ‘to throw out’
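The reason the intersection is necessarily empty in Danish can be put very simply: the two class requirements now demand opposite linear orders. The following Python fragment is my own toy encoding of this point, not the paper's formalism.

```python
# Illustrative sketch (my own encoding): in Danish, inseparable V° verbs
# have the order N/Prt-V ('planlægge', 'forstå'), while separable V*
# combinations have the order V-N/Prt ('finde sted', 'smide ud').
# A single verb has one fixed linear order, so no Danish verb can satisfy
# the V° requirements and the V* requirements at the same time.

V0_ORDER = "nonverbal-first"   # required of V° (inseparable) verbs
VSTAR_ORDER = "verb-first"     # required of V* (separable) combinations

def can_be_immobile(order):
    """An immobile verb would have to satisfy both class requirements."""
    return order == V0_ORDER and order == VSTAR_ORDER  # never both

# Neither available order qualifies, so Danish has no immobile verbs:
assert not any(can_be_immobile(o) for o in (V0_ORDER, VSTAR_ORDER))
```

In German, by contrast, both V° (brand-marken) and V* (statt-finden) put the nonverbal part first, so the same string can in principle be compatible with both analyses, which is exactly what the immobile verbs exploit.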
Here are some more examples of the V* type (more can be found in the literature on Danish under the heading “unit accentuation”, e.g., in Thomsen 1992 or Grønnum 1998, p. 206):
(53) gå amok ‘to go amok’
give agt ‘to give attention’, i.e., ‘to pay attention’
gøre holdt ‘to make stop’, i.e., ‘to stop’
holde måde ‘to hold measure’, i.e., ‘to exercise moderation’
holde ord ‘to hold word’, i.e., ‘to keep one’s word’
holde stand ‘to hold stand’, i.e., ‘to stand firm’
tage del ‘to take part’
vække opsigt ‘to awake attention’, i.e., ‘to attract attention’
The reason why a VO-language such as Danish has no immobile verbs is thus that it is simply not possible for any verbs to satisfy both the requirements on complex V° verbs (which are Prt–V) and the ones on complex V* verbs (which are V–Prt). This account therefore explains
why Danish verbs similar to *uraufführen* and the other doubly prefixed verbs are not immobile. In Danish, the following verbs are of this type:
(54) *genopblomstre* ‘to re-up-blossom’,
i.e., ‘to experience a renaissance’
*genopblusse* ‘to re-up-flare’, i.e., ‘for,
e.g., hostilities to break out again’
*genopsætte* ‘to re-up-put’, i.e., ‘to put on a play again’
These may all undergo V2 (55a) even though without the prefix *gen-* ‘re-’ they are impossible in V2 clauses unless the (separable) inner particle *op-* ‘up’ is left behind (56):
(55) a. I maj genopblussede stridighederne med fornyet styrke
(Danish)
b. *I maj opblussede stridighederne gen med fornyet styrke
c. *I maj gen-blussede stridighederne op med fornyet styrke
d. *I maj blussede stridighederne genop med fornyet styrke
*in May (re)(up)flared hostilities-the (re)(up) with renewed force*
In May, the hostilities broke out again with renewed force.
(56) a. *I maj opblussede stridighederne med fornyet styrke
(Danish)
b. I maj blussede stridighederne op med fornyet styrke
*in May (up)flared hostilities-the (up) with renewed force*
In May the hostilities broke out with renewed force.
The point here is similar to the one above, namely that the requirements for V* are violated even before the new verb with *gen-*, e.g., *genopblusse*, is formed because V* (i.e., with a separable particle) does not allow the order particle-verb but only verb-particle. Therefore *opblusse* has already been forced into being a V° only, and the fact that prefixation of *gen-* requires *opblusse* to be a V° does not change anything. The crucial question is thus whether *genopblusse* is a possible verb or not, and not whether it occurs in one position or the other.
7. Conclusion
In this paper, I have suggested a somewhat radical analysis of the immobile verbs in Dutch, German, and Swiss German. The suggestion was that for reasons of underspecification, these immobile verbs have to fulfill both the requirements imposed on complex verbs of the $V^\circ$ type (= verbs with non-separable prefixes) and the requirements imposed on complex verbs of the $V^*$ type (= verbs with separable prefixes). This results in such verbs being morphologically unexceptional (they have a full set of forms) but syntactically peculiar (immobile): they can only occur in their base position, where no movement has taken place. Any kind of movement is incompatible with either the $V^\circ$ requirements or the $V^*$ requirements.
The analysis here has tried to account for:
– why these immobile verbs include verbs with only one prefix-like part (because they are created through back-formation and are semantically opaque) and why the single prefix-like part in these verbs may NOT be a particle (particle verbs are either semantically transparent, and then the particle determines $V^\circ$ or $V^*$, or they are opaque, and then they are lexicalized as either $V^\circ$ or $V^*$),
– why these immobile verbs include all two-particle verbs with the structure [Prt [Prt V]] where the inner (i.e., rightmost) particle is a separable particle, including verbs with two separable particles as in, e.g., German voranmelden ‘preregister’ and vorankündigen ‘announce in advance’ (because the outer particle requires a $V^\circ$, but the inner separable particle produces a $V^*$),
– why there is so much individual speaker variation as to whether a given verb belongs to this group or not (because it is a property of the individual complex verb in the lexicon whether it is a $V^\circ$ or a $V^*$ or unspecified, and this depends on many factors which vary from speaker to speaker, including how frequently the verb in question is used),
– why such verbs are not found in Germanic VO-languages such as English and Scandinavian (there is no way that any verb could simultaneously satisfy the $V^\circ$ requirements, which include a prefixed particle, and the $V^*$ requirements, which include a postposed particle).
Whereas I thus disagree with Haider (1993, p. 62) and Koopman (1995) about the details of the morphosyntactic analysis of the individual verbs, I agree with these two works about the consequences for the analysis of verb movement in Dutch, German, and Swiss German. The reason why
it is only possible for finite forms of these verbs to occur in clause-final position in embedded clauses is that this position is the base-generated position, and thus no conflict can arise as to whether the prefix-like part must or must not be carried along under verb movement.
Thus the fact that several OV-Germanic verbs, not just one, behave in this way provides further support for the conclusion that the clause-final position of finite verbs in embedded clauses is the same position that non-finite verbs have in all clauses (presumably inside their own VP and definitely below I°). In other words, Dutch, German, and Swiss German do not have V°-to-I° movement.
Notes
1. These searches were made by looking for lexical entries containing the remark “only/mainly used in the infinitive or in the past participle”. For each of the verbs in (10), my informants disagreed with this and found the verbs to possess a full set of morphological forms. This conflicts not only with the Duden dictionaries but also with some authors, e.g., Eschenlohr (1999, p. 147), who says that some of these verbs have neither finite forms nor a past participle.
2. I have changed Eisenberg’s spelling to conform with the 1998 German orthographical reform. The changes introduced in the words in (24), from, e.g., *achtgeben* to *Acht geben*, have been the subject of heated debate not only in the public at large but also among linguists, cf., e.g., Bredel and Günther (2000) and Gallmann (1999, 2000).
3. Although *stattfinden* thus may split up when *finden* undergoes V2, there are more indications than the orthography that *stattfinden* and the other V* complex verbs are indeed complex verbs and not simply two constituents of the clause. One such indication is that unless *-finden* itself undergoes V2, *statt-* and *-finden* can never be split, cf. (ib) and (iiib and c), as opposed to the transitive verb *finden* ‘find’ and its object, cf. (iib) and (ivb and c):
(i) a. Die Tagung hat in Berlin stattgefunden (German)
b. *Die Tagung hat statt in Berlin gefunden
the conference has (place) in Berlin (place)found
The conference took place in Berlin.
(ii) a. Peter hat in Berlin das Buch gefunden (German)
b. Peter hat das Buch in Berlin gefunden
Peter has (the book) in Berlin (the book) found
Peter found the book in Berlin.
(iii) a. Stattgefunden hat die Tagung in Berlin (German)
b. * Gefunden hat die Tagung statt in Berlin
c. * Gefunden hat die Tagung in Berlin statt
(place)found has the conference (place) in Berlin (place)
The conference took place in Berlin.
(iv) a. Das Buch gefunden hat Peter in Berlin (German)
b. Gefunden hat Peter das Buch in Berlin
c. Gefunden hat Peter in Berlin das Buch
(the book) found has Peter (the book) in Berlin (the book)
Peter found the book in Berlin.
The topicalized participles in (iiia) and (iva–c) focus on the main verb and are best in contrastive contexts. Examples of such contexts could be for (iiia): ‘The conference took place in Berlin, but it was planned in Stuttgart’, for (iva): ‘Peter found the book in Berlin, but he wrote his paper on it in Stuttgart’, for (ivb): ‘Peter found the book in Berlin, but he read it in Stuttgart’, and finally for (ivc): ‘Peter found the book in Berlin, but he found the article in Stuttgart’.
4. The corpus searched comprised 529 million words at the time of this search in January 2001. Strings which were cited precisely for their peculiar syntax, e.g., as examples in newspaper articles on the German orthographical reform, have not been counted.
5. An anonymous reviewer points out that it is possible that the speakers who produced the infixations reported in (35) simply interpret these (normally inseparable verbs) as separable. This is of course possible, but as opposed to the account offered in the main text, this would not explain why infixation of *ge-/*zu with V* verbs occurs sporadically whereas prefixation of *ge-/*zu with V* verbs would seem not to occur at all. Nor would it explain why native speakers are slightly less unhappy with infixations such as the ones in (35) than they are with the same verbs in constructions where the first element has been stranded under V2.
6. This picture conflicts somewhat with the findings of Åsdahl-Holmberg (1976), reported in Eschenlohr (1999, p. 158), where, e.g., 31% of the informants accepted the prefixed form *zu schutzimpfen* and 5% *geschutzimpft*, 19% *zu bausparen* and 7% *gebauspart*.
7. The corpus searched comprised 533 million words at the time of this search in February 2001. Strings which were cited precisely for their peculiar syntax, e.g., as examples in newspaper articles on the German orthographical reform, have not been counted.
8. An anonymous reviewer suggests an analysis where the immobility of verbs with two separable prefixes, e.g., *voranmelden*, results from *vor-* and *an-* not both being able to be stranded at the same time. The idea would be that either *vor-* is stranded, and then the non-stranding of *an-* causes ungrammaticality, or *voran-* is stranded, in which case the non-stranding of *vor-* is the cause of the ungrammaticality. However, I think that the stranding of *voran-* must be seen as satisfying the separability requirement of *vor-*, given that when *voran-* is stranded, *vor-* is not followed by a verbal element. Also, it seems to me that if the stranding of *voran-* could be seen as not satisfying the separability requirement of *vor-*, stranding of *voran-* is predicted to be impossible also in a number of cases where it is possible, e.g., *vorantreiben* ‘push ahead’, as discussed below in the main text.
9. As in previous segmentations of this kind, the infinitival ending is ignored.
References
Åsdahl-Holmberg, Märta: 1976, *Studien zu den verbalen Pseudokomposita im Deutschen*, *Göttinger germanistische Forschungen* **14**, Lund University Press, Lund.
Baker, Mark: 1988, *Incorporation*, University of Chicago Press, Chicago.
Becker-Christensen, Christian: 1996, *Politikens store nye nudansk ordbog*, CD-rom version, Politikens Forlag, Copenhagen.
Booij, Geert: 1990, ‘The Boundary between Morphology and Syntax: Separable Complex Verbs in Dutch’, in G. Booij and J. van Marle (eds.), *Yearbook of Morphology 1990*, Foris, Dordrecht, pp. 45–63.
Bredel, Ursula and Hartmut Günther: 2000, ‘Quer über das Feld das Kopfadjunkt’, *Zeitschrift für Sprachwissenschaft* **19.1**, 103–110.
Cooper, Kathrin: 1994, *Topics in Zurich German Syntax*, Ph.D. dissertation, University of Edinburgh.
Eisenberg, Peter: 1998, *Grundriss der deutschen Grammatik I: Das Wort*, J.B. Metzler, Stuttgart.
Eschenlohr, Stefanie: 1999, *Vom Nomen zum Verb: Konversion, Präfigierung und Rückbildung im Deutschen*, Olms Verlag, Hildesheim.
Gallmann, Peter: 1999, ‘Wortbegriff und Nomen-Verb-Verbindungen’, *Zeitschrift für Sprachwissenschaft* **18.2**, 269–304.
Gallmann, Peter: 2000, ‘Kopfzerbrechen wegen Kopfadjunkten’, *Zeitschrift für Sprachwissenschaft* **19.1**, 111–113.
Geilfuß-Wolfgang, Jochen: 1998, ‘Über die optimale Position von ge-’, *Linguistische Berichte* **176**, 581–588.
Grønnum, Nina: 1998, *Fonetik og Fonologi – Almen og Dansk*, Akademisk Forlag, Copenhagen.
Haiden, Martin: 1997, ‘Verbal Inflection and the Structure of IP in German’, *Groninger Arbeiten zur Germanistischen Linguistik* **41**, 77–106.
Haider, Hubert: 1993, *Deutsche Syntax Generativ*, Gunter Narr Verlag, Tübingen.
Haider, Hubert: 2001, ‘Heads and Selection’, in N. Corver and H. van Riemsdijk (eds.), *Semilexical Categories*, Mouton de Gruyter, Berlin, pp. 67–96.
Hansen, Aage: 1967, *Moderne Dansk 1-3*, Grafisk Forlag, Copenhagen.
Höhle, Tilman: 1991, ‘Projektionsstufen bei V-Projektionen’, ms., University of Tübingen.
Koopman, Hilda: 1995, ‘On Verbs that Fail to Undergo V-Second’, *Linguistic Inquiry* **26.1**, 137–163.
McIntyre, Andrew: 2002, ‘Verb-Second and Backformations and Scalar Prefix Verbs in German: The Interaction between Morphology, Syntax and Phonology’. ms., University of Leipzig.
Rathert, Monika: 2002, ‘Morphophonology of the past participle in German: Where is the place of ge-?’, ms., University of Tübingen.
*Retskrivningsordbogen*, 1996, CD-rom version of the 2nd edition, Aschehoug, Copenhagen.
Rizzi, Luigi: 1996, ‘Residual Verb Second and the Wh-Criterion’, in A. Belletti and L. Rizzi (eds.), *Parameters and Functional Heads*, Oxford University Press, New York, pp. 63–90.
Rohrbacher, Bernhard: 1999, *Morphology-driven Syntax*, John Benjamins, Amsterdam.
Scholze-Stubenrecht, Werner (ed.): 2000, *Duden-Die deutsche Rechtschreibung*, CD-rom version of the 22nd edition, Bibliographisches Institut and F.A. Brockhaus, Mannheim.
Thomsen, Ole Nedergaard: 1992, ‘Unit Accentuation as an Expressional Device for Predicate Formation in Danish’, *Acta Linguistica Hafniensia* **23**, 145–196.
Vikner, Sten: 1997, ‘V°-to-I° Movement and Inflection for Person in All Tenses’, in L. Haegeman (ed.), *The New Comparative Syntax*, Longman, London, pp. 189–213.
Vikner, Sten: 2001, *Verb Movement Variation in Germanic and Optimality Theory*, Habilitation dissertation, University of Tübingen.
Wellmann, Hans: 1998, ‘Die Wortbildung’, in P. Eisenberg et al. (eds.), *Grammatik der deutschen Gegenwartssprache (Duden 4)*, Dudenverlag, Mannheim, pp. 408–557.
Wurmbrand, Susi: 1998, ‘Heads or Phrases? Particles in Particular’, in W. Kehrein and R. Wiese (eds.), *Phonology and Morphology of the Germanic Languages*, Max Niemeyer, Tübingen, pp. 267–295.
A Framework for Superintendent and Board Evaluation
Introduction
There has long been a need in Manitoba for guidelines to assist boards in evaluating their superintendents and their own activity. MASS and MAST have collaborated on the present Framework in the belief that these processes should occur in parallel and cannot be truly separated from an assessment of the progress of the division generally. Whatever the evaluation process used, its chief value will be in the dialogue that occurs. The suggested roles for both superintendent and boards outlined in the following pages should stimulate a rich dialogue, without which evaluation processes have little value.
It is recommended that the purposes, principles and processes for evaluating both boards and superintendents be outlined in a comprehensive local policy. The guidelines below should be helpful in developing such policy. The suggested criteria listed in the framework may also help boards each year to identify the criteria to be evaluated at that particular time. This would provide a kind of “instrument” for use in the evaluation process. The discussion required to achieve this first step contributes to the value of the entire process.
The exercising of effective leadership takes place in a culture of responsibility rather than of accountability, says Bart McGettrick, author of “Developing a Culture of Responsibility: A Position Paper” (2004). He characterizes a culture of responsibility as “one which values the constructive contribution of each member, builds teams and relationships, and supports all actions which are taken in the common good.”
McGettrick’s model includes three core elements: vision and values, policies, and professional practices of superintendents. In order to incorporate the leadership of boards as well as superintendents into this framework, we’ve added governance to policies, and board operations to the dimension of professional practices. It is through the interaction of these components - how the vision is expressed in policies and demonstrated in practice - that a culture of responsibility is developed and sustained by boards and superintendents working together.
The following purposes, principles and processes should be useful in developing a divisional evaluation policy:
**Purposes**
- Assess progress toward the stated goals of the division plan.
- Identify potential challenges and opportunities and envision future directions for the division.
- Enhance the collaborative working relationship between the Board and the Superintendent:
  - Clarify the distinction between Board and Superintendent responsibilities.
  - Measure the ability of the Board and Superintendent to work as an effective leadership team.
- Provide opportunity for Board and Superintendent self-review and assessment.
- Identify on-going professional learning needs for Board and Superintendent.
- Support the professional and personal growth of the Superintendent as the educational leader of the division and Board members as educational governors and policy makers for their communities.
- Fulfill contractual obligations of the Board and Superintendent.
**Principles**
- The process values the contribution of both Superintendent and Board of Trustees in the achievement of the division goals.
- The process reflects the collective commitment of the Superintendent and the Board of Trustees to quality education for all students.
- The process should include commitment to and the practice of honesty, fairness, trust, justice and mutual respect.
- The details of the process should be mutually agreed upon by the Board and the Superintendent.
- Evaluation should be based upon an ethical process of data collection.
- The process should be relevant to the identified job descriptions and roles of the Superintendent and the Board.
- Evaluation shall respect the confidentiality of the employer-employee relationship.
**Process**
1. Through discussion and mutual agreement between the Board and Superintendent, determine:
a. Choice of benchmarks or criteria on which to focus.
b. Who is responsible for each aspect of the process.
c. Timelines and dates for meetings.
d. Data sources (e.g., evaluation surveys, divisional data).
e. Additional information sources, e.g., divisional plan, role descriptions for Board and Superintendent, divisional policies, Superintendent’s regular reports to the Board.
2. Collect evidence and documentation relevant to the achievement of organizational goals and priorities and other mutually agreed upon criteria.
3. Review and discuss the collected data/documentation to assess divisional achievements over the past year and progress toward stated longer-term goals. (This assessment should be supplemented by periodic monitoring and review on a regular basis throughout the year.)
4. Complete self-assessments (both Board and Superintendent).
5. Share and discuss the self-assessments.
6. Identify successes, opportunities, challenges and strategic priorities.
7. Prepare final report(s).
8. Evaluate the process to identify necessary or desirable changes for the future in policy or practice.
1. Values and Vision
Within a culture of responsibility, the Superintendent and Board collaborate to lead the community in the development and articulation of shared values, common purposes and a desired future for the division.
| BENCHMARKS In the School Division: | SAMPLE FEATURES The Superintendent: | SAMPLE FEATURES The School Board: |
|-----------------------------------|------------------------------------|----------------------------------|
| A. There is a statement of the Vision and Mission that is led by values. | Articulates the value of education in a democratic society. Assisted by the Trustees, develops a collective vision for the Division based on its values. Understands and models appropriate values, ethics and moral leadership. Ensures that the values are shared with all members of the school community but that each school is able to express its distinctive values within the divisional framework. Directs strategic planning and change efforts with an emphasis on teaching and learning, reasonable risk-taking and innovation. | Articulates the value of education in a democratic society. Engages community and divisional staff in the articulation of collective values and vision for the Division. Models divisional values and utilizes values, vision and mission as filters for policy-development and decision-making at the Board level. Monitors divisional processes and outcomes to ensure congruency with values, vision and mission. Engages in strategic planning to set direction and establish goals for teaching and learning in the division. |
| B. The education system is inclusive. | Ensures that structures exist for all people to participate in developing the values and policies of the school division. Promotes appropriate involvement by students, parents and community as well as staff in school and division decision-making. Is knowledgeable about research and good practice with respect to multicultural sensitivity and the adaptation of programs to meet the needs of diverse communities. Provides leadership in social inclusion to address the diversity of student populations and communities. Serves as an articulate spokesperson for the welfare of all students in the multicultural context of education. Manages a balance between community demands and what is in the best interest of students. | Provides a policy framework and appropriate structures to ensure broad-based community participation in policy development and decision-making at the school and Division levels. Endorses and promotes socially inclusive policies and practices to address the needs of diverse student populations and communities. Represents and advocates for all students and all communities within the Division. Makes decisions which balance community demands with what is in the best interests of students. |
| C. The Division is characterized by a culture of learning, including lifelong learning. | Promotes a culture of learning among staff and students through modeling, encouragement and support. Encourages schools to take responsibility for the learning needs of the communities they serve. Empowers others to reach high levels of performance supported by professional development and study. Ensures that each school explicitly expresses its expectations for learning. Demonstrates an understanding of provincial, national and international issues affecting education, shares as needed, and encourages others to be so informed. | Sets clear expectations for student learning and staff reporting requirements on student learning outcomes. Provides a policy framework and resource allocations for professional development opportunities for all staff. Monitors and reports regularly to Government and community regarding progress toward divisional goals. Recognizes and celebrates student achievement and staff accomplishments within the Division. Demonstrates an understanding of provincial, national and international issues facing education and uses this knowledge to inform direction-setting and decision-making within the Division. |
2. Governance and Policies
Within a culture of responsibility, the Board and Superintendent provide leadership which recognizes the rights of every student to an education of the highest quality within a policy framework that is lawful, respectful of individuals and understandable to the community at large.
| BENCHMARKS Policies and governance processes in the School Division: | SAMPLE FEATURES The Superintendent: | SAMPLE FEATURES The School Board: |
|---|---|---|
| A. Are congruent with legal requirements and provincial policy directions governing public education and schools as learning and work environments. | Is knowledgeable about and ensures compliance with relevant legislation and statutes and provincial policies governing education and public schools. Provides regular review and revision of divisional policies and processes to maintain alignment with legislated obligations and mandates of school divisions. Deploys and manages the use of divisional resources - human, material and financial - in accordance with divisional directions, goals and policy requirements. Ensures clarity and transparency of divisional policies, practices and objectives to internal and external communities. | Complies with relevant legislation and statutes and provincial policies governing education and public schools. Reviews and revises divisional policies as appropriate to maintain alignment with legislated obligations and mandates. Monitors the allocation of divisional resources - human, material and financial - to ensure congruency with divisional directions, goals and policies. Communicates divisional policies, practices and objectives to internal and external communities. |
| B. Reflect the expressed values of the Division. | Monitors the development and application of policies within the Division to ensure relevance and congruency with divisional values. Responds to address identified needs for policy development. In both professional and personal conduct, communicates and models divisional values to staff, students, parents and community members. Ensures periodic review of divisional values and policies to maintain their currency as a foundation for planning and operations. | Sets clear expectations for monitoring and reporting on the implementation and application of divisional policies. Creates and approves new policies in response to identified needs within the division. Communicates and models divisional values in interactions with staff, students, parents and community. Reviews divisional values and policies periodically to maintain their currency as a foundation for planning and operations. |
| C. Articulate roles, responsibilities and delegated authorities within the Division. | Understands the legal and political nature of elected school boards and works effectively with the Board and with individual trustees. Establishes and implements policies, decision-making protocols and communications strategies to ensure role clarity. Demonstrates knowledge of and skill in the management of complex organizations and organizational change processes. | Provides a policy framework which clearly delineates Board and Superintendent roles, and delegated authorities within the school division. Respects and upholds established policies and decision-making protocols concerning role delineation within the Division. |
| D. Provide a framework for teaching and learning within the Division. | Possesses and demonstrates extensive knowledge of human learning, instructional pedagogy and provincial curricula. Engages in professional learning activities to remain current with emerging trends and developments in teaching and learning. Promotes the use of information, data and research to inform instructional policies and practices within the Division. Is aware of and responsive to the professional learning needs of all divisional staff. Implements strategies to maximize employee growth, performance, and job satisfaction throughout the division. Ensures regular communication about student learning to parents, Board of Trustees and community. | Seeks to be informed about emerging trends and research with regard to learning, instruction and provincial education policy and curricula. Requires the use of information, data and research to inform instructional practices and policies within the division. Allocates resources to meet the professional learning needs of all staff and approves strategies to enhance employee growth, performance and job satisfaction throughout the division. Communicates regularly to the community about student learning and achievement. |
3. Professional Practices and Board Operations
Within a culture of responsibility, the Board and Superintendent provide leadership to promote professional practices and Board operations that enhance communication and community relationships, and foster effective organizational management, curriculum planning and development, and teaching and learning.
| BENCHMARKS The School Division’s practices: | SAMPLE FEATURES The Superintendent: | SAMPLE FEATURES The School Board: |
|---|---|---|
| A. Enhance communication and relationships among all members of the educational community. | Demonstrates the attributes of a skilled listener and speaker. Demonstrates effective presentation, facilitative and discussion techniques. Models a positive and problem-solving approach to challenges. Models effective communication strategies and relationship skills with all members of the educational community, including government at all levels. Institutes effective structures to provide information and assistance to internal and external communities. | Provides a policy framework to support communications and partnership initiatives within the Division and the broader educational community. Actively seeks constituent and community input and seeks to consult and collaborate in planning, budgeting and policy development processes within the Division. Models a positive and problem-solving approach to challenges. |
| B. Employ organizational processes and strategies for optimum use of divisional human, capital and fiscal resources. | Delegates effectively for task accomplishment. Uses evidence to make decisions about facilities and human resources. Anticipates factors that will affect planning contexts. Establishes structures to guide the annual budget development process, and financial procedures and services. Establishes structures for appropriate allocation and development and support of divisional personnel. Establishes structures for ancillary services to support student learning. | Establishes an annual budget process which is comprehensive and inclusive in its consideration of resource issues and concerns. Uses evidence and data to make decisions about the allocation of human, capital and fiscal resources to meet divisional goals. Respects the professional expertise of staff and delegated authorities within the Division with regard to operational issues. Conducts an annual performance review of the Superintendent/CEO. |
| C. Support curriculum planning and development and instructional processes that enhance teaching and learning. | Is knowledgeable about instructional practices that enhance student learning. Provides leadership for effective development and implementation of curriculum. Oversees processes that ensure assessment and diagnosis of student needs and the provision of resources to meet those needs. Provides leadership for effective program review. Institutes effective processes to gather information about student learning to inform decision-making. Oversees comprehensive, fair and consistent student evaluation and reporting processes to inform students and their parents about student learning. Provides leadership to guide effective system-wide professional development processes. | Provides a policy framework for curriculum planning, development and implementation within the Division. Sets clear expectations for monitoring and reporting of student learning outcomes and staff performance appraisal processes. Uses outcomes data to inform decision-making about teaching and learning within the Division. |
| D. Reflect characteristics of a learning community. | Is a resilient and creative learner. Values new opportunities for his/her professional learning. Articulates a professional learning plan targeted to the realities and challenges of his/her role. Embraces evidence, research and innovation in his/her professional learning. | Values new learning and board development activities for all Board members. Articulates a policy framework and an annual plan for Board evaluations including Board development activities. Encourages the active participation of all trustees in both individual and board learning and development endeavours. |
Constrained multifidelity optimization using model calibration
Andrew March · Karen Willcox
Received: 4 April 2011 / Revised: 22 November 2011 / Accepted: 28 November 2011 / Published online: 8 January 2012
© Springer-Verlag 2012
Abstract Multifidelity optimization approaches seek to bring higher-fidelity analyses earlier into the design process by using performance estimates from lower-fidelity models to accelerate convergence towards the optimum of a high-fidelity design problem. Current multifidelity optimization methods generally fall into two broad categories: provably convergent methods that use either the high-fidelity gradient or a high-fidelity pattern-search, and heuristic model calibration approaches, such as interpolating high-fidelity data or adding a Kriging error model to a lower-fidelity function. This paper presents a multifidelity optimization method that bridges these two ideas; our method iteratively calibrates lower-fidelity information to the high-fidelity function in order to find an optimum of the high-fidelity design problem. The algorithm developed minimizes a high-fidelity objective function subject to a high-fidelity constraint and other simple constraints. The algorithm never computes the gradient of a high-fidelity function; however, it achieves first-order optimality using sensitivity information from the calibrated low-fidelity models, which are constructed to have negligible error in a neighborhood around the solution. The method is demonstrated for aerodynamic shape optimization and shows at least an 80% reduction in the number of high-fidelity analyses compared with other single-fidelity derivative-free and sequential quadratic programming methods. The method uses approximately the same number of high-fidelity analyses as a multifidelity trust-region algorithm that estimates the high-fidelity gradient using finite differences.
Keywords Multifidelity · Derivative-free · Optimization · Multidisciplinary · Aerodynamic design
Nomenclature
| Symbol | Description |
|--------|-------------|
| $A$ | Active and violated constraint Jacobian |
| $a$ | Sufficient decrease parameter |
| $B$ | A closed and bounded set in $\mathbb{R}^n$ |
| $\mathcal{B}$ | Trust region |
| $C$ | Differentiability Class |
| $c(x)$ | Inequality constraint |
| $d$ | Artificial lower bound for constraint value |
| $e(x)$ | Error model for objective |
| $\tilde{e}(x)$ | Error model for constraint |
| $f(x)$ | Objective function |
| $g(x)$ | Inequality constraint |
| $h(x)$ | Equality constraint |
| $\mathcal{L}$ | Expanded level-set in $\mathbb{R}^n$ |
| $L$ | Level-set in $\mathbb{R}^n$ |
| $\mathcal{L}$ | Lagrangian of trust-region subproblem |
| $\mathcal{M}$ | Space of all fully linear models |
| $m(x)$ | Surrogate model of the high-fidelity objective function |
| $\tilde{m}(x)$ | Surrogate model of the high-fidelity constraint |
| $n$ | Number of design variables |
| $p$ | Arbitrary step vector |
| RBF | Radial Basis Function |
A version of this paper was presented as paper AIAA-2010-9198 at the 13th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Fort Worth, TX, 13–15 September 2010.
A. March (✉) · K. Willcox
Department of Aeronautics & Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
e-mail: email@example.com
K. Willcox
e-mail: firstname.lastname@example.org
| Symbol | Description |
|--------|-------------|
| $r$ | Radial distance between two points |
| $s$ | Trust-region step |
| $t$ | Constant between 0 and 1 |
| $w$ | Constraint violation conservatism factor |
| $\mathbf{x}$ | Design vector |
| $\alpha$ | Convergence tolerance multiplier |
| $\beta$ | Convergence tolerance multiplier |
| $\gamma_0$ | Trust-region contraction ratio |
| $\gamma_1$ | Trust-region expansion ratio |
| $\varepsilon$ | Termination tolerance |
| $\varepsilon_2$ | Termination tolerance for trust region size |
| $\Delta$ | Trust region size |
| $\delta_h$ | Linearized step size for feasibility |
| $\delta_x$ | Finite difference step size |
| $\eta_0$ | Trust-region contraction criterion |
| $\eta_1, \eta_2$ | Trust-region expansion criterion |
| $\kappa$ | Bound related to function smoothness |
| $\lambda$ | Lagrange multiplier |
| $\hat{\lambda}$ | Lagrange multiplier for surrogate model |
| $\xi$ | Radial basis function correlation length |
| $\rho$ | Ratio of actual to predicted improvement |
| $\sigma$ | Penalty parameter |
| $\tau$ | Trust-region solution tolerance |
| $\Phi$ | Penalty function |
| $\hat{\Phi}$ | Surrogate penalty function |
| $\phi$ | Radial basis function |
**Superscript**
- * Optimal
- + Active or violated inequality constraint
**Subscript**
- $0$ Initial iterate
- $bhm$ Bound for Hessian of the surrogate model
- $blg$ Upper bound on the Lipschitz constant
- $c$ Relating to constraint
- $cg$ Relating to constraint gradient
- $cgm$ Maximum Lipschitz constant for a constraint gradient
- $cm$ Maximum Lipschitz constant for a constraint
- $f$ Relating to objective function
- $fg$ Lipschitz constant for objective function gradient
- $FCD$ Fraction of Cauchy Decrease
- $g$ Relating to objective function gradient
- $high$ Relating to the high-fidelity function
- $k$ Index of trust-region iteration
- $low$ Relating to a lower-fidelity function
- $\tilde{m}$ Related to constraint surrogate model
- max User-set maximum value of that parameter
---
**1 Introduction**
High-fidelity numerical simulation tools can provide valuable insight to designers of complex systems and support better decisions through optimal design. However, the design budget frequently prevents the use of formal optimization methods in conjunction with high-fidelity simulation tools, due to the high cost of simulations. An additional challenge for optimization is obtaining accurate gradient information for high-fidelity models, especially when the design process employs black-box and/or legacy tools. Instead, designers often set design aspects using crude lower-fidelity performance estimates and later attempt to optimize system subcomponents using higher-fidelity methods. This approach in many cases leads to suboptimal design performance and unnecessary expenses for system operators.
An alternative is to use the approximate performance estimates of lower-fidelity simulations in a formal multifidelity optimization framework, which aims to speed the design process towards a high-fidelity optimal design using significantly fewer costly simulations. This paper presents a multifidelity optimization method for the design of complex systems, targeting in particular systems for which accurate design sensitivity information is challenging to obtain.
Multifidelity optimization of systems with design constraints can be achieved with the approximation model management framework of Alexandrov et al. (1999, 2001). That technique optimizes a sequence of surrogate models of the objective function and constraints that are first-order consistent with their high-fidelity counterparts. The first-order consistent surrogates are constructed by correcting lower-fidelity models with either additive or multiplicative error models based on function value and gradient information of the high-fidelity models. This technique is provably convergent to a locally optimal high-fidelity design and in practice can provide greater than 50% reductions in the number of high-fidelity function calls, translating into important reductions in design turnaround times (Alexandrov et al. 1999). Estimating the needed gradient information is often a significant challenge, especially in the case of black-box simulation tools. For example, the high-fidelity numerical simulation tool may occasionally fail to provide a solution or it may contain noise in the output (e.g., due to numerical tolerances). In these cases, which arise frequently as both objectives and constraints in complex system design, if design sensitivities are unavailable then finite-difference gradient estimates will likely be inaccurate and non-robust. Therefore, there is a need for constrained multifidelity optimization methods that can guarantee convergence to a high-fidelity optimal design while reducing the number of expensive simulations, but that do not require gradient estimates of the high-fidelity models.
Constraint handling in derivative-free optimization is challenging. Common single-fidelity approaches use linear approximations (Powell 1994, 1998), an augmented Lagrangian (Kolda et al. 2003, 2006; Lewis and Torczon 2010), an exact penalty method (Liuzzi and Lucidi 2009), or constraint filtering (Audet and Dennis 2004) in combination with either a pattern-search or a simplex method. In the multifidelity setting, if the generated surrogate models are smooth, then surrogate sensitivity information can be used to speed convergence.\(^1\) This may provide an advantage over single-fidelity pattern-search and simplex methods, especially when the dimension of the design space is large. For example, constraint-handling approaches used in conjunction with Efficient Global Optimization (EGO) (Jones et al. 1998) either estimate the probability that a point is both a minimum and feasible, or add a smooth penalty to the surrogate model prior to using optimization to select new high-fidelity sample locations (Sasena et al. 2002; Jones 2001; Rajnarayan et al. 2008). These heuristic approaches may work well in practice, but unfortunately have no guarantee of convergence to a minimum of the high-fidelity design problem. Another constraint-handling method used together with the Surrogate Management Framework (SMF) (Booker et al. 1999) augments a pattern-search method with predictions of locally optimal designs from a surrogate model. The underlying pattern-search method ensures convergence of the method, so a broad range of surrogate models are allowable. One technique for generating the surrogate model is conformal space mapping, where a low-fidelity function is calibrated to the high-fidelity function at locations where the value is known (Castro et al. 2005). This calibrated surrogate enables the inclusion of accurate local penalty methods as a sensitivity-based technique for quickly estimating the location of constrained high-fidelity optima.
These existing multifidelity methods highlight the benefits of exploiting sensitivity information generated from calibrated surrogate models in a derivative-free optimization setting. Here we propose a formal framework, based on a derivative-free trust region approach, to systematically generate calibration data and to control surrogate model quality. Thus, this paper develops a constrained multifidelity optimization method that in practice can quickly find high-fidelity optimal designs and be robust to unavailable or inaccurate gradient estimates, because it generates smooth surrogate models of high-fidelity objectives and constraints without requiring high-fidelity model gradient information. More specifically, this is accomplished by creating a fully linear model (formally defined in the next section), which establishes Lipschitz-type error bounds between the high-fidelity function and the surrogate model. This ensures that the error between the high-fidelity model gradient and surrogate model gradient is locally bounded without ever calculating the high-fidelity gradient. Conn et al. (2008) showed that polynomial interpolation models can be made fully linear, provided the interpolating set satisfies certain geometric requirements, and further developed an unconstrained gradient-free optimization technique using fully linear models (Conn et al. 2009a, b). Wild et al. (2008) demonstrated that a radial basis function interpolation could satisfy the requirements for a fully linear model and be used in Conn’s derivative-free optimization framework (Wild 2009; Wild and Shoemaker 2009). March and Willcox (2010) generalized this method to the cases with arbitrary low-fidelity functions or multiple low-fidelity functions using Bayesian model calibration methods.
Section 2 of this paper presents the derivative-free method to optimize a high-fidelity objective function subject to constraints with available derivatives. Fully linear surrogate models of the objective function are minimized within a trust-region setting until no further progress is possible or when convergence to a high-fidelity optimum is achieved. Section 3 presents a technique for minimizing a high-fidelity objective function subject to both constraints with available derivatives and computationally expensive constraints with unavailable derivatives. The constraints without available derivatives are approximated with multifidelity methods, whereas the other constraints are handled either implicitly with a penalty method or explicitly. Section 4 presents an aerodynamic shape optimization problem to demonstrate the proposed multifidelity optimization techniques and compares the results with other single-fidelity methods and approximation model management using finite difference gradient estimates. Finally, Section 5 concludes the paper and discusses extensions of the method to the case when constraints are hard (when the objective function fails to exist if the constraints are violated).
### 2 Constrained optimization of a multifidelity objective function
This section considers the constrained optimization of a high-fidelity function, \( f_{\text{high}}(x) \), that accurately estimates system metrics of interest but for which accurate gradient estimates are unavailable. We first present a formalized problem statement and some qualifying assumptions. We then present a trust-region framework, the surrogate-based optimization problems performed within the trust region, and the trust region updating scheme. We follow this with an algorithmic implementation of the method, and with a brief discussion of algorithmic limitations and theoretical considerations needed for robustness.
\(^1\)Note that in the multifidelity setting, we use the term “derivative-free” to indicate an absence of derivatives of the high-fidelity model.
2.1 Problem setup and assumptions
We seek the vector \( x \in \mathbb{R}^n \) of \( n \) design variables that minimizes the value of the high-fidelity objective function subject to equality constraints, \( h(x) \), and inequality constraints \( g(x) \),
\[
\begin{align*}
\min_{x \in \mathbb{R}^n} & \quad f_{\text{high}}(x) \\
\text{s.t.} & \quad h(x) = 0 \\
& \quad g(x) \leq 0,
\end{align*}
\] (1)
where we assume gradients of \( h(x) \) and \( g(x) \) with respect to \( x \) are available or can be estimated accurately. To reduce the number of evaluations of \( f_{\text{high}}(x) \) we use a low-fidelity function, \( f_{\text{low}}(x) \), that estimates the same metric as \( f_{\text{high}}(x) \) but with cheaper evaluation cost and lower accuracy. We seek to find the solution to (1) without estimating gradients of \( f_{\text{high}}(x) \), by calibrating \( f_{\text{low}}(x) \) to \( f_{\text{high}}(x) \) and using sensitivity information from the calibrated surrogate model. The calibration strategy employed may break down should either \( f_{\text{high}}(x) \) or \( f_{\text{low}}(x) \) not be twice continuously differentiable or not have a Lipschitz continuous first derivative, although in many such cases the algorithm may still perform well.
2.2 Trust-region model management
From an initial design vector \( x_0 \), the trust-region method generates a sequence of design vectors that each reduce a merit function consisting of the high-fidelity function value and penalized constraint violation, where we denote \( x_k \) to be this design vector on the \( k \)th trust-region iteration. Following the general Bayesian calibration approach in Kennedy and O’Hagan (2000), we define \( e_k(x) \) to be a model of the error between the high- and low-fidelity functions on the \( k \)th trust-region iteration, and construct a surrogate model \( m_k(x) \) for \( f_{\text{high}}(x) \) as
\[
m_k(x) = f_{\text{low}}(x) + e_k(x).
\] (2)
We define the trust region at iteration \( k \), \( B_k \), to be the region centered at \( x_k \) with size \( \Delta_k \),
\[
B_k = \{ x : \| x - x_k \| \leq \Delta_k \},
\] (3)
where any norm on \( \mathbb{R}^n \) can be used.
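As a concrete illustration of the calibration in (2), the sketch below builds a surrogate \( m_k(x) = f_{\text{low}}(x) + e_k(x) \) by interpolating the observed errors \( f_{\text{high}} - f_{\text{low}} \) with a radial basis function model, in the spirit of the RBF constructions cited in Section 1. The functions, sample count, and sampling rule are illustrative stand-ins, not the paper's models.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical high- and low-fidelity models of the same metric; both
# are stand-ins for an expensive analysis and a cheap approximation.
def f_high(x):
    return np.sum(x**2) + 0.3 * np.sin(5.0 * x[0])

def f_low(x):
    return np.sum(x**2)              # the cheap model misses the oscillation

def build_surrogate(samples):
    """Calibration of eq. (2): m_k(x) = f_low(x) + e_k(x), where the
    error model e_k interpolates f_high - f_low at the sample points."""
    errors = np.array([f_high(s) - f_low(s) for s in samples])
    e_k = RBFInterpolator(samples, errors)       # RBF error model e_k(x)
    return lambda x: f_low(x) + e_k(x[None, :])[0]

rng = np.random.default_rng(0)
x_k, delta_k = np.zeros(2), 0.5
# sample points inside the trust region B_k = {x : ||x - x_k|| <= Delta_k}
samples = x_k + delta_k * (rng.random((8, 2)) - 0.5)
m_k = build_surrogate(samples)

# the calibrated surrogate reproduces f_high at the calibration points
assert abs(m_k(samples[0]) - f_high(samples[0])) < 1e-6
```

Because the error model interpolates, the surrogate matches the high-fidelity function exactly at every sampled point and inherits the smoothness of the low-fidelity model elsewhere.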
To solve the constrained optimization problem presented in (1) we define a merit function, \( \Phi(x_k, \sigma_k) \), where \( \sigma_k \) is a parameter that must go to infinity as the iteration number \( k \) goes to infinity and serves to increase the penalty placed on the constraint violation. To prevent divergence of this algorithm, we need the penalty function to satisfy some basic properties. First, the merit function with the initial penalty, \( \sigma_0 \), must be bounded from below within a relaxed level-set, \( \mathcal{L}(x_0, \sigma_0) \), defined as
\[
L(x_0, \sigma_0) = \{ x \in \mathbb{R}^n : \Phi(x, \sigma_0) \leq \Phi(x_0, \sigma_0) \}
\] (4)
\[
B(x_k) = \{ x \in \mathbb{R}^n : \| x - x_k \| \leq \Delta_{\text{max}} \}
\] (5)
\[
\mathcal{L}(x_0, \sigma_0) = L(x_0, \sigma_0) \bigcup_{x_k \in L(x_0, \sigma_0)} B(x_k),
\] (6)
where \( \Delta_{\text{max}} \) is the maximum allowable trust-region size and the relaxed level-set is required because the trust-region algorithm may attempt to evaluate the high-fidelity function at points outside of the level set at \( x_0 \). Second, the level sets of \( \Phi(x_k, \sigma_k > \sigma_0) \) must be contained within \( \mathcal{L}(x_0, \sigma_0) \), and third, \( \mathcal{L}(x_0, \sigma_0) \) must be a compact set. These properties ensure that all design iterates, \( x_k \), remain within \( \mathcal{L}(x_0, \sigma_0) \).
Although other merit functions, such as augmented Lagrangians, are possible, we restrict our attention to merit functions based on quadratic penalty functions because it is trivial to show that they are bounded from below if the objective function attains a finite global minimum, and there is no need to consider arbitrarily bad Lagrange multiplier estimates. The merit function used in this method is the objective function plus the scaled sum-squares of the constraint violation, where \( g^+(x) \) denotes the componentwise values of the violated (nonnegative) inequality constraints, \( g^+(x) = \max\{g(x), 0\} \),
\[
\Phi(x, \sigma_k) = f_{\text{high}}(x) + \frac{\sigma_k}{2} h(x)^T h(x) + \frac{\sigma_k}{2} g^+(x)^T g^+(x).
\] (7)
The parameter \( \sigma_k \) is a penalty weight, which must go to \( +\infty \) as the iteration \( k \) goes to \( +\infty \). Note that when using a quadratic penalty function for constrained optimization, under suitable hypotheses on the optimization algorithm and penalty function, the sequence of iterates generated, \( \{x_k\} \), can either terminate at a feasible regular point at which the Karush–Kuhn–Tucker (KKT) conditions are satisfied, or at a point that minimizes the squared norm of the constraint violation, \( h(x)^T h(x) + g^+(x)^T g^+(x) \) (Nocedal and Wright 2006; Bertsekas 1999).
We now define a surrogate merit function, \( \hat{\Phi}(x, \sigma_k) \), which replaces \( f_{\text{high}}(x) \) with its surrogate model \( m_k(x) \),
\[
\hat{\Phi}(x, \sigma_k) = m_k(x) + \frac{\sigma_k}{2} h(x)^T h(x) + \frac{\sigma_k}{2} g^+(x)^T g^+(x).
\] (8)
Optimization is performed on this function, and updates to the trust-region are based on how changes in this surrogate merit function compare with changes in the original merit function, \( \Phi(x, \sigma_k) \).
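The merit functions (7) and (8) can be sketched as a single routine that accepts either the high-fidelity function or its surrogate; the objective and constraints below are illustrative stand-ins, not the paper's test problem.

```python
import numpy as np

# Quadratic penalty merit function of eq. (7); passing the surrogate
# model m_k in place of f_high gives the surrogate merit of eq. (8).
def merit(f, x, sigma, h, g):
    """Phi(x, sigma) = f(x) + (sigma/2) h^T h + (sigma/2) g+^T g+."""
    hx = np.atleast_1d(h(x))
    gplus = np.maximum(np.atleast_1d(g(x)), 0.0)   # violated inequalities only
    return f(x) + 0.5 * sigma * (hx @ hx + gplus @ gplus)

f_high = lambda x: x[0]**2 + x[1]**2   # illustrative objective
h = lambda x: x[0] + x[1] - 1.0        # equality constraint h(x) = 0
g = lambda x: -x[0]                    # inequality constraint g(x) <= 0

x = np.array([0.2, 0.2])               # infeasible point: h(x) = -0.6
print(merit(f_high, x, sigma=10.0, h=h, g=g))   # 0.08 + 5 * 0.36 = 1.88
```

Note that the satisfied inequality constraint contributes nothing to the penalty; only the equality violation is penalized at this point.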
Our calibration strategy is to make the surrogate models \( m_k(x) \) fully linear, where the following definition of a fully linear model is from Conn et al.:
**Definition 1** Let a function \( f_{\text{high}}(\mathbf{x}) : \mathbb{R}^n \rightarrow \mathbb{R} \), continuously differentiable and with a Lipschitz continuous derivative, be given. A set of model functions \( \mathcal{M} = \{ m : \mathbb{R}^n \rightarrow \mathbb{R}, \ m \in C^1 \} \) is called a fully linear class of models if the following hold:
There exist positive constants \( \kappa_f, \kappa_g \) and \( \kappa_{blg} \) such that for any \( \mathbf{x} \in L(x_0, \sigma_0) \) and \( \Delta_k \in (0, \Delta_{\text{max}}] \) there exists a model function \( m_k(\mathbf{x}) \) in \( \mathcal{M} \) with Lipschitz continuous gradient and corresponding Lipschitz constant bounded by \( \kappa_{blg} \), and such that the error between the gradient of the model and the gradient of the function satisfies
\[
\| \nabla f_{\text{high}}(\mathbf{x}) - \nabla m_k(\mathbf{x}) \| \leq \kappa_g \Delta_k \quad \forall \mathbf{x} \in \mathcal{B}_k
\]
(9)
and the error between the model and the function satisfies
\[
\left| f_{\text{high}}(\mathbf{x}) - m_k(\mathbf{x}) \right| \leq \kappa_f \Delta_k^2 \quad \forall \mathbf{x} \in \mathcal{B}_k.
\]
(10)
Such a model \( m_k(\mathbf{x}) \) is called fully linear on \( \mathcal{B}_k \) (Conn et al. 2009a).
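The bound (10) can be observed numerically: with an interpolation model that has linear precision, halving the trust-region size should roughly quarter the worst-case surrogate error over the region. The sketch below checks this contraction using illustrative functions and a simple cross stencil; neither is the paper's construction.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Illustrative vectorized high- and low-fidelity models (stand-ins).
f_high = lambda X: X[:, 0]**2 + X[:, 1]**2 + 0.05 * np.sin(4.0 * X[:, 0] + 1.0)
f_low  = lambda X: X[:, 0]**2 + X[:, 1]**2

def max_error(delta):
    """Max |f_high - m_k| over the trust region B_k of size delta,
    with the RBF error model built on a 5-point cross stencil."""
    stencil = 0.5 * delta * np.array(
        [[0.0, 0.0], [1, 0], [-1, 0], [0, 1], [0, -1]])
    e_k = RBFInterpolator(stencil, f_high(stencil) - f_low(stencil))
    t = np.linspace(-delta, delta, 21)
    X = np.array([[u, v] for u in t for v in t])
    X = X[np.linalg.norm(X, axis=1) <= delta]      # keep points in B_k
    return np.max(np.abs(f_high(X) - (f_low(X) + e_k(X))))

e1, e2 = max_error(0.5), max_error(0.25)
print(e1, e2)   # the error contracts superlinearly as Delta_k shrinks
assert e2 < e1 / 2.0
```

This is consistent with \( |f_{\text{high}}(x) - m_k(x)| \leq \kappa_f \Delta_k^2 \): the observed contraction factor is near the factor of four predicted by the quadratic bound.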
### 2.3 Trust-region subproblem
At each trust-region iteration a point likely to decrease the merit function is found by solving one of two minimization problems on the fully linear model for a step \( s_k \), on a trust region of size \( \Delta_k \):
\[
\min_{s_k \in \mathbb{R}^n} \quad m_k(\mathbf{x}_k + s_k)
\]
\[
\text{s.t.} \quad \mathbf{h}(\mathbf{x}_k + s_k) = 0
\]
\[
\mathbf{g}(\mathbf{x}_k + s_k) \leq 0
\]
\[
\| s_k \| \leq \Delta_k,
\]
(11)
or
\[
\min_{s_k \in \mathbb{R}^n} \quad \hat{\Phi}_k(\mathbf{x}_k + s_k, \sigma_k)
\]
\[
\text{s.t.} \quad \| s_k \| \leq \Delta_k.
\]
(12)
The subproblem in (12) is used initially to reduce constraint infeasibility. However, this subproblem has the limitation that the norm of its objective function Hessian grows without bound as the penalty parameter increases to infinity. Therefore, to both speed convergence and prevent Hessian conditioning issues, the subproblem in (11) with explicit constraint handling is used as soon as a point that satisfies \( \mathbf{h}(\mathbf{x}) = 0 \) and \( \mathbf{g}(\mathbf{x}) \leq 0 \) exists within the current trust region. The existence of such a point is estimated by a linear approximation to the constraints; however, if the linearized estimate falsely suggests that (11) has a feasible solution, then we take recourse to (12).\(^2\)
For both trust-region subproblems, (11) and (12), the subproblem must be solved such that the 2-norm of the first-order optimality conditions is less than a constant \( \tau_k \). This requirement is stated as \( \| \nabla_x \mathcal{L}_k \| \leq \tau_k \), where \( \mathcal{L}_k \) is the Lagrangian for the trust-region subproblem used. There are two requirements for \( \tau_k \). First \( \tau_k < \varepsilon \), where \( \varepsilon \) is the desired termination tolerance for the optimization problem in (1). Second, \( \tau_k \) must decrease to zero as the number of iterations goes to infinity. Accordingly, we define
\[
\tau_k = \min \left[ \beta \varepsilon, \alpha \Delta_k \right],
\]
(13)
with a constant \( \beta \in (0, 1) \) to satisfy the overall tolerance criteria, and a constant \( \alpha \in (a, 1) \) multiplying \( \Delta_k \) to ensure that \( \tau_k \) goes to zero. The constant \( a \) will be defined as part of a sufficient decrease condition that forces the size of the trust-region to decrease to zero in the next subsection.
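The two subproblems can be sketched with an off-the-shelf SQP solver; the surrogate, constraint, and parameter values below are illustrative stand-ins. The explicitly constrained subproblem (11) returns a feasible step, while the penalty subproblem (12) returns a slightly infeasible step whose violation shrinks as \( \sigma_k \) grows.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative surrogate and constraint (stand-ins, not the paper's case).
m_k = lambda x: (x[0] - 1.0)**2 + (x[1] - 1.0)**2   # surrogate model
h   = lambda x: x[0] + x[1] - 1.0                    # equality constraint
x_k, delta_k, sigma_k = np.array([0.0, 0.0]), 2.0, 10.0

# squared form of ||s|| <= Delta_k avoids nondifferentiability at s = 0
tr = {"type": "ineq", "fun": lambda s: delta_k**2 - s @ s}

# Subproblem (11): explicit constraints inside the trust region.
res11 = minimize(lambda s: m_k(x_k + s), np.zeros(2), method="SLSQP",
                 constraints=[tr, {"type": "eq", "fun": lambda s: h(x_k + s)}])

# Subproblem (12): quadratic-penalty surrogate merit, trust region only.
phi_hat = lambda s: m_k(x_k + s) + 0.5 * sigma_k * h(x_k + s)**2
res12 = minimize(phi_hat, np.zeros(2), method="SLSQP", constraints=[tr])

print(res11.x)   # ~[0.5, 0.5]: exactly feasible
print(res12.x)   # ~[0.545, 0.545]: slightly infeasible
```

Increasing \( \sigma_k \) drives the penalty solution toward feasibility, which is the mechanism behind the growth requirements on the penalty schedule discussed in Section 2.7.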
### 2.4 Trust-region updating
Without using the high-fidelity function gradient, the trust-region update scheme must ensure the size of the trust region decreases to zero to establish convergence. To do this, a requirement similar to the fraction of Cauchy decrease requirement in the unconstrained trust-region formulation is used (see, for example, Conn et al. 2009a). We require that the improvement in our merit function is at least a small constant \( a \in (0, \varepsilon] \) multiplying \( \Delta_k \),
\[
\hat{\Phi}(\mathbf{x}_k, \sigma_k) - \hat{\Phi}(\mathbf{x}_k + s_k, \sigma_k) \geq a \Delta_k.
\]
(14)
The sufficient decrease condition is enforced through the trust region update parameter, \( \rho_k \). The update parameter is the ratio of the actual reduction in the merit function to the predicted reduction in the merit function unless the sufficient decrease condition is not met,
\[
\rho_k = \begin{cases}
0 & \hat{\Phi}(\mathbf{x}_k, \sigma_k) - \hat{\Phi}(\mathbf{x}_k + s_k, \sigma_k) < a \Delta_k \\
\frac{\Phi(\mathbf{x}_k, \sigma_k) - \Phi(\mathbf{x}_k + s_k, \sigma_k)}{\hat{\Phi}(\mathbf{x}_k, \sigma_k) - \hat{\Phi}(\mathbf{x}_k + s_k, \sigma_k)} & \text{otherwise}.
\end{cases}
\]
(15)
The size of the trust region, \( \Delta_k \), must now be updated based on the quality of the surrogate model prediction. The size of the trust region is increased if the surrogate model predicts the change in the function value well, kept constant if the
---
\(^2\)Note that an initial feasible point, \( x_0 \) could be found directly using the gradients of \( h(x) \) and \( g(x) \); however, since the penalty method includes the descent direction of the objective function it may better guide the optimization process in the case of multiple feasible regions. Should the initial iterate be feasible, the deficiencies of a quadratic penalty function are not an issue.
prediction is fair, and the trust region is contracted if the model predicts the change poorly. Specifically, we update the trust region size using
\[
\Delta_{k+1} = \begin{cases}
\min\{\gamma_1 \Delta_k, \Delta_{\text{max}}\} & \text{if } \eta_1 \leq \rho_k \leq \eta_2, \\
\gamma_0 \Delta_k & \text{if } \rho_k \leq \eta_0, \\
\Delta_k & \text{otherwise},
\end{cases}
\]
(16)
where \(0 < \eta_0 < \eta_1 < 1 < \eta_2, 0 < \gamma_0 < 1,\) and \(\gamma_1 > 1.\) Regardless of whether or not a sufficient decrease has been found, the trust-region center will be updated if the trial point has decreased the value of the merit function,
\[
x_{k+1} = \begin{cases}
x_k + s_k & \text{if } \Phi(x_k, \sigma_k) > \Phi(x_k + s_k, \sigma_k) \\
x_k & \text{otherwise}.
\end{cases}
\]
(17)
A new surrogate model, \(m_{k+1}(x),\) is then built such that it is fully linear on a region \(\mathcal{B}_{k+1}\) having center \(x_{k+1}\) and size \(\Delta_{k+1}.\) The new fully linear model is constructed using the procedure of Wild et al. (2008) with the calibration technique of March and Willcox (2010).
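The updating rules (14) through (17) can be collected into a small routine; the parameter values below are illustrative choices, not prescribed by the paper.

```python
import numpy as np

# Illustrative trust-region parameters: sufficient-decrease constant a,
# acceptance thresholds eta, and contraction/expansion ratios gamma.
a, eta0, eta1, eta2 = 1e-4, 0.25, 0.75, 2.0
gamma0, gamma1, delta_max = 0.5, 2.0, 10.0

def update(phi_old, phi_new, phi_hat_old, phi_hat_new, delta_k):
    """Return (rho_k, Delta_{k+1}, accept) from merit-function values,
    implementing the sufficient-decrease test (14), the ratio (15),
    the size update (16), and the center update (17)."""
    pred = phi_hat_old - phi_hat_new            # predicted reduction
    rho = 0.0 if pred < a * delta_k else (phi_old - phi_new) / pred
    if eta1 <= rho <= eta2:                     # good prediction: expand
        delta_next = min(gamma1 * delta_k, delta_max)
    elif rho <= eta0:                           # poor prediction: contract
        delta_next = gamma0 * delta_k
    else:
        delta_next = delta_k
    accept = phi_old > phi_new                  # move center on any decrease
    return rho, delta_next, accept

# The surrogate predicted a reduction of 1.0; the true reduction was 0.9,
# so the prediction is good and the trust region expands.
rho, delta_next, accept = update(5.0, 4.1, 5.0, 4.0, delta_k=1.0)
assert accept and delta_next == 2.0
```

Note that the center update is decoupled from the size update: a step that fails the sufficient-decrease test still moves the center whenever it reduces the merit function.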
### 2.5 Termination
For termination, we must establish that the first-order KKT conditions,
\[
\left\| \nabla f_{\text{high}}(x_k) + A(x_k)^T \lambda(x_k) \right\| \leq \varepsilon,
\]
(18)
\[
\left\| \left[ h(x_k)^T, g^+(x_k)^T \right] \right\| \leq \varepsilon
\]
(19)
are satisfied at \(x_k,\) where \(A(x_k)\) is defined to be the Jacobian of all active or violated constraints at \(x_k,\)
\[
A(x_k) = \left[ \nabla h(x_k)^T, \nabla g^+(x_k)^T \right]^T,
\]
(20)
and \(\lambda(x_k)\) are Lagrange multipliers. The additional complementarity conditions are assumed to be satisfied from the solution of (11). The constraint violation criterion, (19), can be evaluated directly. However, the first-order condition, (18), cannot be verified directly in the derivative-free case because the gradient, \(\nabla f_{\text{high}}(x_k),\) is unknown. Therefore, for first-order optimality we require two conditions: first-order optimality with the surrogate model, and a sufficiently small trust region. The first-order optimality condition using the surrogate model is
\[
\left\| \nabla m_k(x_k) + A(x_k)^T \hat{\lambda}(x_k) \right\| \leq \max \left[ \beta \varepsilon, \alpha \Delta_k, a \right] \leq \varepsilon,
\]
(21)
where \(\hat{\lambda}\) are the Lagrange multipliers computed using the surrogate model and active constraint set estimated from the surrogate model instead of the high-fidelity function. This approximate stationarity condition is similar to what would be obtained using a finite-difference gradient estimate with a fixed step size where the truncation error in the approximate derivative eventually dominates the stationarity measure, but in our case the sufficient decrease modification of the update parameter eventually dominates the stationarity measure (Boggs and Dennis 1976). For \(\Delta_k \to 0,\) we have from (9) that \(\|\nabla f_{\text{high}}(x) - \nabla m_k(x)\| \to 0,\) and also \(\|\hat{\lambda}(x_k) - \lambda(x_k)\| \to 0.\) Therefore, we have first-order optimality as given in (18). In practice, the algorithm is terminated when the constraint violation is small, (19), first-order optimality is satisfied on the model, (21), and the trust region is small, say \(\Delta_k < \varepsilon_2\) for a small \(\varepsilon_2.\)
### 2.6 Implementation
The numerical implementation of the multifidelity optimization algorithm, which does not compute the gradient of the high-fidelity objective function, is presented as Algorithm 1. A set of possible parameters that may be used in this algorithm is listed in Table 2 in Section 4. A key element of this algorithm is the logic to switch from the penalty function trust-region subproblem, (12), to the subproblem that uses the constraints explicitly, (11). Handling the constraints explicitly will generally lead to faster convergence and fewer function evaluations; however, a feasible solution to this subproblem likely does not exist at early iterations. If either the constraint violation is sufficiently small, \(\|\left[ h(x_k)^T, g^+(x_k)^T \right]\| \leq \varepsilon,\) or the linearized steps, \( \delta_h \), satisfying \( h(x) + \nabla h(x)^T \delta_h = 0 \) for all equality and inequality constraints are all smaller than the size of the trust region, then the subproblem with the explicit constraints is attempted. If the optimization fails, then the penalty function subproblem is solved.
This method may be accelerated with the use of multiple lower-fidelity models. March and Willcox (2010) suggest a multifidelity filtering technique to combine estimates from multiple low-fidelity functions into a single maximum likelihood estimate of the high-fidelity function value. That technique will work unmodified within this multifidelity optimization framework and in many situations may improve performance.
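Under the assumption of a feasible starting point (so that subproblem (11) is always available and the merit function reduces to the objective), the main loop of Algorithm 1 can be sketched end-to-end on a toy two-dimensional problem. All models, the sampling stencil, and the parameter values below are illustrative stand-ins, not the paper's test case or settings.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

# Illustrative high-/low-fidelity pair and one inequality constraint.
f_high = lambda x: x[0]**2 + x[1]**2 + 0.05 * np.sin(4.0 * x[0])  # "expensive"
f_low  = lambda x: x[0]**2 + x[1]**2                              # "cheap"
g      = lambda x: 1.0 - x[0] - x[1]                              # g(x) <= 0

a, eta0, eta1, eta2 = 1e-6, 0.25, 0.75, 2.0
gamma0, gamma1, delta_max = 0.5, 2.0, 4.0
x_k, delta_k = np.array([2.0, 2.0]), 1.0        # feasible starting point

for k in range(30):
    # Steps 0/8: calibrated surrogate m_k = f_low + e_k, with e_k
    # interpolating f_high - f_low on a cross stencil inside B_k.
    stencil = x_k + 0.5 * delta_k * np.array(
        [[0.0, 0.0], [1, 0], [-1, 0], [0, 1], [0, -1]])
    err = np.array([f_high(s) - f_low(s) for s in stencil])
    e_k = RBFInterpolator(stencil, err)
    m_k = lambda x: f_low(x) + e_k(np.atleast_2d(x))[0]

    # Step 2a: subproblem (11); the iterate stays feasible here, so the
    # explicit-constraint subproblem is always available.
    res = minimize(lambda s: m_k(x_k + s), np.zeros(2), method="SLSQP",
                   constraints=[
                       {"type": "ineq", "fun": lambda s: -g(x_k + s)},
                       {"type": "ineq",          # ||s|| <= Delta_k, squared
                        "fun": lambda s: delta_k**2 - s @ s}])
    s_k = res.x

    # Steps 5-7: with a feasible iterate the merit function is just f_high.
    pred = m_k(x_k) - m_k(x_k + s_k)
    actual = f_high(x_k) - f_high(x_k + s_k)
    rho = 0.0 if pred < a * delta_k else actual / pred
    if eta1 <= rho <= eta2:
        delta_k = min(gamma1 * delta_k, delta_max)
    elif rho <= eta0:
        delta_k = gamma0 * delta_k
    if actual > 0.0:
        x_k = x_k + s_k

print(x_k, f_high(x_k))   # approaches the constrained optimum near (0.52, 0.48)
```

The high-fidelity function is evaluated only at the stencil points and trial steps; rejected steps shrink the trust region, which in turn tightens the surrogate and recovers progress.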
### 2.7 Theoretical considerations
This subsection discusses some performance limitations of the proposed algorithm as well as theoretical considerations needed for robustness.
This paper has not presented a formal convergence theory; at best, such a theory will apply only under many
**Algorithm 1: Multifidelity Objective Trust-Region Algorithm**
0: Set initial parameters, $a$, $\alpha$, $\beta$, $\varepsilon$, $\varepsilon_2$, $\eta_0$, $\eta_1$, $\eta_2$, $\gamma_1$, $\Delta_{\text{max}}$, and $\sigma_0$. Choose initial starting point $x_0$, and build initial surrogate model $m_0(x)$ fully linear on $\{x : \|x - x_0\| \leq \Delta_0\}$. Set $k = 0$.
1: Update tolerance, $\tau_k = \min[\beta\varepsilon, \alpha\Delta_k]$.
2: Choose and solve a trust-region subproblem:
2a: If the constraint violation is small, $\|\begin{bmatrix} h(x_k)^T & g^+(x_k)^T \end{bmatrix}\| \leq \varepsilon$, or the maximum linearized step to constraint feasibility for all active and violated constraints is smaller than the current trust region size, $\Delta_k$, then solve:
$$\min_{s_k \in \mathbb{R}^n} m_k(x_k + s_k)$$
subject to:
$$h(x_k + s_k) = 0$$
$$g(x_k + s_k) \leq 0$$
$$\|s_k\| \leq \Delta_k,$$
to convergence tolerance $\tau_k$.
2b: If 2a is not used or fails to converge to the required tolerance, solve the trust-region subproblem:
$$\min_{s_k \in \mathbb{R}^n} \hat{\Phi}_k(x_k + s_k, \sigma_k)$$
subject to:
$$\|s_k\| \leq \Delta_k.$$
to convergence tolerance $\tau_k$.
3: If $f_{\text{high}}(x_k + s_k)$ has not been evaluated previously, evaluate the high-fidelity function at that point.
3a: Store $f_{\text{high}}(x_k + s_k)$ in database.
4: Compute the merit function $\Phi(x_k, \sigma_k), \Phi(x_k + s_k, \sigma_k)$, and the surrogate merit function, $\hat{\Phi}(x_k + s_k, \sigma_k)$.
5: Compute the ratio of actual improvement to predicted improvement,
$$\rho_k = \begin{cases}
0 & \hat{\Phi}(x_k, \sigma_k) - \hat{\Phi}(x_k + s_k, \sigma_k) < a\Delta_k \\
\frac{\Phi(x_k, \sigma_k) - \Phi(x_k + s_k, \sigma_k)}{\hat{\Phi}(x_k, \sigma_k) - \hat{\Phi}(x_k + s_k, \sigma_k)} & \text{otherwise.}
\end{cases}$$
6: Update the trust region size according to $\rho_k$,
$$\Delta_{k+1} = \begin{cases}
\min\{\gamma_1\Delta_k, \Delta_{\text{max}}\} & \text{if } \eta_1 \leq \rho_k \leq \eta_2 \\
\gamma_0\Delta_k & \text{if } \rho_k \leq \eta_0, \\
\Delta_k & \text{otherwise},
\end{cases}$$
7: Accept or reject the trial point according to improvement in the merit function,
$$x_{k+1} = \begin{cases}
x_k + s_k & \Phi(x_k, \sigma_k) - \Phi(x_k + s_k, \sigma_k) > 0 \\
x_k & \text{otherwise}.
\end{cases}$$
8: Create new model $m_{k+1}(x)$ fully linear on $\{x : \|x - x_{k+1}\| \leq \Delta_{k+1}\}$.
9: Increment the penalty, $\sigma_{k+1} = \max\left[e^{(k+1)/10}, 1/\Delta_{k+1}^{1.1}\right]$. Increment $k$.
10: Check for convergence: if $\|\nabla m_k(x_k) + A(x_k)^T\lambda\| \leq \varepsilon$, $\|\begin{bmatrix} h(x_k)^T & g^+(x_k)^T \end{bmatrix}\| \leq \varepsilon$, and $\Delta_k \leq \varepsilon_2$ the algorithm is converged, otherwise go to step 1.
---
restrictive assumptions. In addition, we can at best guarantee convergence to a near-optimal solution (i.e., optimality to an a priori tolerance level and not to stationarity). Forcing the trust region size to decrease every time the sufficient decrease condition is not satisfied means that if the projection of the gradient onto the feasible domain is less than the parameter $a$, then the algorithm can fail to make progress. This limitation is presented in (21); however, it is not seen as severe because $a$ can be set at any value arbitrarily close to (but strictly greater than) zero.
A second limitation is that the model calibration strategy, generating a fully linear model, is theoretically only guaranteed to be possible for functions that are twice continuously differentiable and have Lipschitz-continuous first derivatives. Though this assumption may seem to limit some desired opportunities for derivative-free optimization of functions with noise, noise with certain characteristics like that discussed in Conn et al. (2009b, Section 9.3), or the case of models with dynamic accuracy as in Conn et al. (2000, Section 10.6) can be accommodated in this framework. Approaches such as those in Moré and Wild (2010) may be used to characterize the noise in a specific problem. However, our algorithm applies even in the case of general noise, where no guarantees can be made on the quality of the surrogate models. As will be shown in the test problem in Section 4, in such a case our approach exhibits robustness and significant advantages over gradient-based methods that are susceptible to poor gradient estimates.
Algorithm 1 is more complicated than conventional gradient-based trust-region algorithms using quadratic penalty functions, such as the algorithm in Conn et al. (2000, Chapter 14). There are three sources of added complexity. First, our algorithm increases the penalty parameter after each trust-region subproblem, as opposed to after a completed minimization of the penalty function. This can significantly reduce the number of required high-fidelity evaluations, but adds complexity to the algorithm. Second, our algorithm switches between two trust-region subproblems to avoid numerical issues associated with the conditioning of the quadratic penalty function Hessian. Third, in derivative-free optimization the size of the trust region must decrease to zero in order to demonstrate optimality. This adds unwanted coupling between the penalty parameter and the size of the trust region, which must be handled appropriately. We now discuss how these aspects of the algorithm place important constraints on the penalty parameter $\sigma_k$.
The requirements for the penalty parameter, $\sigma_k$, are that (i) $\lim_{k \to \infty} \sigma_k \Delta_k = \infty$, (ii) $\lim_{k \to \infty} \sigma_k \Delta_k^2 = 0$, and (iii) $\sum_{k=0}^{\infty} 1/\sigma_k$ is finite. The lower bound for the growth of $\sigma_k$, that (i) $\lim_{k \to \infty} \sigma_k \Delta_k = \infty$, comes from the properties of the minima of quadratic penalty functions presented in Conn et al. (2000, Chapter 14), properties of a fully linear model, and (13). If $x_k$ is at an approximate minimizer of the surrogate quadratic penalty function, (8), then a bound on the constraint violation is
$$\left\| \begin{bmatrix} h(x_k)^T & g^+(x_k)^T \end{bmatrix} \right\| \leq \frac{\kappa_1 (\max\{\alpha \Delta_k, a\} + \kappa_g \Delta_k) + \|\lambda(x^*)\| + \kappa_2 \|x_k - x^*\|}{\sigma_k},$$
(22)
where $x^*$ is a KKT point of (1), and $\kappa_1, \kappa_2$ are finite positive constants. If $\{\sigma_k \Delta_k\}$ diverges then an iteration exists where both the bound, (22), holds (the trust region size must be large enough that a feasible point exists in the interior) and the constraint violation is less than the given tolerance $\varepsilon$. This enables the switching between the two trust region subproblems, (12) and (11). The upper bound for the growth of $\sigma_k$, that (ii) $\lim_{k \to \infty} \sigma_k \Delta_k^2 = 0$, comes from the smoothness of the quadratic penalty function. To establish an upper bound for the Hessian 2-norm for the subproblem in (12) we compare the value of the merit function at a point $\Phi(x_k + p, \sigma_k)$ with its linearized prediction based on $\Phi(x_k, \sigma_k)$, $\tilde{\Phi}(x_k + p, \sigma_k)$. If $\kappa_{fg}$ is the Lipschitz constant for $\nabla f_{\text{high}}(x)$, $\kappa_{cm}$ is the maximum Lipschitz constant for the constraints, and $\kappa_{cgm}$ is the maximum Lipschitz constant for a constraint gradient, we can show that
$$\| \Phi(x_k + p, \sigma_k) - \tilde{\Phi}(x_k + p, \sigma_k) \| \leq \left[ \kappa_{fg} + \sigma_k \left( \kappa_{cm} \left\| \begin{bmatrix} h(x_k)^T & g^+(x_k)^T \end{bmatrix} \right\| + \kappa_{cgm} \| A(x_k) \| \| p \| + \kappa_{cm} \kappa_{cgm} \| p \|^2 \right)^2 \right] \| p \|^2.$$
(23)
We have a similar result for $\hat{\Phi}(x, \sigma_k)$ by replacing $\kappa_{fg}$ with the sum $\kappa_{fg} + \kappa_g$, using the definition of a fully linear model, and bounding $\| p \|$ by $\Delta_k$. Therefore, the error in a linearized prediction of the surrogate model goes to zero, and Lipschitz-type smoothness is ensured, provided that the sequence $\{\sigma_k \Delta_k^2\}$ converges to zero, regardless of the constraint violation.
The final requirement, that (iii) $\sum_{k=0}^{\infty} 1/\sigma_k$ is finite, comes from the need for the size of the trust region to go to zero in gradient-free optimization. The sufficient decrease condition, that $\rho_k = 0$ unless $\hat{\Phi}(x_k, \sigma_k) - \hat{\Phi}(x_k + s_k, \sigma_k) \geq a \Delta_k$, ensures that the trust region size decreases unless the change in the merit function satisfies $\Phi(x_k, \sigma_k) - \Phi(x_k + s_k, \sigma_k) \geq \eta_0 a \Delta_k$. This provides an upper bound on the total number of iterations in which the size of the trust region is kept constant or increased. We have assumed that the merit function is bounded from below and also that the trust-region iterates remain within a level-set $L(x_0, \sigma_0)$ as defined by (4). We now consider the merit function written in an alternate form,
$$\Phi(x_k, \sigma_k) = f_{\text{high}}(x_k) + \frac{\sigma_k}{2} \left\| \begin{bmatrix} h(x_k)^T, g^+(x_k)^T \end{bmatrix} \right\|^2.$$
(24)
From (22), if $\sigma_k$ is large enough such that the bound on the constraint violation is less than unity, we may use the bound
on the constraint violation in (22) to show that an upper bound on the total remaining change in the merit function is
\[
f_{\text{high}}(x_k) - \min_{x \in L(x_0, \sigma_0)} f_{\text{high}}(x) + \frac{\left[\kappa_1(\max\{\alpha \Delta_k, a\} + \kappa_g \Delta_k) + \|\lambda(x^*)\| + \kappa_2 \|x_k - x^*\|\right]^2}{\sigma_k}.
\]
(25)
Each term in the numerator is bounded from above because $\Delta_k$ is always bounded from above by $\Delta_{\text{max}}$, $\|\lambda(x^*)\|$ is bounded from above because $x^*$ is a regular point, and $\|x_k - x^*\|$ is bounded from above because $L(x_0, \sigma_0)$ is a compact set. Therefore, if the series $\{1/\sigma_k\}$ has a finite sum, then the total remaining improvement in the merit function is finite. Accordingly, the sum of the series $\{\Delta_k\}$ must be finite, and $\Delta_k \to 0$ as $k \to \infty$. The prescribed sequence for $\{\sigma_k\}$ in our algorithm satisfies these requirements for a broad range of problems.
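The prescribed schedule $\sigma_k = \max\{e^{k/10}, 1/\Delta_k^{1.1}\}$ from Table 2 can be checked numerically against requirements (i)-(iii). A minimal sketch, assuming for this check only a purely contracting trust region $\Delta_k = \gamma_0^k \Delta_0$ with $\gamma_0 = 0.5$:

```python
import math

def penalty(k, delta_k):
    """Penalty schedule from Algorithm 1 / Table 2: max{e^(k/10), 1/delta_k^1.1}."""
    return max(math.exp(k / 10.0), 1.0 / delta_k ** 1.1)

# Illustrative trust-region sequence (assumption: pure contraction, gamma_0 = 0.5).
gamma_0, delta_0 = 0.5, 1.0
deltas = [delta_0 * gamma_0 ** k for k in range(200)]
sigmas = [penalty(k, d) for k, d in enumerate(deltas)]

prod1 = [s * d for s, d in zip(sigmas, deltas)]       # (i): should diverge
prod2 = [s * d ** 2 for s, d in zip(sigmas, deltas)]  # (ii): should vanish
tail_sum = sum(1.0 / s for s in sigmas)               # (iii): should stay bounded
```

With this contraction, $1/\Delta_k^{1.1}$ dominates $e^{k/10}$, so $\sigma_k \Delta_k = \Delta_k^{-0.1}$ grows, $\sigma_k \Delta_k^2 = \Delta_k^{0.9}$ shrinks, and $1/\sigma_k$ is a convergent geometric series.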
### 3 Multifidelity objective and constraint optimization
This section considers a more general constrained optimization problem with a computationally expensive objective function and computationally expensive constraints. Specifically, we consider the case in which the gradients of both the expensive objective and the expensive constraints are unavailable, unreliable, or expensive to estimate. Accordingly, the multifidelity optimization problem in (1) is augmented with the high-fidelity constraint $c_{\text{high}}(x) \leq 0$. In addition, we have a low-fidelity estimate of this constraint function, $c_{\text{low}}(x)$, which estimates the same metric as $c_{\text{high}}(x)$ but with unknown error. Therefore, our goal is to find the vector $x \in \mathbb{R}^n$ of $n$ design variables that solves the nonlinear constrained optimization problem,
\[
\begin{align*}
\min_{x \in \mathbb{R}^n} & \quad f_{\text{high}}(x) \\
\text{s.t.} & \quad h(x) = 0 \\
& \quad g(x) \leq 0 \\
& \quad c_{\text{high}}(x) \leq 0,
\end{align*}
\]
(26)
where $h(x)$ and $g(x)$ represent vectors of inexpensive equality and inequality constraints with derivatives that are either known or may be estimated cheaply. The same assumptions made for the expensive objective function formulation in Section 2.1 apply to the functions in this formulation. It is also necessary to make an assumption similar to that in Section 2.2: that a quadratic penalty function with the new high-fidelity constraint is bounded from below within an initial expanded level-set. Note that multiple high-fidelity constraints can be used if an initial point $x_0$ is given that is feasible with respect to all constraints; however, because of the effort required to construct approximations of multiple high-fidelity constraints, we recommend combining all of the high-fidelity constraints into a single high-fidelity constraint through, for example, a discriminant function (Rvachev 1963; Papalambros and Wilde 2000).
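The idea of collapsing several expensive constraints into one can be illustrated with a smooth aggregate. The sketch below uses a log-sum-exp (KS-style) aggregate as a stand-in for the discriminant functions cited above; it is not the paper's specific construction, only one common choice:

```python
import math

def aggregate_constraint(cs, rho=50.0):
    """Collapse several constraint values c_i(x) <= 0 into a single value.

    Log-sum-exp (KS-style) aggregate: a smooth upper bound on max_i c_i that
    tightens as rho grows. Illustrative stand-in, not the paper's method.
    """
    m = max(cs)  # shift for numerical stability
    return m + math.log(sum(math.exp(rho * (c - m)) for c in cs)) / rho

# If the aggregate is <= 0, every individual constraint is <= 0, so a single
# surrogate can be calibrated to the aggregate alone.
cs = [-0.3, -0.05, -0.6]
agg = aggregate_constraint(cs)
```

Because the aggregate upper-bounds the individual constraints, enforcing the single combined constraint is conservative with respect to the original set.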
The optimization problem in (26) is solved in two phases. First, the multifidelity optimization method presented in Section 2 is used to find a feasible point, and then an interior point formulation is used to find a minimum of the optimization problem in (26). The interior point formulation is presented in Section 3.2 and the numerical implementation is presented in Section 3.3.
#### 3.1 Finding a feasible point
This algorithm begins by finding a point that is feasible with respect to all of the constraints by applying Algorithm 1 to the optimization problem
\[
\begin{align*}
\min_{x \in \mathbb{R}^n} & \quad c_{\text{high}}(x) \\
\text{s.t.} & \quad h(x) = 0 \\
& \quad g(x) \leq 0,
\end{align*}
\]
(27)
until a point that is feasible with respect to the constraints in (26) is found. If this optimization problem is unconstrained (i.e., there are no constraints $h(x)$ and $g(x)$) then the trust-region algorithm of Conn et al. (2009a) is used with the multifidelity calibration method of March and Willcox (2010). The optimization problem in (27) may violate one of the assumptions for Algorithm 1 in that $c_{\text{high}}(x)$ may not be bounded from below. This issue will be addressed in the numerical implementation of the method in Section 3.3.
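The feasibility phase above amounts to a driver loop that stops as soon as any iterate satisfies $c_{\text{high}}(x) \leq 0$, rather than running the inner optimizer to full convergence. A minimal, self-contained illustration; the inner step here is a hypothetical placeholder for one iteration of Algorithm 1, and the toy constraint is invented for the example:

```python
def find_feasible_point(c_high, x0, inner_step, max_iters=100):
    """Phase-1 driver: iterate on min c_high(x), cf. (27), stopping early
    at the first point with c_high(x) <= 0.

    c_high     -- expensive constraint function
    x0         -- initial design vector (list of floats)
    inner_step -- hypothetical stand-in for one iteration of Algorithm 1;
                  maps the current iterate to the next one
    """
    x = list(x0)
    for _ in range(max_iters):
        if c_high(x) <= 0.0:
            return x  # feasible: hand off to the interior-point phase
        x = inner_step(x)
    raise RuntimeError("no feasible point found within iteration budget")

# Toy illustration: c_high(x) = x[0] - 1 is feasible for x[0] <= 1;
# the mock inner step simply decreases x[0].
x_feas = find_feasible_point(lambda x: x[0] - 1.0, [5.0],
                             lambda x: [x[0] - 0.5])
```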
#### 3.2 Interior point trust-region method
Once a feasible point is found, we minimize the high-fidelity objective function while ensuring that the constraints are never again violated; that is, we solve (26). This is accomplished in two steps: first, trust-region subproblems are solved using fully linear surrogate models for both the high-fidelity objective function and the high-fidelity constraint; second, the trust-region step is evaluated for feasibility and any infeasible step is rejected. The surrogate model for the objective function is $m_k(x)$ as defined in (2). For the constraint, the surrogate model, $\tilde{m}_k(x)$, is defined as
\[
\tilde{m}_k(x) = c_{\text{low}}(x) + \tilde{e}_k(x).
\]
(28)
From the definition of a fully linear model, (9) and (10), $\tilde{m}_k(x)$ satisfies
\begin{equation}
\| \nabla c_{\text{high}}(x) - \nabla \tilde{m}_k(x) \| \leq \kappa_c \Delta_k \quad \forall x \in B_k,
\end{equation}
(29)
\begin{equation}
|c_{\text{high}}(x) - \tilde{m}_k(x)| \leq \kappa_c \Delta_k^2 \quad \forall x \in B_k.
\end{equation}
(30)
In addition, we require that our procedure to construct fully linear models ensures that at the current design iterate, the fully linear models exactly interpolate the function they are modeling,
\begin{equation}
m_k(x_k) = f_{\text{high}}(x_k),
\end{equation}
(31)
\begin{equation}
\tilde{m}_k(x_k) = c_{\text{high}}(x_k).
\end{equation}
(32)
This is required so that every trust-region subproblem is feasible at its initial point $x_k$.
The trust-region subproblem is
\begin{align}
\min_{s_k \in \mathbb{R}^n} & \quad m_k(x_k + s_k) \\
\text{s.t.} & \quad h(x_k + s_k) = 0 \\
& \quad g(x_k + s_k) \leq 0 \\
& \quad \tilde{m}_k(x_k + s_k) \leq \max\{c_{\text{high}}(x_k), -w \Delta_k\} \\
& \quad \|s_k\| \leq \Delta_k.
\end{align}
(33)
The surrogate model constraint does not have zero as a right-hand side to account for the fact that the algorithm is looking for interior points. The right-hand side, $\max\{c_{\text{high}}(x_k), -w \Delta_k\}$, ensures that the constraint is initially feasible and that the protection against constraint violation decreases to zero as the number of iterations increases to infinity. The constant $w$ must be greater than $\alpha$, which is defined as part of the termination tolerance $\tau_k$ in (13). The trust-region subproblem is solved to the same termination tolerance as in the multifidelity objective function formulation, $\| \nabla_x \Omega_k \| \leq \tau_k$, where $\Omega_k$ is the Lagrangian of (33).
The center of the trust region is updated if a decrease in the objective function is found at a feasible point,
\begin{equation}
x_{k+1} = \begin{cases}
x_k + s_k & \text{if } f_{\text{high}}(x_k) > f_{\text{high}}(x_k + s_k) \\
& \quad \text{and } c_{\text{high}}(x_k + s_k) \leq 0 \\
x_k & \text{otherwise},
\end{cases}
\end{equation}
with $h(x_k + s_k) = 0$ and $g(x_k + s_k) \leq 0$ already satisfied in (33). The trust-region size update must ensure that the predictions of the surrogate models are accurate and that the size of the trust region goes to zero in the limit as the number of iterations goes to infinity. Therefore, we again impose a sufficient decrease condition: the change in the objective function must be at least a constant, $a$, multiplied by $\Delta_k$,
\begin{equation}
\Delta_{k+1} = \begin{cases}
\min\{\gamma_1 \Delta_k, \Delta_{\text{max}}\} & \text{if } f_{\text{high}}(x_k) \\
& \quad - f_{\text{high}}(x_k + s_k) \geq a \Delta_k \\
& \quad \text{and } c_{\text{high}}(x_k + s_k) \leq 0 \\
\gamma_0 \Delta_k & \text{otherwise}.
\end{cases}
\end{equation}
New surrogate models, $m_{k+1}(x)$ and $\tilde{m}_{k+1}(x)$, are then built such that they are fully linear on a region $B_{k+1}$ having center $x_{k+1}$ and size $\Delta_{k+1}$. The new fully linear models are constructed using the procedure of Wild et al. (2008) with the calibration technique of March and Willcox (2010).
#### 3.3 Multifidelity objective and constraint implementation
The numerical implementation of this multifidelity optimization algorithm is presented as Algorithm 2. A set of possible parameters for this algorithm is listed in Table 2 in Section 4. An important implementation issue is finding the initial feasible point. Algorithm 1 is used to minimize the high-fidelity constraint value subject to the constraints with available derivatives in order to find a feasible point. However, Algorithm 1 uses a quadratic penalty function to handle the constraints with available derivatives when they are violated. Convergence of a penalty function method requires that the objective function be bounded from below; therefore, a more general form of (27) for finding an initial feasible point is
\begin{align}
\min_{x \in \mathbb{R}^n} & \quad [\max\{c_{\text{high}}(x) + d, 0\}]^2 \\
\text{s.t.} & \quad h(x) = 0 \\
& \quad g(x) \leq 0.
\end{align}
The maximization in the objective removes the need for the high-fidelity constraint to be bounded from below, and the constraint violation is squared to ensure that the gradient of the objective is continuous. The constant $d$ accounts for the fact that the surrogate model will have some error in its prediction of $c_{\text{high}}(x)$, so seeking a slightly negative value of the constraint may save iterations compared with seeking a value that is exactly zero. For example, if $d \geq \kappa_c \Delta_k^2$ and $\tilde{m}_k(x) + d = 0$, then (30) guarantees that $c_{\text{high}}(x) \leq 0$.
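The feasibility objective and the role of $d$ can be written out directly. A small sketch; the values of `kappa_c` and `delta` are illustrative only:

```python
def phase1_objective(c_val, d=1.0):
    """Feasibility-phase objective: [max{c_high(x) + d, 0}]^2.

    Bounded below by zero even if c_high is unbounded below, and the hinge is
    squared so the gradient of the objective is continuous.
    """
    return max(c_val + d, 0.0) ** 2

# Rationale for d (cf. (30)): if the surrogate satisfies
# |c_high - m_tilde| <= kappa_c * delta**2 and m_tilde(x) = -d with
# d >= kappa_c * delta**2, then c_high(x) <= -d + kappa_c * delta**2 <= 0.
kappa_c, delta = 10.0, 0.25                     # illustrative values
d = kappa_c * delta ** 2                        # smallest offset giving the guarantee
m_tilde = -d                                    # surrogate driven to the offset target
worst_c_high = m_tilde + kappa_c * delta ** 2   # worst case allowed by (30)
```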
There is a similar consideration in the solution of (33), where a slightly negative value of the surrogate constraint is desired. If this subproblem is solved with an interior point algorithm, this should be satisfied automatically.
**Algorithm 2: Multifidelity Objective and Constraint Trust-Region Algorithm**
-1: Find a feasible design vector using Algorithm 1 to iterate on:
\[
\min_{x \in \mathbb{R}^n} \left[ \max \{ c_{\text{high}}(x) + d, 0 \} \right]^2 \\
\text{s.t. } h(x) = 0 \\
g(x) \leq 0,
\]
-1a: Return all evaluations of \( c_{\text{high}}(x) \).
0: Set initial parameters, \( a, \alpha, \beta, d, w, \varepsilon, \varepsilon_2, \gamma_0, \gamma_1, \Delta_0, \Delta_{\text{max}}, \) and \( \sigma_0 \). Build initial surrogate models \( m_0(x), \tilde{m}_0(x) \) fully linear on \( \{ x : \| x - x_0 \| \leq \Delta_0 \} \), where \( x_0 \) is the terminal point of step -1. Set \( k = 0 \).
1: Update tolerance, \( \tau_k = \min \{ \beta \varepsilon, \alpha \Delta_k \} \).
2: Solve the trust-region subproblem:
\[
\min_{s_k \in \mathbb{R}^n} m_k(x_k + s_k) \\
\text{s.t. } h(x_k + s_k) = 0 \\
g(x_k + s_k) \leq 0 \\
\tilde{m}_k(x_k + s_k) \leq \max \{ c_{\text{high}}(x_k), -w \Delta_k \} \\
\| s_k \| \leq \Delta_k.
\]
3: If \( f_{\text{high}}(x_k + s_k) \) or \( c_{\text{high}}(x_k + s_k) \) have not been evaluated previously, evaluate the high-fidelity functions at that point.
3a: Store \( f_{\text{high}}(x_k + s_k) \) and \( c_{\text{high}}(x_k + s_k) \) in a database.
4: Accept or reject the trial point according to:
\[
x_{k+1} = \begin{cases}
x_k + s_k & \text{if } f_{\text{high}}(x_k) > f_{\text{high}}(x_k + s_k) \text{ and } c_{\text{high}}(x_k + s_k) \leq 0 \\
x_k & \text{otherwise.}
\end{cases}
\]
5: Update the trust region size according to:
\[
\Delta_{k+1} = \begin{cases}
\min \{ \gamma_1 \Delta_k, \Delta_{\text{max}} \} & \text{if } f_{\text{high}}(x_k) - f_{\text{high}}(x_k + s_k) \geq a \Delta_k \text{ and } c_{\text{high}}(x_k + s_k) \leq 0 \\
\gamma_0 \Delta_k & \text{otherwise.}
\end{cases}
\]
6: Create new models \( m_{k+1}(x) \) and \( \tilde{m}_{k+1}(x) \) fully linear on \( \{ x : \| x - x_{k+1} \| \leq \Delta_{k+1} \} \). Increment \( k \).
7: Check for convergence: if the trust region constraint is inactive,
\[
\| \nabla m(x_k) + A(x_k)^T \hat{\lambda}(x_k) + \nabla \tilde{m}(x_k) \hat{\lambda}_{\tilde{m}} \| \leq \varepsilon,
\]
and \( \Delta_k \leq \varepsilon_2 \), the algorithm is converged; otherwise go to step 1.
However, if a sequential quadratic programming method is used, the constraint violation will have a numerical tolerance that is either slightly negative or slightly positive. It may be necessary to bias the subproblem toward a value of the constraint that is more negative than the optimizer's constraint violation tolerance to ensure the solution is an interior point. This avoids difficulties with the convergence of the trust-region iterates.
A final implementation note is that if a high-fidelity constraint has numerical noise or steep gradients, it may be wise to shrink the trust region at a slower rate, i.e., to increase \( \gamma_0 \). This helps to ensure that the trust region does not decrease to zero at a suboptimal point.
### 4 Supersonic airfoil design test problem
This section presents results of the two multifidelity optimization algorithms on a supersonic airfoil design problem.
4.1 Problem setup
The supersonic airfoil design problem has 11 parameters: the angle of attack, 5 spline points on the upper surface, and 5 spline points on the lower surface. Each surface is actually defined by 7 spline points, but the leading- and trailing-edge points are fixed because both edges must be sharp for the low-fidelity methods used. The airfoils are constrained such that the minimum thickness-to-chord ratio is 0.05 and the thickness is positive everywhere on the airfoil. In addition, there are lower and upper bounds on all spline points.
Three supersonic airfoil analysis models are available: a linearized panel method, a nonlinear shock-expansion theory method, and Cart3D, an Euler CFD solver (Aftosmis 1997). Note that Cart3D has a finite convergence tolerance, so there is some numerical noise in the lift and drag predictions. In addition, because random airfoils are used as initial conditions, Cart3D may fail to converge, in which case the results of the panel method are used. Figure 1 shows computed pressure distributions for each of the models for a 5% thick biconvex airfoil at Mach 1.5 and $2^\circ$ angle of attack. Table 1 provides the estimated lift and drag coefficients for the same airfoil and indicates the approximate level of accuracy of the codes with respect to each other.
Table 1 5% thick biconvex airfoil results comparison at Mach 1.5 and $2^\circ$ angle of attack
| | Panel | Shock-expansion | Cart3D |
|----------------|---------|-----------------|--------|
| $C_L$ | 0.1244 | 0.1278 | 0.1250 |
| % Diff | 0.46% | 2.26% | 0.00% |
| $C_D$ | 0.0164 | 0.0167 | 0.01666|
| % Diff | 1.56% | 0.24% | 0.00% |
Percent difference is taken with respect to the Cart3D results
We first present single-fidelity results using state-of-the-art derivative-free methods. Then the following sections present results for three optimization examples each using this airfoil problem to demonstrate the capabilities of the optimization algorithms presented. In the first example, Section 4.3, the airfoil drag is minimized using the constrained multifidelity objective function formulation presented in Section 2 with only the simple geometric constraints. In the second example, Section 4.4, the airfoil lift-to-drag ratio is maximized subject to a constraint that the drag coefficient is less than 0.01, where the constraint is handled with the multifidelity framework presented in Section 3. In the final example, Section 4.5, the airfoil lift-to-drag ratio is maximized subject to the constrained drag coefficient and both the objective function and the constraints are handled with the multifidelity framework presented in Section 3. The initial airfoils for all problems are randomly generated and likely will not satisfy the constraints.
The three multifidelity airfoil problems are solved with four alternative optimization algorithms: Sequential Quadratic Programming (SQP) (MathWorks, Inc. 2010); a first-order consistent multifidelity trust-region algorithm that uses an SQP formulation and an additive correction (Alexandrov et al. 2001); the high-fidelity-gradient-free approach presented in this paper using a Gaussian radial basis function and a fixed spatial correlation parameter, $\xi = 2$; and the approach presented in this paper using a maximum likelihood estimate to find an improved correlation length, $\xi = \xi^*$. The Gaussian correlation functions used in these results are all isotropic. An anisotropic correlation function (i.e., choosing a correlation length for each direction in the design space) may speed convergence of this algorithm and reduce sensitivity to Hessian conditioning. The parameters used for the optimization algorithm are presented in Table 2. The fully linear models are constructed using the procedure of Wild et al. (2008) with the calibration
Table 2 List of constants used in the algorithm
| Constant | Description | Value |
|----------|--------------------------------------------------|-----------|
| $a$ | Sufficient decrease constant | $1 \times 10^{-4}$ |
| $\alpha$ | Convergence tolerance multiplier | $1 \times 10^{-2}$ |
| $\beta$ | Convergence tolerance multiplier | $1 \times 10^{-2}$ |
| $d$ | Artificial lower bound for constraint value | 1 |
| $w$ | Constraint violation conservatism factor | 0.1 |
| $\varepsilon, \varepsilon_2$ | Termination tolerance | $5 \times 10^{-4}$ |
| $\gamma_0$ | Trust region contraction ratio | 0.5 |
| $\gamma_1$ | Trust region expansion ratio | 2 |
| $\eta_0$ | Trust region contraction criterion | 0.25 |
| $\eta_1, \eta_2$ | Trust region expansion criterion | 0.75, 2.0 |
| $\Delta_0$ | Initial trust region radius | 1 |
| $\Delta_{\text{max}}$ | Maximum trust region size | 20 |
| $\sigma_k$ | Penalty parameter | $\max[e^{k/10}, 1/\Delta_k^{1.1}]$ |
| $\delta_x$ | Finite difference step | $1 \times 10^{-5}$ |
All parameters used in constructing the radial basis function error model are the same as in March and Willcox (2010). These parameter values are based on recommendations for unconstrained trust-region algorithms and, through numerical testing, appear to perform well on an assortment of problems.
technique of March and Willcox (2010) and the parameters stated therein.
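For implementation, the Table 2 constants can be gathered into a single configuration. A sketch only: the duplicated first $\alpha$ row of the table is read here as the sufficient-decrease constant $a$, and $\sigma_k$ is stored as its update rule rather than a fixed number:

```python
import math

# Algorithm constants from Table 2.
PARAMS = {
    "a": 1e-4,          # sufficient decrease constant
    "alpha": 1e-2,      # convergence tolerance multiplier
    "beta": 1e-2,       # convergence tolerance multiplier
    "d": 1.0,           # artificial lower bound for constraint value
    "w": 0.1,           # constraint violation conservatism factor
    "eps": 5e-4,        # termination tolerances (epsilon and epsilon_2)
    "gamma_0": 0.5,     # trust region contraction ratio
    "gamma_1": 2.0,     # trust region expansion ratio
    "eta_0": 0.25,      # trust region contraction criterion
    "eta_1": 0.75,      # trust region expansion criteria
    "eta_2": 2.0,
    "delta_0": 1.0,     # initial trust region radius
    "delta_max": 20.0,  # maximum trust region size
    "delta_x": 1e-5,    # finite difference step
}

def sigma(k, delta_k):
    """Penalty parameter rule from Table 2: max{e^(k/10), 1/delta_k^1.1}."""
    return max(math.exp(k / 10.0), 1.0 / delta_k ** 1.1)
```

Note that $w > \alpha$ holds for these values, as required by the interior-point subproblem in Section 3.2.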
4.2 Single-fidelity derivative-free optimization
For benchmark purposes, we first solve the airfoil optimization problem using three single-fidelity gradient-free optimization methods: the Nelder–Mead simplex algorithm (Nelder and Mead 1965), the global optimization method DIRECT (Jones et al. 1993), and the constrained gradient-free optimizer COBYLA (Powell 1994, 1998). The test case is to minimize the drag of a supersonic airfoil estimated with a panel method, subject to the airfoil having positive thickness everywhere and a thickness-to-chord ratio of at least 5%. This test case is a lower-fidelity version of the example in Section 4.3. Nelder–Mead simplex and DIRECT are unconstrained optimizers that use a quadratic penalty function known to perform well on this problem (March and Willcox 2010), while COBYLA handles the constraints explicitly. On this problem, starting from ten random initial airfoils, the best observed results were 5,170 function evaluations for the Nelder–Mead simplex algorithm, over 11,000 evaluations for DIRECT, and 6,284 evaluations for COBYLA. Such high numbers of function evaluations mean that these single-fidelity gradient-free algorithms are too expensive for use with an expensive forward solver, such as Cart3D.
4.3 Multifidelity objective function results
This section presents optimization results in terms of the number of high-fidelity function evaluations required to find the minimum drag for a supersonic airfoil at Mach 1.5 with only geometric constraints on the design. Two cases are tested: the first uses the shock-expansion method as the high-fidelity function and the panel method as the low-fidelity function; the second uses Cart3D as the high-fidelity function and the panel method as the low-fidelity function. These problems are solved using the multifidelity optimization algorithm for a computationally expensive objective function and constraints with available derivatives presented in Section 2.
The average numbers of high-fidelity function evaluations required to find a locally optimal design starting from random initial airfoils are presented in Table 3. The results show that our approach uses approximately 78% fewer high-fidelity function evaluations than SQP and approximately 30% fewer than the first-order consistent trust-region method using finite difference gradient estimates. In addition, Fig. 2 compares the objective function and constraint violation histories versus the number of high-fidelity evaluations for these methods, as well as for DIRECT and Nelder-Mead simplex, from a representative random initial airfoil for the case with the shock-expansion method as the high-fidelity function. For the single-fidelity optimization using Cart3D, the convergence results were highly sensitive to the finite difference step length. The step size required tuning, and the step with the highest success rate was used. A reason for this is that the initial airfoils were randomly generated, and the convergence tolerance
Table 3 The average number of high-fidelity function evaluations to minimize the drag of a supersonic airfoil with only geometric constraints
| High-fidelity | Low-fidelity | SQP | First-order TR | RBF, $\xi = 2$ | RBF, $\xi = \xi^*$ |
|-----------------|---------------|--------|----------------|----------------|---------------------|
| Shock-expansion | Panel method | 314 (-) | 110 (-65%) | 73 (-77%) | 68 (-78%) |
| Cart3D | Panel method | 359* (-) | 109 (-70%) | 80 (-78%) | 79 (-78%) |
The asterisk for the Cart3D results means a significant fraction of the optimizations failed and the average is taken over fewer samples. The numbers in parentheses indicate the percentage reduction in high-fidelity function evaluations relative to SQP.
Fig. 2 Convergence history for minimizing the drag of an airfoil using the shock-expansion theory method as the high-fidelity function subject to only geometric constraints. The methods presented are our calibration approach, a first-order consistent multifidelity trust-region algorithm, sequential quadratic programming, DIRECT, and Nelder-Mead simplex. Both DIRECT and Nelder-Mead simplex use a fixed penalty function to handle the constraints, so only an objective function value is shown. COBYLA and BOBYQA were attempted, but failed to find the known solution to this problem. On the constraint violation plot, missing points denote a feasible iterate, and the sudden decrease in the constraint violation for the RBF calibration approach at 37 high-fidelity evaluations (13th iteration, $\sigma_k = 9.13 \times 10^5$) is when the algorithm switches from solving (12) to solving (11)
Table 4 The average number of high-fidelity constraint evaluations required to maximize the lift-to-drag ratio of a supersonic airfoil estimated with a panel method subject to a multifidelity constraint
| High-fidelity | Low-fidelity | SQP | First-order TR | RBF, $\xi = 2$ | RBF, $\xi = \xi^*$ |
|---------------------|------------------|--------------|----------------|----------------|-------------------|
| Shock-expansion | Panel method | 827 (-) | 104 (-87%) | 104 (-87%) | 115 (-86%) |
| Cart3D | Panel method | 909* (-) | 100 (-89%) | 103 (-89%) | 105 (-88%) |
The asterisk for the Cart3D results means a significant fraction of the optimizations failed and the average is taken over fewer samples. The numbers in parentheses indicate the percentage reduction in high-fidelity function evaluations relative to SQP
Fig. 3 Initial airfoil and supersonic airfoil with the maximum lift-to-drag ratio having drag less than 0.01 and 5% thickness at Mach 1.5
Table 5 The average number of high-fidelity objective function and high-fidelity constraint evaluations to optimize a supersonic airfoil for a maximum lift-to-drag ratio subject to a maximum drag constraint
| | High-fidelity | Low-fidelity | SQP | First-order TR | RBF, $\xi = 2$ | RBF, $\xi = \xi^*$ |
|-------------|-----------------|--------------|-----------|----------------|----------------|---------------------|
| Objective: | Shock-expansion | Panel method | 773 (−) | 132 (−83%) | 93 (−88%) | 90 (−88%) |
| Constraint: | Shock-expansion | Panel method | 773 (−) | 132 (−83%) | 97 (−87%) | 96 (−88%) |
| Objective: | Cart3D | Panel method | 1168* (−) | 97 (−92%) | 104 (−91%) | 112 (−90%) |
| Constraint: | Cart3D | Panel method | 2335* (−) | 97 (−96%) | 115 (−95%) | 128 (−94%) |
The asterisk for the Cart3D results means a significant fraction of the optimizations failed and the average is taken over fewer samples. The numbers in parentheses indicate the percentage reduction in high-fidelity function evaluations relative to SQP.
of Cart3D for airfoils with sharp peaks and negative thickness was large compared with the airfoils near the optimal design. This sensitivity of gradient-based optimizers to the finite difference step length highlights the benefit of gradient-free approaches, especially when constraint gradient estimates become poor.
4.4 Multifidelity constraint results
This section presents optimization results in terms of the number of function evaluations required to find the maximum lift-to-drag ratio for a supersonic airfoil at Mach 1.5 subject to both geometric constraints and the requirement that the drag coefficient is less than 0.01. The lift-to-drag ratio is computed with the panel method; however, the drag coefficient constraint is handled using the multifidelity technique presented in Section 3. Two cases are examined: in the first the shock-expansion method models the high-fidelity constraint and the panel method models the low-fidelity constraint; in the second case, Cart3D models the high-fidelity constraint and the panel method models the low-fidelity constraint. Table 4 presents the average number of high-fidelity constraint evaluations required to find the optimal design using SQP, a first-order consistent trust-region algorithm and the multifidelity techniques developed in this paper. A significant decrease (almost 90%) in the number of high-fidelity function evaluations is observed when compared with SQP. Performance is almost the same as the first-order consistent trust-region algorithm.
4.5 Multifidelity objective and constraint results

This final example maximizes the airfoil lift-to-drag ratio subject to the drag constraint, with both the objective function and the constraint handled by the multifidelity framework of Section 3. In the first case, the shock-expansion method is the high-fidelity analysis used to estimate both metrics of interest and the panel method is the low-fidelity analysis. In the second case, Cart3D is the high-fidelity analysis and the panel method is the low-fidelity analysis. The optimal airfoils are shown in Fig. 3. Table 5 presents the number of high-fidelity function evaluations required to find the optimal design using SQP, a first-order consistent trust-region algorithm, and the techniques developed in this paper. Again, a significant reduction (about 90%) in the number of high-fidelity function evaluations, both for the constraint and for the objective, is observed compared with SQP, and a similar number of high-fidelity function evaluations is observed when compared with the first-order consistent trust-region approach using finite differences.
5 Conclusion
This paper has presented two algorithms for multifidelity constrained optimization of computationally expensive functions when their derivatives are not available. The first method minimizes a high-fidelity objective function without using its derivative while satisfying constraints with available derivatives. The second method minimizes a high-fidelity objective without using its derivative while satisfying both constraints with available derivatives and an additional high-fidelity constraint without an available derivative. Both methods support multiple lower-fidelity models through the use of a multifidelity filtering technique, without any modification to the methods. For the supersonic airfoil design example considered here, the multifidelity methods yielded an approximately 90% reduction in the number of high-fidelity function evaluations compared with solution by a single-fidelity sequential quadratic programming method. In addition, the multifidelity methods performed similarly to a first-order consistent trust-region algorithm with gradients estimated using finite-difference approximations. This shows that derivative-free multifidelity methods provide a significant opportunity for optimization of computationally expensive functions without available gradients.
The behavior of the gradient-free algorithms presented here is slightly atypical of nonlinear programming methods. For example, their convergence is rapid initially and then slows when close to an optimal solution. In contrast, convergence of a gradient-based method is often initially slow and then accelerates when close to an optimal solution (e.g., as approximate Hessian information becomes more accurate in a quasi-Newton approach). Also, gradient-based optimizers typically find the local optimum nearest the initial design. Although the presented examples have unique optimal solutions by virtue of the physics involved, in a general problem the local optimum to which these gradient-free algorithms converge may not be the one in the immediate vicinity of the initial iterate. For example, when the initial iterate is itself a local optimum, the surrogate model may not capture this fact, and the iterate may move to a different point in the design space with a lower function value.
In the case of hard constraints, or when the objective function fails to exist if the constraints are violated, it is still possible to use Algorithm 2. After the initial feasible point is found, no design iterate is accepted if the high-fidelity constraint is violated, so the overall flow of the algorithm is unchanged. What must change is the technique used to build fully linear models. Constructing a fully linear model requires evaluating the objective function at a set of \( n + 1 \) points whose displacement vectors span \( \mathbb{R}^n \). When the objective function can be evaluated outside the feasible region, the constraints do not influence the construction of the surrogate model. However, when the objective function does not exist where the constraints are violated, the points used to construct the surrogate model must all be feasible, which restricts the shape of the feasible region. Specifically, this requirement prohibits equality constraints and means that a strict linear independence constraint qualification must be satisfied everywhere in the design space (preventing two inequality constraints from mimicking an equality constraint). If these two additional conditions hold, then it is possible to construct fully linear models everywhere in the feasible design space and to use this algorithm to optimize computationally expensive functions with hard constraints.
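The interpolation condition behind a fully linear model can be illustrated with a short sketch (our own illustration under simplifying assumptions, not the paper's implementation): fit \( m(x) = c + g^T x \) through \( n + 1 \) sample points whose displacement vectors span \( \mathbb{R}^n \).

```python
import numpy as np

def linear_surrogate(points, values):
    """Interpolate m(x) = c + g.x through n+1 sample points.

    points: (n+1, n) array; the displacement vectors between the
    points must span R^n, otherwise the system below is singular.
    values: objective values at those points.
    """
    pts = np.asarray(points, dtype=float)
    vals = np.asarray(values, dtype=float)
    A = np.hstack([np.ones((pts.shape[0], 1)), pts])  # rows [1, x]
    coef = np.linalg.solve(A, vals)                   # [c, g_1..g_n]
    return coef[0], coef[1:]

# Example in R^2: recover f(x) = 3 + 2*x0 - x1 from three points
pts = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vals = [3.0 + 2.0 * p[0] - 1.0 * p[1] for p in pts]
c, g = linear_surrogate(pts, vals)
```

If the feasible region admits such a point set everywhere (no equality constraints, constraint qualification holding), this construction can proceed entirely inside the feasible region, as the paragraph above requires.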
Lastly, we comment on the applicability of the proposed multifidelity approach. Though this paper presents no formal convergence theory, and at best that theory would apply only under many restrictive assumptions (for example, on smoothness, constraint qualification, and always using a fully linear surrogate), our numerical experiments indicate the robustness of the approach on a broader class of problems. For example, the Cart3D CFD model employed in our case studies does not satisfy the Lipschitz continuity requirements, owing to finite convergence tolerances in determining the CFD solution; however, with the aid of the smooth calibrated surrogates combined with trust-region model management, our multifidelity method succeeds in finding locally optimal solutions. Another example is a high-fidelity optimal solution at which constraint qualification conditions are not satisfied. In the algorithms presented, the design vector iterate will approach such a local minimum and the sufficient decrease test on the change in objective function value will fail. This causes the size of the trust region to decay to zero around the local minimum even though the KKT conditions may not be satisfied.
In summary, this paper has presented a multifidelity optimization algorithm that does not require estimating gradients of high-fidelity functions, enables the use of multiple low-fidelity models, enables optimization of functions with hard constraints, and exhibits robustness over a broad class of optimization problems, even when non-smoothness is present in the objective function and/or constraints. For airfoil design problems, this approach has been shown to perform similarly in terms of the number of function evaluations to finite-difference-based multifidelity optimization methods. This suggests that the multifidelity derivative-free approach is a promising alternative for the wide range of problems where finite-difference gradient approximations are unreliable.
**Acknowledgments** The authors gratefully acknowledge support from NASA Langley Research Center contract NNL07AA33C, technical monitor Natalia Alexandrov, and a National Science Foundation graduate research fellowship. In addition, we wish to thank Michael Aftosmis and Marian Nemec for support with Cart3D.
**References**
Aftosmis MJ (1997) Solution adaptive cartesian grid methods for aerodynamic flows with complex geometries. In: 28th computational fluid dynamics lecture series, von Karman Institute for Fluid Dynamics, Rhode-Saint-Genèse, Belgium, lecture series 1997-02
Alexandrov N, Lewis R, Gumbert C, Green L, Newman P (1999) Optimization with variable-fidelity models applied to wing design. Tech. Rep. CR-209826, NASA
Alexandrov N, Lewis R, Gumbert C, Green L, Newman P (2001) Approximation and model management in aerodynamic optimization with variable-fidelity models. AIAA J 38(6):1093–1101
Audet C, Dennis Jr JE (2004) A pattern search filter method for nonlinear programming without derivatives. SIAM J Optim 14(4):980–1010
Bertsekas DP (1999) Nonlinear programming, 2nd edn. Athena Scientific
Boggs PT, Dennis Jr JE (1976) A stability analysis for perturbed nonlinear iterative methods. Math Comput 30(134):199–215
Booker AJ, Dennis Jr JE, Frank PD, Serafini DB, Torczon V, Trosset MW (1999) A rigorous framework for optimization of expensive functions by surrogates. Struct Optim 17(1):1–13
Castro J, Gray G, Giunta A, Hough P (2005) Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions. Tech. Rep. SAND2005-7498, Sandia
Conn A, Gould N, Toint P (2000) Trust-region methods. MPS/SIAM series on optimization. Society for Industrial and Applied Mathematics, Philadelphia
Conn A, Scheinberg K, Vicente L (2008) Geometry of interpolation sets in derivative free optimization. Math Program 111(1–2):141–172
Conn A, Scheinberg K, Vicente L (2009a) Global convergence of general derivative-free trust-region algorithms to first- and second-order critical points. SIAM J Optim 20(1):387–415
Conn AR, Scheinberg K, Vicente LN (2009b) Introduction to derivative-free optimization. MPS/SIAM series on optimization. Society for Industrial and Applied Mathematics, Philadelphia
Jones D (2001) A taxonomy of global optimization methods based on response surfaces. J Glob Optim 21:345–383
Jones D, Perttunen CD, Stuckmann BE (1993) Lipschitzian optimization without the Lipschitz constant. J Optim Theory Appl 79(1):157–181
Jones D, Schonlau M, Welch W (1998) Efficient global optimization of expensive black-box functions. J Glob Optim 13:455–492
Kennedy M, O’Hagan A (2000) Predicting the output from a complex computer code when fast approximations are available. Biometrika 87(1):1–13
Kolda TG, Lewis RM, Torczon V (2003) Optimization by direct search: new perspectives on classical and modern methods. SIAM Rev 45(3):385–482
Kolda TG, Lewis RM, Torczon V (2006) A generating set direct search augmented lagrangian algorithm for optimization with a combination of general and linear constraints. Tech. Rep. SAND2006-5315, Sandia
Lewis RM, Torczon V (2010) A direct search approach to nonlinear programming problems using an augmented lagrangian method with explicit treatment of linear constraints. Tech. Rep. WM-CS-2010-01, College of William and Mary Department of Computer Science
Liuzzi G, Lucidi S (2009) A derivative-free algorithm for inequality constrained nonlinear programming via smoothing of an $l_\infty$ penalty function. SIAM J Optim 20(1):1–29
March A, Willcox K (2010) A provably convergent multifidelity optimization algorithm not requiring high-fidelity gradients. In: 6th AIAA multidisciplinary design optimization specialist conference, Orlando, FL, AIAA 2010-2912
MathWorks, Inc. (2010) Constrained nonlinear optimization. Optimization toolbox user’s guide, v. 5
Moré JJ, Wild SM (2010) Estimating derivatives of noisy simulations. Tech. Rep. Preprint ANL/MCS-P1785-0810, Mathematics and Computer Science Division
Nelder JA, Mead RA (1965) A simplex method for function minimization. Comput J 7:308–313
Nocedal J, Wright S (2006) Numerical optimization, 2nd edn. Springer, New York
Papalambros PY, Wilde DJ (2000) Principles of optimal design, 2nd edn. Cambridge University Press, Cambridge
Powell MJD (1994) A direct search optimization method that models the objective and constraint functions by linear interpolation. In: Gomez S, Hennart J-P (eds) Advances in optimization and numerical analysis, vol 7. Kluwer Academic, Dordrecht, pp 51–67
Powell MJD (1998) Direct search algorithms for optimization calculations. Acta Numer 7:287–336
Rajnarayan D, Haas A, Kroo I (2008) A multifidelity gradient-free optimization method and application to aerodynamic design. In: 12th AIAA/ISSMO multidisciplinary analysis and optimization conference, Victoria, British Columbia, AIAA 2008-6020
Rvachev VL (1963) On the analytical description of some geometric objects. Tech. Rep. 4, Reports of Ukrainian Academy of Sciences (in Russian)
Sasena MJ, Papalambros P, Goovaerts P (2002) Exploration of metamodeling sampling criteria for constrained global optimization. Eng Optim 34(3):263–278
Wild S (2009) Derivative-free optimization algorithms for computationally expensive functions. PhD thesis, Cornell University
Wild S, Shoemaker CA (2009) Global convergence of radial basis function trust-region algorithms. Tech. Rep. Preprint ANL/MCS-P1580-0209, Mathematics and Computer Science Division
Wild S, Regis R, Shoemaker C (2008) ORBIT: optimization by radial basis function interpolation in trust-regions. SIAM J Sci Comput 30(6):3197–3219
Assessing diet of the non-indigenous predatory cladoceran *Cercopagis pengoi* using stable isotopes
PER B. HOLLILAND*, TOWE HOLMBORN† AND ELENA GOROKHOVA‡
DEPARTMENT OF SYSTEMS ECOLOGY, STOCKHOLM UNIVERSITY, SE-106 91 STOCKHOLM, SWEDEN
†PRESENT ADDRESS: CALLUNA AB, STORA NYGATAN 45, SE-111 27 STOCKHOLM, SWEDEN
‡PRESENT ADDRESS: DEPARTMENT OF APPLIED ENVIRONMENTAL SCIENCE, STOCKHOLM UNIVERSITY, SE-106 91 STOCKHOLM, SWEDEN
*CORRESPONDING AUTHOR: firstname.lastname@example.org
Received July 29, 2011; accepted in principle January 25, 2012; accepted for publication January 31, 2012
Corresponding editor: Mark J. Gibbons
In the Baltic Sea, the predatory cladoceran *Cercopagis pengoi* is a non-indigenous species that has the potential to compete for mesozooplankton with pelagic zooplanktivorous fish. To understand the extent of diet overlap with these fishes in a coastal area of the northern Baltic proper, we studied the feeding of *C. pengoi* using stable $^{13}$C and $^{15}$N isotope signatures of the predator and possible prey. Feasible combinations of sources were estimated in two ways: (i) with the IsoSource mixing model and (ii) with temporal-tracking analysis. Further, the contribution of different prey was related to ambient zooplankton composition to gauge selectivity. The modeling results indicate that *C. pengoi* is an opportunistic generalist predator with a positive selection towards older copepodites (CIV–VI) of *Acartia* spp. and *Eurytemora affinis*, which also make the greatest contribution to its diet. Positive selection towards podonid Cladocera is also likely. In contrast, evidence for extensive feeding on microzooplankton was inconclusive, and bosminids were not found to be an important prey in the zooplankton assemblages studied. As the derived diet of *C. pengoi* overlaps greatly with that of zooplanktivorous fish, food competition between these zooplanktivores is possible.
KEYWORDS: mixing models; temporal-tracking analysis; selectivity; food web interactions; zooplankton
INTRODUCTION
Non-indigenous species can have detrimental effects on the biodiversity and food web functioning of invaded ecosystems (Noonburg and Byers, 2005; Beardsley, 2006). In some cases, the impact on the recipient community has been catastrophic, as with the invasion of the ctenophore *Mnemiopsis leidyi* into the Black and Caspian seas (e.g. Kideys, 2002; Roohi et al., 2008). In other cases, however, no measurable impact (Gozlan, 2008) or even positive effects have been observed, as, for example, with another ctenophore, *Beroe ovata*, which is able to control populations of the earlier-introduced *M. leidyi* by predation (Shiganova et al., 2001). In general, to predict and evaluate the potential predatory impact of a newly introduced species, one must know the pre-existing trophic linkages in the food web as well as those that the invader establishes, and quantify its dietary composition and requirements.
*Cercopagis pengoi*, a predatory cladoceran originating from the Ponto-Caspian region (Rivier, 1998), was first recorded in 1992 in the Gulfs of Riga and Finland (Ojaveer and Lumberg, 1995; Krylov et al., 1999). Later,
it spread to other parts of the Baltic Sea (Gorokhova et al., 2000) and colonized the North American Great Lakes (Leppäkoski and Olenin, 2000; Therriault et al., 2002). *Cercopagis pengoi* is a voracious predator that can reach high densities during summer (Baltic Sea: up to 1800 ind m$^{-3}$, Uitto et al., 1999; Lake Ontario: up to 2600 ind m$^{-3}$, Ojaveer et al., 2001; Lake Michigan: 700 ind m$^{-3}$, Witt and Cáceres, 2004) and has the potential to alter native zooplankton populations (Ojaveer et al., 2004; Kotta et al., 2006). In pelagic food webs, it acts as both predator and prey and is recognized as a species with a large potential to affect food webs and fish feeding conditions in invaded ecosystems (Leppäkoski et al., 2002; Vanderploeg et al., 2002).
Before the introduction of *C. pengoi* into the Baltic Sea, the diet of Baltic zooplanktivorous fish, such as herring (*Clupea harengus*) and sprat (*Sprattus sprattus*), was largely composed of copepods (*Acartia* spp. and *Eurytemora affinis*) and cladocerans (podonids and *Bosmina maritima*), with a preference for the copepods (Rudstam et al., 1992; Mehner and Heerkloss, 1994; Arrhenius, 1996; Antsulevich and Välipakka, 2000). *Cercopagis* is a voracious predator, with a very peculiar feeding mode; it punctures the carapace and ingests soft tissues discarding the exoskeleton (Rivier, 1998). This feeding mode enables feeding on a broad size spectrum of prey from an order of magnitude below its own body mass (Pichlová-Ptáčníková and Vanderploeg, 2009). Previous field and experimental studies in the Baltic Sea (Simm et al., 2006; Lehtiniemi and Gorokhova, 2008) and the Laurentian Great Lakes (Laxson et al., 2003; Pichlová-Ptáčníková and Vanderploeg, 2009) have shown that *C. pengoi* feeds on copepods and cladocerans, and hence, it might compete with zooplanktivorous fish feeding on the same kind of prey. Evidence is, however, accumulating that *C. pengoi* can prey upon zooplankton that is too small for the adult fish to utilize efficiently, such as large ciliates, rotifers and small meroplanktonic larvae (Gorokhova et al., 2005; Simm et al., 2006; Lehtiniemi and Lindén, 2006). Because *C. pengoi* is also a prey for various Baltic fish, such as herring and sprat (Ojaveer and Lumberg, 1995; Antsulevich and Välipakka, 2000; Gorokhova et al., 2004), it may channel energy from previously underutilized biomass produced at lower trophic levels to fish. 
Therefore, if microzooplankton contributes substantially to the diet of *Cercopagis*, the invasion may have a positive effect on fish feeding conditions in the Baltic Sea, particularly in August–September, when mesozooplankton decline (Johansson et al., 1993; Adrian et al., 1999) during the consumption peak by young-of-the-year fish (Rudstam et al., 1992; Arrhenius and Hansson, 1993). During this period, the microzooplankton contribution to the total zooplankton biomass increases and, simultaneously, *C. pengoi* reaches its abundance peak and may become an important food source for adult herring and sprat, but also contributes to the diets of the young-of-the-year fish (Gorokhova et al., 2004).
The aim of this study was to determine temporal changes in the diet composition of *C. pengoi* in the northern Baltic proper. In particular, our objectives were to investigate (i) to what extent *C. pengoi* utilizes microzooplankton and (ii) whether *C. pengoi* has a preference for copepods, i.e. the preferred prey of planktivorous fish. Using stable C and N isotope signatures of potential zooplankton prey and of *C. pengoi*, feasible ranges of different prey contributions to *C. pengoi* nutrition were estimated using IsoSource mixing models (Phillips and Gregg, 2003). Stable isotope analysis (SIA) has proved useful in determining trophic interactions in aquatic ecosystems (Zanden and Rasmussen, 2001; Post, 2002). This method enables diet analysis when traditional methods are not applicable, as when the study organism's feeding habits make gut content analysis impossible. Such feeding habits include fluid feeding, as is the case with *C. pengoi*. Further, we related IsoSource-based estimates to ambient zooplankton community composition to derive prey preferences. In addition, we analysed relationships between the consumer and its potential prey on different sampling occasions and used significant positive relationships as an indication of strong trophic linkage, following the probabilistic approach of Melville and Connolly (Melville and Connolly, 2003); the results were compared with the IsoSource modelling results. This allowed us to evaluate the selectivity of *C. pengoi* for various zooplankton prey organisms and their contribution to its diet.
**METHOD**
**Study site and sampling**
Himmerfjärden Bay is situated in the southern archipelago of Stockholm, north-western Baltic proper. It is ~30 km long, has a mean salinity of 6 (PSU) and a mean depth of 17 m, and receives discharge water from a municipal water treatment plant located at the bay-head. Zooplankton for SIA and population analysis were sampled at two stations in Himmerfjärden Bay on a fortnightly basis in June–September 2007. The sampling stations, H2 and H4 (Fig. 1), are situated at the mouth of the bay (H2) and half-way up the bay, closer to the water treatment plant (H4).
Stable isotope analysis
Samples for SIA were taken from the upper 15 m of the water column using the same net as for the population analysis sampling. Immediately after collection, the zooplankton were separated from phytoplankton and cyanobacteria using a light trap (Larsson et al., 1986). The samples were diluted in filtered seawater and kept in insulated containers for transportation back to the laboratory (3–4 h), which allowed time for the zooplankton to clear their guts.
In the laboratory, the samples were sequentially filtered through different sieves (500, 250 and 35 μm) to obtain a rough size separation. The samples were then snap-frozen on a mesh at −80°C and stored for up to 10 days before sorting. Mesozooplankton samples were thawed and sorted by taxa and life stage under a dissecting microscope (Wild Heerbrugg, <×50) into the following groups: (i) copepodites CI–III (younger copepodites, stages I–III) of *Acartia* spp. (*A. bifilosa*, *A. tonsa* and *A. longiremis*) and *E. affinis*; (ii) *Acartia* spp. CIV–VI (older copepodites, stages IV–VI); (iii) *E. affinis* CIV–VI; (iv) podonids (*Podon leuckartii*, *P. intermedius* and *Pleopsis polyphemoides*); (v) *B. maritima*; (vi) juvenile *C. pengoi* (barb stage I) and (vii) adult *C. pengoi* (barb stages II and III, pooled due to their similar body sizes; Uitto et al., 1999). Sorted individuals were placed in pre-weighed tin capsules and dried at 60°C for 72 h. Three replicates were taken for each prey group, with an average sample dry weight of 0.13 mg, typically consisting of three to seven individuals for *C. pengoi* or >50 individuals for the copepodites and smaller cladocerans. To obtain microzooplankton samples for SIA, zooplankton retained on the 35-μm sieve were sieved once more through a 200-μm sieve to remove residual mesozooplankton. The fraction retained on the 35-μm sieve was considered microzooplankton; it was mainly composed of rotifers (*Keratella* spp. and *Synchaeta* spp.) and juvenile copepods (mainly nauplii of *Acartia* spp. and *E. affinis*). Using a dissecting microscope, the animals were pipetted onto pre-combusted (5 h at 500°C) and pre-weighed Whatman filters (47 mm) and dried at 60°C for 72 h. Once dry, four discs of 4-mm diameter were cut from the filters and placed in pre-weighed tin capsules; these samples had an average dry weight of 0.20 mg and consisted of several hundred individuals. All samples were stored in desiccators until shipping for analysis.
The SIA was conducted by continuous-flow mass spectrometry with an automated CN analyzer (SL 20-20, PDZ Europa) at the Stable Isotope Facility, University of California at Davis, USA. Ratios of $^{15}$N/$^{14}$N and $^{13}$C/$^{12}$C were expressed as the relative per mil ($\delta$, ‰) difference between samples and conventional standards (Vienna Pee Dee belemnite for C and atmospheric N$_2$ for N).
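As a numerical aside, the δ notation converts a sample's isotope ratio into a per mil deviation from a standard. A minimal sketch (the standard ratio constants below are commonly cited literature values, not data from this study):

```python
# delta (per mil) = (R_sample / R_standard - 1) * 1000, where R is the
# heavy-to-light isotope ratio (15N/14N or 13C/12C).
R_AIR_N2 = 0.0036765   # 15N/14N of atmospheric N2 (literature value)
R_VPDB_C = 0.0111802   # 13C/12C of Vienna Pee Dee belemnite (literature value)

def delta_per_mil(r_sample, r_standard):
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample whose 15N/14N ratio is 1% above the air standard sits near +10 per mil
d15n = delta_per_mil(R_AIR_N2 * 1.01, R_AIR_N2)
```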
**Effects of freezing on δ\textsuperscript{15}N values**
Snap freezing has been reported to increase δ\textsuperscript{15}N values in cyclopoid copepods and cladocerans (Feuchtmayr and Grey, 2003). Therefore, the effect of freezing on δ\textsuperscript{15}N values in our zooplankton samples was tested by taking an additional five replicate samples of *Acartia* spp. CIV–VI that were processed directly after sampling rather than frozen. The results were compared with five replicates of frozen samples from the same sampling occasion using an unpaired t-test. As no significant difference in δ\textsuperscript{15}N values was found between the frozen and fresh samples (unpaired t-test: $t_8 = 0.88$, $P > 0.4$), freezing was considered an adequate storage method, allowing the sampling schedule to be followed and all samples to be processed uniformly.
**Mixing models**
To determine the source contributions to the diet of *C. pengoi*, we estimated the relative contribution of each prey with the IsoSource mixing model (Phillips and Gregg, 2003) implemented in the SISUS platform (http://statacumen.com/sisus/). Only δ\textsuperscript{15}N values of potential prey and predators were used, because the relative uniformity of δ\textsuperscript{13}C values among prey and predators limited the utility of those data. The model examines all possible combinations of each prey's potential contribution (0–100%) in small increments (here 1%). Combinations that summed to within 0.01‰ of the *C. pengoi* signature were considered feasible solutions. The results are presented as histograms to demonstrate the frequency distribution of dietary biomass contributions (%). In the Baltic Sea, *C. pengoi* has a generation time of 14–17 days (Svensson and Gorokhova, 2007), and hence its isotopic signature should reflect the diet over that period. Therefore, to obtain prey isotopic compositions representative of the consumer signature, food sources were averaged over a 2-week period, i.e. between neighbouring sampling occasions. Food sources used in the mixing model are required to be significantly different from each other; they were compared pairwise prior to being used as end-members in SISUS with an unpaired t-test ($P < 0.05$ in all cases). Two \textsuperscript{15}N fractionation factors were tested for the model calculations: 2.4‰ (calculated using the model for invertebrates suggested by Caut et al. (Caut et al., 2009) and the data for H2 as the station least affected by the sewage effluents) and 3.4‰ (the average fractionation factor for δ\textsuperscript{15}N recommended by Post, 2002). As the latter factor resulted in non-converging models (in 13 of 37 cases, data not shown), whereas all models using the 2.4‰ factor converged, the results presented here are based on the 2.4‰ fractionation factor.
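The exhaustive-enumeration idea behind IsoSource can be sketched as follows; this is a simplified single-isotope illustration with hypothetical two-source data, not the SISUS implementation itself:

```python
from itertools import product

def isosource_sketch(consumer_d15n, source_d15n, frac=2.4, step=1, tol=0.02):
    """Enumerate source proportions in `step`-% increments summing to
    100% and keep combinations whose mixture d15N matches the
    fractionation-corrected consumer signature to within `tol` per mil."""
    target = consumer_d15n - frac          # correct for trophic enrichment
    feasible = []
    for combo in product(range(0, 101, step), repeat=len(source_d15n) - 1):
        last = 100 - sum(combo)            # remaining share for last source
        if last < 0:
            continue
        props = combo + (last,)
        mix = sum(p * d for p, d in zip(props, source_d15n)) / 100.0
        if abs(mix - target) <= tol:
            feasible.append(props)
    return feasible

# Hypothetical case: consumer at 9.4 per mil, two prey at 6.0 and 8.0 per mil
sols = isosource_sketch(9.4, [6.0, 8.0])
```

With a 2.4‰ fractionation the corrected target is 7.0‰, so feasible solutions cluster around a 50/50 split of the two hypothetical sources; with more sources, the output becomes a frequency distribution of feasible contributions of the kind reported in Fig. 4.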
All statistical analyses were performed using GraphPad Prism 4.01 (GraphPad Software, USA); significance was accepted when $P < 0.05$.
**Temporal-tracking analysis**
To determine whether temporal tracking occurred, mean δ\textsuperscript{13}C and δ\textsuperscript{15}N values for *C. pengoi* and potential prey on each sampling occasion and station were used as Cartesian coordinates, and Euclidean distances were calculated between the values for *C. pengoi* and a prey on all dates when both occurred. These distances were averaged ($D_0$) to produce a measure of correlation in a two-dimensional space (tracking). To obtain a distribution of predator/potential-prey distances, the sampling date labels of the prey groups were permuted and the Euclidean distances recalculated. The observed $D_0$ of the predator/prey combination was then compared with this distribution of possible $D$ values, giving a probabilistic significance test (Melville and Connolly, 2003). If the $D_0$ value was small relative to the distribution of possible values, then *C. pengoi* was said to be tracking that particular prey. This was done for all possible combinations of prey against the observed *C. pengoi* data.
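The permutation logic described above can be sketched in a few lines (a minimal illustration with hypothetical signatures; the distances and the P% statistic are computed as described in the text):

```python
import itertools
import math

def mean_distance(pred, prey):
    """Mean Euclidean distance between paired (d13C, d15N) points."""
    return sum(math.dist(a, b) for a, b in zip(pred, prey)) / len(pred)

def tracking_p_pct(pred, prey):
    """Compare the observed mean distance D0 with D under all
    relabelings of the prey's sampling dates; return the percentage
    of permuted D values smaller than D0 (the P% column of Table I).
    Small values indicate temporal tracking."""
    d0 = mean_distance(pred, prey)
    perms = [mean_distance(pred, q) for q in itertools.permutations(prey)]
    return 100.0 * sum(d < d0 for d in perms) / len(perms)

# Hypothetical (d13C, d15N) values on four dates; the prey drifts in
# parallel with the predator, so no relabeling brings them closer.
pred = [(-24.0, 9.0), (-23.5, 9.5), (-23.0, 10.0), (-22.5, 10.5)]
prey = [(-24.2, 6.6), (-23.7, 7.1), (-23.2, 7.6), (-22.7, 8.1)]
p_pct = tracking_p_pct(pred, prey)   # 0.0, i.e. strong tracking
```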
**RESULTS**
**Seasonal dynamics of zooplankton communities**
The total biomass of zooplankton communities varied at both stations over the season, with ~2-fold higher values at H4 than at H2 (Fig. 2). At both stations, two biomass peaks occurred, one at the end of July (940 and 560 mg m\textsuperscript{-3} at H4 and H2, respectively) and the other in September (690 and 270 mg m\textsuperscript{-3} at H4 and H2, respectively). The zooplankton community structure differed between the two sites: *Acartia* spp., *E. affinis* and microzooplankton contributed most to the zooplankton stocks at station H2, while *B. maritima* and *E. affinis* dominated at station H4. Throughout the season, *C. pengoi* biomass was relatively low at both stations, contributing up to 2 and 5% to the total zooplankton biomass, at H2 and H4, respectively, and with a 10-fold difference between the sites (≤5 and ≤46 mg m\textsuperscript{-3} at H2 and H4, respectively; Fig. 2).
δ¹⁵N and δ¹³C values in the predator and prey
The δ¹⁵N values varied from 6 to 10‰ and from 10 to 14‰ in samples collected at H2 and H4, respectively, while δ¹³C values were more similar between the stations: −25 to −21‰ at H2 and −25 to −19‰ at H4 (Fig. 3). There were considerable differences in the isotopic composition of zooplankton between the two stations (Fig. 3). At H4, zooplankton δ¹⁵N values were 3–4‰ higher than those at H2; the differences were significant for all prey groups considered (unpaired t-test; P < 0.05 in all cases). In most groups, the δ¹³C values followed the same trend, being higher at H4 than at H2; the exceptions were podonids (P > 0.10) and *C. pengoi* (P > 0.15), which did not differ between the stations. Since there were significant differences in the isotopic composition of zooplankton between the stations, the data from H2 and H4 were used separately in the mixing models and the temporal-tracking analysis.
When values were averaged over the season, no significant differences in δ¹³C values were found between prey groups at either station (P > 0.05 in all cases), whereas differences in δ¹⁵N were significant (P < 0.05 in all cases). At both sites, *C. pengoi* occupied the highest trophic position among the zooplankton tested, as indicated by its δ¹⁵N values (9.47 and 13.95‰ for H2 and H4, respectively; Fig. 3). There were no significant differences in either δ¹³C or δ¹⁵N values between juvenile and adult *C. pengoi* (paired t-test; δ¹⁵N: t = 1.744, df = 8, P > 0.12; δ¹³C: t = 1.626, df = 8, P > 0.14), and therefore all samples for this species were pooled before use in the IsoSource models and the temporal-tracking analysis. A high level of within-station variability was observed, mostly due to changes in isotopic composition over the season, with a particularly pronounced decrease in δ¹⁵N in nearly all prey groups during late July–August (data not shown). These seasonal changes made it necessary to construct separate mixing models for each sampling occasion.
Source contributions estimated by IsoSource mixing models
Several prey groups contributed substantially to the diet of *C. pengoi* (Fig. 4), with large copepodites (CIV–VI) having the greatest contribution on a seasonal basis. At station H2, dominant contributors were *Acartia* spp.
CIV–VI (8–71%), followed by microzooplankton (5–47%), *E. affinis* CIV–VI (8–35%), podonids (12–22%) and copepodites CI–III (13–20%). At station H4, *B. maritima* (8–85%) and *E. affinis* CIV–VI (4–69%) dominated, whereas *Acartia* CIV–VI (4–29%) and microzooplankton (8–20%) contributed less. During the period when *C. pengoi* was present in the water column, abundances of *B. maritima* and podonids at station H2 and of podonids and copepodites CI–III at station H4 were very low (Fig. 2). As a result, it was not always possible to collect enough sample material for SIA of these groups and hence they were not included in models for all sampling occasions/stations (Fig. 4).
**Prey-tracking analysis**
Those prey that were more closely tracked by *C. pengoi* (*P* < 0.05) were well separated from those less closely tracked (Table I). Older copepodites (CIV–VI), particularly of *E. affinis*, were most closely tracked, while *B. maritima* and microzooplankton were not. Tracking of podonids could not be evaluated, because this prey nearly disappeared from the water column as the *C. pengoi* population started to increase.
**Prey selectivity**
A positive selection by *C. pengoi* was assumed to occur, if the average contribution of a prey species in the diet, estimated with the IsoSource mixing model, exceeded its contribution to the ambient zooplankton community. At station H2 (Fig. 5), this occurred for podonids prior to their abundance dropping to very low levels, and occasionally for the older copepodites (CIV–VI) of *Acartia* spp. and *E. affinis*. At H4 (Fig. 5), the old copepodites were the preferred groups; positive selection was also shown towards *B. maritima* on one occasion in July. At both stations (Fig. 5), positive or neutral selection was also observed for microzooplankton on several sampling occasions.
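The selection criterion used above reduces to a direct comparison of two percentages; a trivial sketch with hypothetical numbers:

```python
def selection(diet_pct, ambient_pct):
    """Positive selection if the modeled diet share (IsoSource average)
    exceeds the prey's share of ambient zooplankton biomass; negative
    if below; neutral otherwise."""
    if diet_pct > ambient_pct:
        return "positive"
    if diet_pct < ambient_pct:
        return "negative"
    return "neutral"

# Hypothetical: a prey forming 20% of ambient biomass but 35% of the diet
sel = selection(35.0, 20.0)
```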
---
**Table I: Temporal tracking of prey by Cercopagis pengoi at stations H2 and H4**
| Potential prey group | H2 P% | H2 *P* value | H4 P% | H4 *P* value |
|----------------------|-------|--------------|-------|--------------|
| Copepodites CI–III | 0 | **0.0243** | nd | — |
| *Acartia* spp. CIV–VI | 0 | **0.0218** | 67 | 0.4767 |
| *E. affinis* CIV–VI | 0 | **0.0220** | 0 | **0.0235** |
| Podonids | 50 | na | nd | — |
| *B. maritima* | nd | — | 0 | 0.1386 |
| Microzooplankton | 0 | 0.1184 | 33 | 0.4864 |
P%, the percentage of possible *D* values smaller than the observed $D_0$, and corresponding *P* values (one-sample *t*-test) are shown. Low values indicate tracking in time of prey isotope signatures by the predator; na, prey occurred on too few occasions ($n < 4$) when the consumer was present; nd, no stable isotope data available. Significant values (*P* < 0.05) are in bold face.
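The tracking statistic can be illustrated with a small permutation sketch. The distance measure and the mean-centring used below are simplifying assumptions for illustration, not the exact formulation of Melville and Connolly (2003); the time series are hypothetical.

```python
# Hedged illustration of the temporal prey-tracking statistic: P% is
# the share of distances from permuted prey series that fall below the
# observed distance D0 (low P% = the predator tracks that prey in time).
from itertools import permutations

def track_pct(predator, prey):
    """P% from an exhaustive permutation null distribution."""
    center = lambda xs: [x - sum(xs) / len(xs) for x in xs]
    dist = lambda a, b: sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    p = center(predator)
    d0 = dist(p, center(prey))
    d_null = [dist(p, center(list(perm))) for perm in permutations(prey)]
    return 100.0 * sum(1 for v in d_null if v < d0) / len(d_null)

# hypothetical delta-15N series over four sampling occasions
predator = [8.0, 9.0, 11.0, 10.0]
prey_a = [5.0, 6.0, 8.0, 7.0]   # parallel trend: tracked
prey_b = [7.0, 8.0, 6.0, 5.0]   # opposite trend: not tracked
pct_a = track_pct(predator, prey_a)
pct_b = track_pct(predator, prey_b)
```

With only four occasions the null distribution has just 24 permutations, which is why the analysis was restricted to prey present on at least four occasions (the "na" entries in Table I).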
Fig. 5. Contribution (%) of different prey groups to total zooplankton community biomass and to the diet of *Cercopagis pengoi* (average %, estimated by the IsoSource mixing model using $\delta^{15}$N values) at stations H2 and H4 (in columns). Capital letters on the x-axis represent the months June to September. A higher percentage of a prey in the diet than in the ambient zooplankton community indicates a positive selection for this prey by *C. pengoi*. No diet contribution estimate was possible for *B. maritima* at H2 due to the lack of material for SIA, which was also the case for podonids at H4.
**DISCUSSION**
According to the IsoSource model outputs, relative contributions of different prey varied during the season and between stations, consistent with the view that *C. pengoi* is a generalist predator (Pichlová-Ptáčniková and Vanderploeg, 2009). Both methods employed (i.e. the IsoSource mixing model and temporal tracking of prey) indicated older copepodites as both the dominant and the preferred prey of *C. pengoi*. This agrees with experimental studies showing that *C. pengoi* feeds on copepods, a prey with relatively fast escape abilities (Pichlová-Ptáčniková and Vanderploeg, 2009), including *E. affinis* (Simm et al., 2006; Lehtiniemi and Gorokhova, 2008). Moreover, if individual body weights are taken into account to ensure a more direct comparison with the isotopic representation, the consumption rates observed by Simm et al. (Simm et al., 2006) for copepodites of *E. affinis* would be >2-fold higher than those for *B. maritima* and copepod nauplii. Also, as copepods dominated the zooplankton community, our study is in line with earlier studies from both the Baltic Sea (Simm et al., 2006) and the Laurentian Great Lakes (Laxson et al., 2003) suggesting that zooplankton groups dominating the ambient community tend to contribute more to the diet of *Cercopagis*, which is indicative of opportunistic feeding. Evidence for the importance of microzooplankton, however, is inconclusive. While this prey had a low to high likelihood of contribution based on the IsoSource modelling (maximal values 16–90%) and in most cases was found to be positively selected for (Fig. 5), no evidence for an association between this source and *C. pengoi* nutrition was found in the temporal-tracking analysis. The latter is considered the more robust analysis, as it does not involve any assumptions regarding trophic shift values (Melville and Connolly, 2003).
As the microzooplankton fraction was taxonomically heterogeneous, consisting of different species of rotifers, copepods and barnacle nauplii, differential predation on these taxa is likely. This might have resulted in varying estimates of the dietary contribution, depending on the proportion of different taxa in the ambient plankton, as was indeed indicated by the IsoSource mixing model results (Table I). Another explanation for the difference between the IsoSource output and the temporal-tracking analysis could be that the 90-µm mesh of the WP2 net has a lower sampling efficiency for microzooplankton (Johansson, 1992). This may result in underestimation of the standing stocks and, therefore, a possible overestimation of selectivity. Experimental studies have shown that *C. pengoi* can feed on these small zooplankters (Lehtiniemi and Lindén, 2006; Pichlová-Ptáčniková and Vanderploeg, 2009), but to what extent this occurs in nature is not clear. We also expected that young *C. pengoi* would have lower δ¹⁵N values than adults, owing to a higher contribution of microzooplankton in their diet. However, the isotopic differences between the age groups were not significant, most probably because of the high variability in body size of instar II (see Table I in Grigorovich et al., 2000 and Fig. 6 in Uitto et al., 1999), which comprised 58–90% of the *C. pengoi* population during the study period and, therefore, was most probably overrepresented in the stable isotope samples of adults. This high natural variability in instar body size precluded evaluating diet differences related to age. On the other hand, the lack of isotopic difference between ages may simply reflect overlapping diets in similarly sized *C. pengoi* regardless of their developmental stage, which is perhaps the most plausible explanation.
Overall, the discrepancy between the IsoSource model output and the temporal prey-tracking analysis indicates that different microzooplankton species should be treated separately, and that size- rather than instar-specific differences in *C. pengoi* should be considered, to establish which microzooplankton prey are actually consumed by *C. pengoi*, and when. However, rotifers and nauplii are likely to have similar isotopic signatures, as they feed on similar prey (mostly nanoplankton), and this may hamper their use as separate prey groups in mixing models.

**Fig. 6.** Photograph of *Cercopagis pengoi* feeding on a small podonid (circled); the arrow indicates the predator's mouthparts.

IsoSource results indicated that cladocerans had low to medium (podonids, station H2 only) and low to high (*B. maritima*, station H4 only) contributions to the diet of *C. pengoi*. When these estimates were compared with the proportions of these prey groups in the ambient zooplankton communities, it became apparent that *C. pengoi* showed a positive selection towards podonids on every occasion for which SIA samples were available. However, podonids were only found in the water column until the beginning of July, comprising not >10% of the total zooplankton biomass (Fig. 2); consequently, there were only two sampling occasions when SIA samples for both *C. pengoi* and podonids could be collected, which precluded temporal-tracking analysis for this prey. Nevertheless, tangible evidence of *C. pengoi* feeding on podonids was found while sorting samples for SIA (Fig. 6). IsoSource modelling results for the other cladoceran prey, *B. maritima*, suggest the highest contribution to the diet, and positive selection, only at the beginning of the summer, when its share of the total zooplankton biomass was extremely low (<1%). This low abundance makes it highly unlikely that *B. maritima* was an important prey during this time. Consistent with this, the temporal-tracking analysis does not indicate a significant association of this prey with *C. pengoi*. Field observations support this conclusion: a positive correlation between abundances of *B. maritima* and *C. pengoi* has been observed in the Gulf of Finland and was suggested to result from *C. pengoi* preying on species that compete with *B. maritima*, such as *Pleopsis polyphemoides* and *Evadne nordmannii* (Pöllumäe and Kotta, 2007). Moreover, feeding rates on bosminids were not sufficient to meet the energy requirements of *C. pengoi*, whereas those on larger prey were (Laxson et al., 2003). However, considering the high diet-contribution range of *B. maritima* during the period when this prey was abundant (Table I), and the evidence from predation experiments suggesting this cladoceran is a readily consumed prey (Laxson et al., 2003; Simm et al., 2006; Pichlová-Ptáčniková and Vanderploeg, 2009), it is possible that *C. pengoi* preys actively on bosminids when they dominate the zooplankton (Laxson et al., 2003). In the studied area, the prey that contribute most to the diet of *C. pengoi* are the dominant copepodites of *Acartia* spp. and *E. affinis*, while the contribution of other prey is less pronounced (podonids and microzooplankton) or not important (*B. maritima*).
When using the IsoSource mixing model, the diet composition and prey preference estimates differed between the two sites, with relative contributions of *Acartia* spp. and *E. affinis* differing between the stations (*Acartia* spp.: 35 ± 27 and 15 ± 9%; *E. affinis*: 21 ± 9 and 34 ± 4%; grand mean values ± SD for the contributions to the diet at stations H2 and H4, respectively). These differences can be explained by (i) the differences in community composition and prey abundances (Fig. 2; note the higher abundance and proportion of *Acartia* spp. and *E. affinis* at stations H2 and H4, respectively) presenting the predator with different food choices, and (ii) the lack of SIA samples for certain prey, which might have resulted in errors in mixing model calculations. Indeed, the low abundance of young copepodites (CI–III) and podonids at H4 and of *B. maritima* and podonids at H2 made it impossible to get estimates for these prey groups and, consequently, the estimated proportions of other prey may be biased.
The observed differences in isotopic composition between the two sites could depend on several factors, which are not mutually exclusive. Seston at H4 is isotopically heavier than at H2 due to the greater terrestrial influence and the closer proximity to the water treatment plant (Höglander, 2005). Savage (Savage, 2005) found the strongest influence of the $^{15}$N-enriched effluent to be within 10 km of the outfall, which encompasses H4 but not H2, a concentration gradient that may also be strengthened by the limited water exchange in the bay (Savage, 2005). In addition, the concentration of $^{15}$N at H2 may be reduced by the greater contribution of isotopically light diazotrophic cyanobacteria in the outer part of the bay (pers. comm., Dr S. Hajdu, Systems Ecology, Stockholm University). Understanding the causes of variability in isotopic composition and the processes of isotope fractionation is important for interpreting stable isotope data when using SIA to study food web structure and functioning. Experimental studies show that fractionation is governed by many variables, such as temperature and feeding activity (Barnes et al., 2007), type of food (Crawley et al., 2007), diet isotopic ratios (Caut et al., 2008), and the C:N ratio of the primary food source and availability of N (Adams and Sterner, 2000), making fractionation factors vary both geographically and temporally. Strictly speaking, the percentages of prey contributions to the diet should always be treated with caution, as exact diet- and temperature-specific fractionation factors are rarely known, particularly for omnivores living in fluctuating environments and/or nutrient gradients, such as Himmerfjärden Bay, which is exposed to sewage discharge. This is why it is important to use an independent line of evidence, such as temporal-tracking analysis of differences between sampling occasions, which does not rely on diet fractionation (Melville and Connolly, 2003).
Insights into diet composition obtained with other methods, such as feeding experiments, biochemical tracers (e.g. fatty acids), DNA-based and compound-specific isotopic
analysis, could further facilitate interpretation and increase the reliability of SIA for feeding and food web studies (Ben-David and Schell, 2001; Gorokhova and Lehtiniemi, 2007).
With larger copepodites (CIV–VI) making up the main part of the *C. pengoi* diet (Fig. 4), there is a significant diet overlap with zooplanktivorous fish in the Baltic Sea (Rudstam et al., 1992; Mehner and Heerkloss, 1994; Arrhenius, 1996). This should not present a problem as long as the copepod populations remain high and zooplanktivores are therefore not food limited. However, prior to the invasion of *Cercopagis*, young-of-the-year herring in areas close to Himmerfjärden were suggested to be food limited (Arrhenius and Hansson, 1996), indicating conditions of possible food competition. On the other hand, since *C. pengoi* has been found to contribute substantially to the diet of Baltic Sea zooplanktivorous fish (Ojaveer and Lumberg, 1995; Antsulevich and Välipakka, 2000; Gorokhova et al., 2004, 2005; Lankov et al., 2010), the risk of competition may be reduced, particularly if adult herring are abundant. Moreover, given the low stocks of *C. pengoi* observed in our study (<5% of the total zooplankton biomass), it is unlikely to exert heavy predation pressure on the rest of the zooplankton community. Nevertheless, *C. pengoi* does add predation pressure on copepods and therefore might promote increased competition among zooplanktivores, particularly when abundant (Uitto et al., 1999). In Lake Ontario, increased predation on juvenile copepods has been implicated in causing a decline in the copepod populations (Benoit et al., 2002). This may also occur in the Baltic Sea, as suggested by decreased *E. affinis* stocks following the *C. pengoi* invasion in the Gulf of Finland (Lehtiniemi and Gorokhova, 2008). With a reduction in copepod stocks, food availability for fish would decrease, leading to a possible reduction in fish stocks. On the other hand, post-larval fish are not very efficient at preying upon the microzooplankton that *C. pengoi* is able to feed upon (Lehtiniemi and Lindén, 2006; Pichlová-Ptáčniková and Vanderploeg, 2009); to some extent this feeding was also supported by the IsoSource model outputs (Fig. 4). Therefore, as *C. pengoi* is preyed upon by zooplanktivorous fish (Gorokhova et al., 2004; Lankov et al., 2010), the energy from microzooplankton, channelled through *C. pengoi*, may give fish better access to previously underutilized energy. However, the limited dependence of *C. pengoi* on microzooplankton (Table I), together with differences in food quality between copepods and microzooplankton, makes it unlikely that consumption of microzooplankton compensates for the possible competition with fish for copepods. Furthermore, *C. pengoi* may not represent a beneficial food source, as the indigestible tails occupy space in fish stomachs, preventing more prey from being ingested (Lankov et al., 2010).
To conclude, using SIA for *in situ* diet assessment, we found that *C. pengoi* behaves as an opportunistic generalist predator, its main prey being the copepods that dominated the zooplankton community in the study area. These characteristics of the predator agree with previously reported laboratory observations of *Cercopagis* fed zooplankton typical of fresh waters (Pichlová-Ptáčniková and Vanderploeg, 2009). Both the IsoSource modelling and the temporal-tracking analysis indicate that older copepodites of *Acartia* spp. and *E. affinis* are the dominant prey of *C. pengoi*; additionally, the IsoSource results implicate this prey as consistently preferred by *Cercopagis*. In contrast, evidence for rotifers (*Keratella* spp. and *Synchaeta* spp.) and nauplii (*Acartia* spp.) contributing substantially to *Cercopagis* nutrition was inconclusive. To understand patterns of *C. pengoi* impacts on food webs in invaded ecosystems, cross-system comparative studies applying an array of methods are needed. In particular, studies investigating the mechanisms by which opportunistic feeding is manifested in *Cercopagis* populations, the vulnerability of specific zooplankton communities and the effects this predator has on trophic dynamics within and across ecosystems will be particularly important in light of increasing invasions.
**ACKNOWLEDGEMENTS**
We thank S. Svensson, B. Abrahamsson and L. Lundgren (Systems Ecology, Stockholm University, Sweden) for help in collecting zooplankton samples.
**FUNDING**
This research was supported by the Swedish National Monitoring program and grants from The Swedish Research Council for Environment, Agricultural Sciences, and Spatial Planning (Formas), the Swedish Environmental Protection Agency (Naturvårdsverket) and the foundation Baltic Sea 2020.
**REFERENCES**
Adams, T. S. and Sterner, R. W. (2000) The effect of dietary nitrogen content on trophic level $^{15}$N enrichment. *Limnol. Oceanogr.*, **45**, 601–607.
Adrian, R., Hansson, S., Söderlund, B. et al. (1999) Effects of food availability and predation on a marine zooplankton community—a
study on copepods in the Baltic Sea. *Int. Rev. Hydrobiol.*, **84**, 609–626.
Antsulevich, A. and Välipakka, P. (2000) *Cercopagis pengoi*—New important food object of the Baltic herring in the Gulf of Finland. *Int. Rev. Hydrobiol.*, **85**, 609–619.
Arrhenius, F. (1996) Diet composition and food selectivity of 0-group herring (*Clupea harengus* L.) and sprat (*Sprattus sprattus* (L.)) in the northern Baltic Sea. *ICES J. Mar. Sci.*, **53**, 701–712.
Arrhenius, F. and Hansson, S. (1993) Food consumption of larval, young and adult herring and sprat in the Baltic Sea. *Mar. Ecol. Prog. Ser.*, **96**, 125–137.
Arrhenius, F. and Hansson, S. (1996) Growth and seasonal changes in energy content of young Baltic Sea herring (*Clupea harengus* L.). *ICES J. Mar. Sci.*, **53**, 792–801.
Barnes, C., Sweeting, C. J., Jennings, S. *et al.* (2007) Effect of temperature and ration size on carbon and nitrogen stable isotope trophic fractionation. *Funct. Ecol.*, **21**, 356–362.
Beardsley, T. M. (2006) Predicting aquatic threats. *BioScience*, **56**, 459–461.
Ben-David, M. and Schell, D. M. (2001) Mixing models in analyses of diet using multiple stable isotopes: a response. *Oecologia*, **127**, 180–184.
Benoit, H. P., Johannsson, O. E., Warner, D. M. *et al.* (2002) Assessing the impact of a recent predatory invader: the population dynamics, vertical distribution, and potential prey of *Cercopagis pengoi* in Lake Ontario. *Limnol. Oceanogr.*, **47**, 626–635.
Caut, S., Angulo, E. and Courchamp, F. (2008) Discrimination factors (¹⁵N and ¹³C) in an omnivorous consumer: effect of diet isotopic ratio. *Funct. Ecol.*, **22**, 255–263.
Caut, S., Angulo, E. and Courchamp, F. (2009) Variation in discrimination factors (Δ¹⁵N and Δ¹³C): the effect of diet isotopic values and applications for diet reconstruction. *J. Appl. Ecol.*, **46**, 443–453.
Crawley, K. R., Hyndes, G. A. and Vanderklift, M. A. (2007) Variation among diets in discrimination of δ¹³C and δ¹⁵N in the amphipod *Allorchestes compressa*. *J. Exp. Mar. Biol. Ecol.*, **349**, 370–377.
Feuchtmayr, H. and Grey, J. (2003) Effect of preparation and preservation procedures on carbon and nitrogen stable isotope determinations from zooplankton. *Rapid Commun. Mass Spectrom.*, **17**, 2605–2610.
Gorokhova, E., Aladin, N. and Dumont, H. (2000) Further expansion of the genus *Cercopagis* (Crustacea, Branchiopoda, Onychopoda) in the Baltic Sea, with notes on the taxa present and their ecology. *Hydrobiologia*, **429**, 207–218.
Gorokhova, E., Fagerberg, T. and Hansson, S. (2004) Predation by herring (*Clupea harengus*) and sprat (*Sprattus sprattus*) on *Cercopagis pengoi* in a western Baltic Sea bay. *ICES J. Mar. Sci.*, **61**, 959–965.
Gorokhova, E., Hansson, S., Höglander, H. *et al.* (2005) Stable isotopes show food web changes after invasion by the predatory cladoceran *Cercopagis pengoi* in a Baltic Sea bay. *Oecologia*, **143**, 251–259.
Gorokhova, E. and Lehtiniemi, M. (2007) A combined approach to understand trophic interactions between *Cercopagis pengoi* (Cladocera: Onychopoda) and mysids in the Gulf of Finland. *Limnol. Oceanogr.*, **52**, 685–695.
Gozlan, R. E. (2008) Introduction of non-native freshwater fish: is it all bad? *Fish Fish.*, **9**, 106–115.
Grigorovich, I. A., MacIsaac, H. J., Rivier, I. K. *et al.* (2000) Comparative biology of the predatory cladoceran *Cercopagis pengoi* from Lake Ontario, Baltic Sea and Caspian Sea. *Arch. Hydrobiol.*, **149**, 23–50.
HELCOM. (1988) Guidelines for the Baltic Monitoring Programme for the third stage. *Baltic Sea Environ. Proc.*, **27 D**, 161.
Hernroth, L. (1985) Recommendations on methods for marine biological studies in the Baltic Sea. Mesozooplankton biomass assessment. *The Baltic Marine Biologists*, **10**, 32.
Höglander, H. (2005) Studies of Baltic Sea plankton—spatial and temporal patterns. Doctoral Thesis, Stockholm University, Sweden
Johansson, S. (1992) Regulating factors for coastal zooplankton community structure in the northern Baltic proper. Doctoral Thesis, Stockholm University, Sweden
Johansson, S., Hansson, S. and Araya-Nunez, O. (1993) Temporal and spatial variation of coastal zooplankton in the Baltic Sea. *Ecography*, **16**, 167–173.
Kideys, A. E. (2002) Fall and rise of the Black Sea ecosystem. *Science*, **297**, 1482–1484.
Kott, P. (1953) Modified whirling apparatus for the subsampling of plankton. *Aust. J. Mar. Freshw. Res.*, **4**, 387–393.
Kotta, J., Kotta, I., Simm, M. *et al.* (2006) Ecological consequences of biological invasions: three invertebrate case studies in the north-eastern Baltic Sea. *Helgol. Mar. Res.*, **60**, 106–112.
Krylov, P. I., Bychenkov, D. E., Panov, V. E. *et al.* (1999) Distribution and seasonal dynamics of the Ponto-Caspian invader *Cercopagis pengoi* (Crustacea, Cladocera) in the Neva Estuary (Gulf of Finland). *Hydrobiologia*, **393**, 227–232.
Lankov, A., Ojaveer, H., Simm, M. *et al.* (2010) Feeding ecology of pelagic fish species in the Gulf of Riga (Baltic Sea): the importance of changes in the zooplankton community. *J. Fish. Biol.*, **77**, 2268–2284.
Larsson, U., Blomqvist, S. and Abrahamsson, B. (1986) A new sediment trap system. *Mar. Ecol. Prog. Ser.*, **31**, 205–207.
Laxson, C. L., McPhedran, K. N., Makarewicz, J. C. *et al.* (2003) Effects of the non-indigenous cladoceran *Cercopagis pengoi* on the lower food web of Lake Ontario. *Freshwat. Biol.*, **48**, 2094–2106.
Lehtiniemi, M. and Gorokhova, E. (2008) Predation of the introduced cladoceran *Cercopagis pengoi* on the naïve copepod *Eurytemora affinis* in the northern Baltic Sea. *Mar. Ecol. Prog. Ser.*, **362**, 193–200.
Lehtiniemi, M. and Lindén, E. (2006) *Cercopagis pengoi* and *Mysis* spp. alter their feeding rate and prey selection under predation risk of herring (*Clupea harengus membras*). *Mar. Biol.*, **4**, 1432–1793.
Leppäkoski, E., Gollasch, S., Gruszka, P. *et al.* (2002) The Baltic—a sea of invaders. *Can. J. Fish. Aquat. Sci.*, **59**, 1175–1188.
Leppäkoski, E. and Olenin, S. (2000) Non-native species and rates of spread: lessons from the brackish Baltic Sea. *Biol. Invasions*, **2**, 151–163.
Mehner, T. and Heerkloss, R. (1994) Direct estimation of food consumption of juvenile fish in a shallow inlet of the southern Baltic. *Int. Rev. Hydrobiol.*, **79**, 295–304.
Melville, A. J. and Connolly, R. M. (2003) Spatial analysis of stable isotope data to determine primary sources of nutrition for fish. *Oecologia*, **136**, 499–507.
Noonburg, E. G. and Byers, J. E. (2005) More harm than good: when invader vulnerability to predators enhances impact on native species. *Ecology*, **86**, 2555–2560.
Ojaveer, H., Kuhns, L. A., Barbiero, R. P. et al. (2001) Distribution and population characteristics of *Cercopagis pengoi* in Lake Ontario. *J. Gt. Lakes Res.*, **27**, 10–18.
Ojaveer, H. and Lumberg, A. (1995) On the role of *Cercopagis* (*Cercopagis pengoi* Ostroumov) in Pärnu Bay and the NE part of the Gulf of Riga ecosystem. *Proc. Est. Ac. Sci. Biol. Ecol.*, **5**, 20–25.
Ojaveer, H., Simm, M. and Lankov, A. (2004) Population dynamics and ecological impact of the non-indigenous *Cercopagis pengoi* in the Gulf of Riga (Baltic Sea). *Hydrobiologia*, **522**, 261–269.
Phillips, D. L. and Gregg, J. W. (2003) Source partitioning using stable isotopes: coping with too many sources. *Oecologia*, **136**, 261–269.
Pichlová-Ptáčniková, R. and Vanderploeg, H. A. (2009) The invasive cladoceran *Cercopagis pengoi* is a generalist predator capable of feeding on a variety of prey species of different sizes and escape abilities. *Fundam. Appl. Limnol.*, **173**, 267–279.
Pöllumäe, A. and Kotta, J. (2007) Factors affecting the distribution of the zooplankton community in the Gulf of Finland in the context of interactions between naïve and introduced predatory cladocerans. *Oceanologia*, **49**, 277–290.
Post, D. M. (2002) Using stable isotopes to estimate trophic position: models, methods, and assumptions. *Ecology*, **83**, 703–718.
Rivier, I. K. (1998) The Predatory Cladocera (Onychopoda: Podoniidae, Polyphemidae, Cercopagidae) and Leptodorida of the world. *Guides to the Identification of the Micro-invertebrates of the Continental Waters of the World*. Vol. 13. Backhuys Publishing, Leiden, pp. 1–213.
Roohi, A., Yasin, Z., Kideys, A. E. *et al.* (2008) Impact of a new invasive ctenophore (*Mnemiopsis leidyi*) on the zooplankton community of the southern Caspian Sea. *Mar. Ecol.*, **29**, 421–434.
Rosen, R. A. (1981) Length—dry weight relationships of some freshwater zooplankton. *J. Freshwater Ecol.*, **1**, 225–229.
Rudstam, L. G., Hansson, S., Johansson, S. et al. (1992) Dynamics of planktivory in a coastal area of the northern Baltic Sea. *Mar. Ecol. Prog. Ser.*, **80**, 159–173.
Savage, C. (2005) Tracing the influence of sewage nitrogen in a coastal ecosystem using stable nitrogen isotopes. *AMBIO*, **34**, 145–150.
Shiganova, T. A., Bulgakova, Y. V., Volovik, S. P. *et al.* (2001) The new invader *Beroe ovata* Mayer 1912 and its effect on the ecosystem in the northeastern Black Sea. *Hydrobiologia*, **451**, 187–197.
Simm, M., Lankov, A., Põllupüü, M. et al. (2006) Estimation of consumption rates of the predatory cladoceran *Cercopagis pengoi* in laboratory conditions. In: Ojaveer, H. and Kotta, J. (eds), *Alien Invasive Species in the North-Eastern Baltic Sea: Population Dynamics and Ecological Impacts*. Estonian Marine Institute Report Series, 14. Estonian Marine Institute, Tallinn, pp. 1–66.
Svensson, S. and Gorokhova, E. (2007) Embryonic development time of parthenogenetically reproducing *Cercopagis pengoi* (Cladocera, Onychopoda) in the northern Baltic proper. *Fundam. Appl. Limnol.*, **170**, 257–261.
Therriault, T. W., Grigorovich, I. A., Kane, D. D. et al. (2002) Range expansion of the exotic zooplankter *Cercopagis pengoi* (Ostroumov) into western Lake Erie and Muskegon Lake. *J. Great Lakes Res.*, **28**, 698–701.
Uitto, A., Gorokhova, E. and Välipakka, P. (1999) Distribution of the non-indigenous *Cercopagis pengoi* in the coastal waters of the eastern Gulf of Finland. *ICES J. Mar. Sci.*, **56**, 49–57.
Vanderploeg, H. A., Nalepa, T. F., Jude, D. J. et al. (2002) Dispersal and emerging ecological impacts of Ponto-Caspian species in the Laurentian Great Lakes. *Can. J. Fish. Aquat. Sci.*, **59**, 1209–1228.
Witt, A. M. and Cáceres, C. E. (2004) Determining interactions of non-native predators in their new environment: *Bythotrephes longimanus* and *Cercopagis pengoi* in southwestern Lake Michigan. *J. Gt. Lakes Res.*, **30**, 519–527.
Vander Zanden, M. J. and Rasmussen, J. B. (2001) Variation in δ¹⁵N and δ¹³C trophic fractionation: implications for aquatic food web studies. *Limnol. Oceanogr.*, **46**, 2061–2066.
Cobalt-Catalyzed Diastereo- and Enantioselective Allyl Addition to Aldehydes and α-Ketoesters through Allylic C–H Functionalization
Haiyan Zhang
Shanghai Institute of Organic Chemistry
Jun Huang
Shanghai Institute of Organic Chemistry
Fanke Meng (✉ firstname.lastname@example.org)
Shanghai Institute of Organic Chemistry https://orcid.org/0000-0003-1009-631X
Article
Keywords: cobalt, allylic C–H, enantioselective
DOI: https://doi.org/10.21203/rs.3.rs-58188/v1
License: This work is licensed under a Creative Commons Attribution 4.0 International License. Read Full License
Catalytic reactions that can generate nucleophilic allyl–metal intermediates directly from simple alkenes without prefunctionalization, and that produce various homoallylic alcohols diastereo- and enantioselectively, are of great importance in organic synthesis. Transformations that accomplish these two tasks simultaneously are in high demand, particularly if the catalysts, substrates and reagents are inexpensive and easy to access. Here we report a catalytic process that involves chemoselective formation of nucleophilic allyl–cobalt complexes through oxidative allylic C–H cleavage of alkenes, followed by site-, diastereo- and enantioselective addition to aldehydes and α-ketoesters. The enantioenriched products, which are otherwise difficult to access, are obtained in up to 96% yield, with >95:5 dr and 98:2 er. The cobalt-based catalyst is derived from a commercially available chiral phosphine ligand. The utility of the method is demonstrated through an enantioselective formal synthesis of lithospermic acid and a total synthesis of dihydrodehydrodiconiferyl alcohol.
Introduction
Transition-metal-catalyzed carbon–hydrogen (C–H) bond functionalization has long been central to organic chemistry, as it enables direct installation of functional groups without multistep manipulations, significantly shortening synthetic routes and improving the overall efficiency of organic synthesis.\textsuperscript{1–5} Although activation of C(sp\textsuperscript{2})–H bonds promoted by transition-metal complexes has been well studied, catalytic functionalization of C(sp\textsuperscript{3})–H bonds is much less developed, owing to their inherent inertness towards insertion of metal complexes and to the low reactivity and instability of the resulting C(sp\textsuperscript{3})–metal bonds.\textsuperscript{6,7} The greatest challenge for such processes is the control of selectivity.\textsuperscript{8} Although significant advances have been made in regioselective C–H activation through directing effects or electronic biases, enantioselective C(sp\textsuperscript{3})–H functionalization has received less attention (see Supplementary Information for more references).\textsuperscript{9} Allyl groups are versatile, can be easily transformed into various functional groups, and are widely used in organic synthesis.\textsuperscript{10–12} Enantioselective functionalization of allylic C–H bonds therefore constitutes an ideal approach to introduce allyl groups into chiral molecules and has attracted considerable interest.\textsuperscript{13} Several catalytic enantioselective protocols for allylic C–H functionalization via an electrophilic allyl–metal intermediate promoted by a Pd\textsuperscript{14–27} or Rh\textsuperscript{28,29} complex have been developed (Figure 1a).
More recently, enantioselective allylic C–H functionalization via allyl radicals through metal-catalyzed or photocatalyzed C–H bond cleavage has been revealed (Figure 1a).\textsuperscript{30–36} However, enantioselective transformations of nucleophilic allyl–metal intermediates directly produced from oxidative allylic C–H bond cleavage promoted by a single catalyst remained undeveloped. Herein, we report an unprecedented protocol for catalytic generation of nucleophilic allyl–Co complexes followed by diastereo- and enantioselective addition to aldehydes and α-ketoesters to furnish a variety of homoallylic alcohols that are otherwise difficult to access.
Enantioenriched homoallylic alcohols are important building blocks in complex molecule synthesis.\textsuperscript{10} The most attractive approach to access this class of molecules is direct formation of nucleophilic allyl–metal complexes from readily available alkenes followed by enantioselective addition to carbonyls. Although generation of allyl–Cu intermediates from Cu–H\textsuperscript{37–41} or Cu–B\textsuperscript{42,43} addition to polyunsaturated hydrocarbons, or by deprotonation,\textsuperscript{44–46} followed by enantioselective addition to carbonyl compounds has been studied, access to nucleophilic allyl–metal complexes from direct oxidative cleavage of the inert allylic C–H bond of simple alkenes, with subsequent enantioselective addition to carbonyls promoted by a single multi-tasking catalyst, remained unknown. Inspired by Sato's work (Figure 1b),\textsuperscript{47–50} we envisioned that the Co(I)–Me complex I, produced in situ from AlMe\textsubscript{3} and a Co(II) salt, coordinates with the alkene moiety chemoselectively (vs. the aldehyde) and facilitates oxidative insertion into the allylic C–H bond to provide the $\eta^3$-Co(III)–allyl complex III. Complex III has to undergo reductive elimination chemoselectively to afford the Co(I)–allyl complexes IV and V, instead of generating VII and VIII or undergoing Me,H-addition to the aldehyde (IX, X). One of the two possible $\eta^1$-Co(I)–allyl complexes then has to react with the aldehyde 2 selectively to deliver the homoallylic alcohol VI and regenerate the catalyst. Successful implementation of this plan demands that the single catalyst not only efficiently promote the C–H bond cleavage and subsequent aldehyde addition, but also accurately control the chemo-, site- and stereoselectivity of each step.
**Results**
To identify conditions that would deliver homoallylic alcohol 3 in favor of 1,2-disubstituted alkenes (VII, VIII, Figure 1c) or alcohols (IX, X, Figure 1c), we chose the reaction of allylbenzene 1a with benzaldehyde 2a (Figure 2). We soon found that, unlike reactions with CO\textsubscript{2} and ketones,\textsuperscript{47–49} branched homoallylic alcohol 3a was furnished exclusively, indicating that $\eta^1$-Co(I)–allyl complex V reacts preferentially and that isomerization of IV to V is more rapid than aldehyde addition of $\eta^1$-Co(I)–allyl complex IV to afford 4a (Figure 1c). Investigation of a variety of chiral phosphine ligands revealed that few chiral Co complexes could promote the reaction. Reaction of allylbenzene 1a with benzaldehyde 2a in the presence of the Co complex derived from 6b furnished homoallylic alcohol 3a in only 16% yield. The Co complex formed from 6f provided significantly higher efficiency (52% yield) albeit with low enantioselectivity (56:44 er). Follow-up studies revealed that the reaction involving phosphine 6j produced homoallylic alcohol 3a in 69% yield, 90:10 dr and 93.5:6.5 er. A control experiment showed that in the absence of allylbenzene 1a, the Me-addition product 5a was generated in 21% yield, whereas 5a was not detected in the reaction of allylbenzene 1a with benzaldehyde 2a. It is worth mentioning that although most ligands did not deliver homoallylic alcohol 3a, β-methylstyrene and β-ethylstyrene (trace amount), formed from reductive elimination of the allyl and Me/H ligands (complex III, Figure 1c), were observed in most cases, illustrating that most ligands promoted the C–H bond cleavage but not the aldehyde addition.
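Selectivity here is reported as an enantiomeric ratio (er); for readers who prefer enantiomeric excess (ee), the two are related by a standard formula. A minimal sketch (the function name is ours, not the paper's):

```python
def er_to_ee(major: float, minor: float) -> float:
    """Convert an enantiomeric ratio (major:minor) to percent ee.

    ee = (major - minor) / (major + minor) * 100
    """
    return (major - minor) / (major + minor) * 100.0

# The 93.5:6.5 er obtained with phosphine 6j corresponds to ~87% ee,
# while the 56:44 er obtained with 6f corresponds to only ~12% ee.
print(er_to_ee(93.5, 6.5))
print(er_to_ee(56.0, 44.0))
```

The same conversion applies to the diastereomeric ratios (dr) quoted throughout, giving percent de instead of ee.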
Further optimization of the reaction temperature indicated that the transformation performed at 65 °C furnished the desired product **3a** with improved diastereo- and enantioselectivity (92:8 dr and 95:5 er) without significant erosion of yield (68%). The approach can be utilized to prepare a wide range of enantioenriched homoallylic alcohols (Figure 3). The requisite Co complex is derived from a commercially available Co salt and chiral phosphine ligand **6j**. Aldehydes bearing halogens are tolerated under the reaction conditions (**3b–d, 3h–j**), although phosphine–Co(I) complexes are known to undergo oxidative addition to carbon–halogen bonds.\(^{51}\) Aldehydes that contain electron-donating (**3e–g**), electron-withdrawing (**3k–m**) and sterically demanding (**3n–o**) aryl groups are suitable substrates. Reactions of heteroaryl aldehydes afforded the homoallylic alcohols (**3p–q, 3s–v**) in 51–67% yield, 83:17 to >95:5 dr and 95:5–98:2 er. Furyl, thienyl and 3-indolyl groups are not tolerated at the elevated temperature, but the reactions proceeded smoothly at room temperature (**3s–v**). Aliphatic aldehydes were transformed to the desired products as single diastereomers albeit with diminished enantioselectivity (**3r, 3w**). Allylbenzenes substituted with electron-donating groups were transformed to the homoallylic alcohols (**7a–c, 7m–n**) in 49–67% yield, 88:12–91:9 dr and 90:10–95:5 er at 65 °C. We found that reactions with allylbenzenes bearing electron-withdrawing groups can proceed with high efficiency even at room temperature with high diastereo- and enantioselectivity (**7d–l, 7o–r**). In particular, in substrates that contain ketone (**7i–j, 7q**) or cyano groups (**7g, 7p**), the aldehyde moiety reacted chemoselectively. Allylbenzenes that contain sterically congested aryl (**7s–v**) and various heteroaryl groups (**7w–aa**) are also suitable substrates.
1,4-Dienes, upon undergoing the enantioselective allylic C–H functionalization, produce homoallylic alcohols bearing a 1,4-diene unit; such motifs are widespread in natural products\(^{52}\) and serve as intermediates in the synthesis of biologically active molecules\(^{53}\) and small-molecule probes\(^{54}\) (see Supplementary Information for more references). Controlling the chemo- and site-selectivity is more challenging than with allylbenzenes, as two different olefins are present in the substrate, and very few enantioselective methods have been developed for the addition of a 1,4-diene unit to carbonyls; the diversity of 1,4-diene groups that can be introduced has been very limited.\(^{55,56}\) We next applied the reaction conditions to the transformations of 1,4-dienes (Figure 4). Unexpectedly, we found that a trisubstituted alkene moiety is required for high efficiency and stereoselectivity. 1,4-Dienes containing an \((E)\)- or \((Z)\)-trisubstituted alkene moiety were transformed to the desired homoallylic alcohols in 41–67% yield, 91:9 dr and 94:6–95:5 er (**8a–b**) at room temperature. Further studies showed that 1,4-dienes bearing a range of aryl (**8c–m**) and heteroaryl (**8n–q**) alkenes are suitable substrates. These reactions proceeded with high efficiency and stereoselectivity without the need for elevated temperature, whereas transformations of 1,4-dienes that contain trisubstituted alkenes of other substitution patterns (**8s–t**) or without an aryl group (**8r, 8t**) furnished the homoallylic alcohols in 38–42% yield, 87:13 to >95:5 dr and 90:10–91:9 er. A limitation on the allyl precursors is that 1,4-enynes and simple alkyl-substituted alkenes are not reactive.
Enantioenriched α-hydroxy acids and their derivatives are important motifs in biologically active molecules. However, only two examples of enantioselective allyl addition to α-ketoesters have been disclosed so far, and only the simple allyl group could be introduced,\(^{57,58}\) as it is more difficult for the catalyst to provide good efficiency and to differentiate the two substituents on the carbonyl effectively. We therefore expanded the scope of electrophiles to α-ketoesters (Figure 5). Compared with aldehydes, α-ketoesters are less reactive and require higher reaction temperatures. Both allylbenzene 1a and 1,4-diene 11a react with a variety of α-ketoesters (10a–h, 12a–h) to afford tertiary homoallylic alcohols in 38–60% yield, 75:25 to >95:5 dr and 90:10–98:2 er. α-Ketoesters derived from various alcohols are suitable substrates (10f–h, 12f–h). Reactions of the α-ketoester bearing an ester group delivered γ-lactones (10e, 12e), which are key intermediates en route to spirocycles, common structures in biologically active molecules.\textsuperscript{59} Surprisingly, an aryl-substituted α-ketoester gave low diastereo- and enantioselectivity. This might be because the ester group serves as the larger substituent in the six-membered transition state of the carbonyl addition.
As shown above, a wide range of enantioenriched homoallylic alcohols bearing aryl or alkenyl groups at the β-position can be prepared through this approach. Such building blocks are still difficult to access with high stereoselectivity and broad scope. We further demonstrated the utility of this method through applications to the synthesis of biologically active molecules (Figure 6). The 2,3-dihydrobenzofuran moiety is ubiquitous in natural products and synthetic compounds that display a wide range of biological activities, as shown in Figure 6a.\textsuperscript{60} We envisioned that enantioselective allylic C–H functionalization of allylbenzenes bearing an ortho-phenol substituent followed by Mitsunobu cyclization would furnish the 2,3-dihydrobenzofuran core of this class of molecules.\textsuperscript{61}
Lithospermic acid has been recognized as an active component of \textit{Danshen}, one of the most popular traditional herbs, used in the treatment of cardiovascular disorders, cerebrovascular diseases, various types of hepatitis, chronic renal failure, and dysmenorrhea (see Supplementary Information for a complete bibliography).\textsuperscript{62} Recent studies revealed that lithospermic acid also has potent and non-toxic anti-HIV activity.\textsuperscript{63,64} As indicated in Figure 6b, the synthetic route commenced with the enantioselective allylic C–H functionalization on gram scale.\textsuperscript{65} Treatment of allylbenzene 1b (2.03 g), prepared in one step and 93% yield from commercially available o-eugenol, with aldehyde 2b (1.00 g), derived from vanillin in quantitative yield, in the presence of the Co complex generated from 6j at 55 °C afforded homoallylic alcohol 13 (1.04 g) in 62% yield, >95:5 dr and 93:7 er. The TMS group was removed during the work-up. Subsequent Mitsunobu cyclization followed by a protecting-group switch delivered the 2,3-disubstituted benzofuran moiety in 80% overall yield. Oxidative cleavage of the alkene and Pinnick oxidation of the resulting aldehyde furnished, in 71% overall yield, a known fragment that can be converted to lithospermic acid. This catalytic enantioselective route is two steps shorter than the previously reported one, with similar efficiency (6 steps vs. 8 steps).\textsuperscript{66}
Dihydrodehydrodiconiferylalcohol (also named 3',4-di-O-methylcedrusin) was isolated in low yield from the red latex produced by various South American \textit{Croton} species and is of potential interest as an inhibitor of cell proliferation.\textsuperscript{67} Further studies based on this compound indicated that some derivatives inhibit the growth of a variety of cancer cells through interaction with tubulin.\textsuperscript{68} The synthesis began with the transformation of allylbenzene 1c (3.19 g), prepared from eugenol in five steps and 78% overall yield, with aldehyde 2b (1.00 g) promoted by the Co complex derived from 6j, which afforded homoallylic alcohol 16 (1.11 g) in 49% yield, >95:5 dr and 93:7 er. Mitsunobu cyclization followed by oxidative cleavage of the alkene and subsequent reduction with simultaneous global deprotection furnished dihydrodehydrodiconiferylalcohol in 60% overall yield, >95:5 dr and 93:7 er, accomplishing a much more efficient and stereoselective synthesis than previously reported (9 steps, 23% overall yield, >95:5 dr, 93:7 er vs. 12 steps, 1.2% overall yield, >95:5 dr, 62:38 er).\textsuperscript{69}
To gain preliminary insight into the reaction mechanism, a series of experiments was conducted (Figure 7). Kinetic experiments revealed that, in the transformations of allylbenzenes and 1,4-dienes, allylic C–H bond cleavage might not be the rate-determining step. Allylbenzene 1d bearing a β-methylstyrene moiety was transformed chemoselectively to afford 7ab in 70% yield, 93:7 dr and 93:7 er. To investigate whether the cleavage of the C–H bond is enantioselective, we performed the transformation with allylbenzene 1e. A roughly 1:1 mixture of 3x and 3y together with a trace amount of 3a was generated, indicating that either the C–H bond cleavage is not enantioselective, or the isomerization between the two $\eta^1$-allyl–Co complexes is not stereoretentive.
In conclusion, we have developed an unprecedented protocol for the catalytic generation of nucleophilic allyl–Co complexes through allylic C–H bond activation followed by site-, diastereo- and enantioselective addition to carbonyls to furnish a wide range of homoallylic alcohols that are otherwise difficult to access. This approach was further applied to the enantioselective formal synthesis of lithospermic acid and the total synthesis of dihydrodehydrodiconiferylalcohol. The advances outlined here demonstrate that simple unsaturated molecules can be directly converted to functionalized allyl nucleophiles without resorting to one-at-a-time installation of each functional group, a strategy that is unnecessarily time-consuming, costly and waste-generating. A single multifunctional Co-based catalyst, assembled from an inexpensive Co salt and a commercially available phosphine, accurately controls the chemo-, site- and stereoselectivity. The possibility of using other easily available unsaturated hydrocarbons for efficient and stereoselective preparation of high-value enantioenriched building blocks through a C–H functionalization strategy is under investigation.
**Methods**
An oven-dried 8 mL sealed tube was charged with Co(acac)$_2$ (5.1 mg, 0.02 mmol, 10 mol %) and 6j (10.1 mg, 0.02 mmol, 10 mol %) followed by addition of DMSO (2.0 mL). The resulting mixture was allowed to stir at room temperature for 1.0 hour. AlMe$_3$ (1.0 M in hexane, 0.30 mL, 0.3 mmol, 1.5 equiv) was added and the resulting solution was allowed to stir for 5 minutes; then substrates 1a (47.2 mg, 0.4 mmol, 2.0 equiv) and 2a (21.2 mg, 0.2 mmol, 1.0 equiv) were added. The tube was closed tightly and the mixture was allowed to stir at 65 °C for 12 h. After cooling to 0 °C, the sealed tube was carefully opened and the reaction was quenched with 10 mL of 3.0 M aqueous HCl and extracted with diethyl ether (3 × 20 mL). The combined organic layers were washed with brine (20 mL) and dried over Na$_2$SO$_4$. After the solids were filtered off, the solvent was removed under reduced pressure and the residue was purified by
silica-gel column chromatography (eluent: hexane/ethyl acetate, 7:1) to afford **3a** as a colorless oil (30.5 mg, 0.136 mmol, 68%).
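The mass balance in the procedure above can be cross-checked with simple arithmetic. The sketch below assumes the molecular formula of 3a is C16H16O (the formal union of allylbenzene and benzaldehyde; not stated explicitly in this excerpt), which is consistent with the reported mass and mole amounts:

```python
# Sanity check of the reported amounts for 3a. Assumption: 3a has
# molecular formula C16H16O (inferred, not stated in the procedure).
MW_3A = 16 * 12.011 + 16 * 1.008 + 15.999  # g/mol, ~224.3

mass_mg = 30.5                  # isolated mass of 3a
mmol_product = mass_mg / MW_3A  # ~0.136 mmol, matching the report
mmol_limiting = 0.2             # benzaldehyde 2a, the 1.0-equiv component
yield_pct = 100.0 * mmol_product / mmol_limiting

print(f"{mmol_product:.3f} mmol, {yield_pct:.0f}% yield")
```

The computed values reproduce the reported 0.136 mmol and 68% yield, confirming that the yield is quoted against the aldehyde as limiting reagent.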
**Declarations**
**Author contributions**
H. Z. and J. H. performed the catalyst studies and method development, as well as the syntheses of lithospermic acid and dihydrodehydrodiconiferylalcohol. F. M. designed and directed the investigations and composed the manuscript with revisions provided by the other authors.
**Conflict of Interest**
The authors declare no conflict of interest.
**Acknowledgements**
This work was financially supported by Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB20000000), National Natural Science Foundation of China (Grant No. 21702222), Shanghai Rising-Star Program (Grant No. 19QA1411000) and the Science and Technology Commission of Shanghai Municipality (Grant No. 17JC1401200).
**References**
1. Lyons, T. W. & Sanford, M. S. Palladium-catalyzed ligand-directed C–H functionalization reactions. *Chem. Rev.* 110, 1147–1169 (2010).
2. Engle, K. M. & Yu, J.-Q. Developing ligands for palladium(II)-catalyzed C–H functionalization: intimate dialogue between ligand and substrate. *J. Org. Chem.* 78, 8927–8955 (2013).
3. Hartwig, J. F. & Larsen, M. A. Undirected, homogeneous C–H bond functionalization: challenges and opportunities. *ACS. Cent. Sci.* 2, 281–292 (2016).
4. Gensch, T., Hopkinson, M. N., Glorius, F. & Wencel, J. Mild metal-catalyzed C–H activation: examples and concepts. *Chem. Soc. Rev.* 45, 2900–2936 (2016).
5. Yang, L. & Huang, H. Transition-metal-catalyzed direct addition of unactivated C–H bonds to polar unsaturated bonds. *Chem. Rev.* 115, 3468–3517 (2015).
6. Jazzar, R., Hitce, J., Renaudat, A., Sofack-Kreutzer, J. & Baudoin, O. Functionalization of organic molecules by transition-metal-catalyzed C(sp$^3$)–H activation. *Chem. Eur. J.* 16, 2654–2672 (2010).
7. Dastbaravardeh, N., Christakakou, M., Haider, M. & Schnürch, M. Recent advances in palladium-catalyzed C(sp$^3$)–H activation for the formation of carbon–carbon and carbon–heteroatom bonds. *Synthesis* 46, 1421–1439 (2014).
8. Giri, R., Shi, B.-F., Engle, K. M., Maugel, N. & Yu, J.-Q. Transition metal-catalyzed C–H activation reactions: diastereoselectivity and enantioselectivity. *Chem. Soc. Rev.* 38, 3242–3272 (2009).
9. Saint-Denis, T. G., Zhu, R.-Y., Chen, G., Wu, Q.-F. & Yu, J.-Q. Enantioselective C(sp$^3$)–H bond activation by chiral transition metal catalysts. *Science* 359, 759–780 (2018).
10. Yus, M., González-Gómez, J. C. & Foubelo, F. Diastereoselective allylation of carbonyl compounds and imines: application to the synthesis of natural products. *Chem. Rev.* 113, 5595–5698 (2013).
11. Yus, M., González-Gómez, J. C. & Foubelo, F. Catalytic enantioselective allylation of carbonyl compounds and imines. *Chem. Rev.* 111, 7774–7854 (2011).
12. Denmark, S. E. & Fu, J. Catalytic enantioselective addition of allylic organometallic reagents to aldehydes and ketones. *Chem. Rev.* 103, 2763–2793 (2003).
13. Wang, R., Luan, Y. & Ye, M. Transition metal-catalyzed allylic C(sp$^3$)–H functionalization via η$^3$-allylmetal intermediate. *Chin. J. Chem.* 37, 720–743 (2019).
14. Covell, D. J. & White, M. C. A chiral Lewis acid strategy for enantioselective allylic C–H oxidation. *Angew. Chem. Int. Ed.* 47, 6448–6451 (2008).
15. Takenaka, K., Akita, M., Tanigaki, Y., Takizawa, S. & Sasai, H. Enantioselective cyclization of 4-alkenoic acids via an oxidative allylic C–H esterification. *Org. Lett.* 13, 3506–3509 (2011).
16. Wang, P.-S. et al. Asymmetric allylic C–H oxidation for synthesis of chromans. *J. Am. Chem. Soc.* 137, 12732–12735 (2015).
17. Ammann, S. E., Liu, W. & White, M. C. Enantioselective allylic C–H oxidation of terminal olefins to isochromans by palladium (II)/chiral sulfoxide catalysis. *Angew. Chem. Int. Ed.* 55, 9571–9575 (2016).
18. Du, H., Zhao, B. & Shi, Y. Catalytic asymmetric allylic and homoallylic deamination of terminal olefins via formal C–H activation. *J. Am. Chem. Soc.* 130, 8590–8591 (2008).
19. Wang, P.-S. et al. Access to chiral hydropyrimidines through palladium-catalyzed asymmetric allylic C–H amination. *Angew. Chem. Int. Ed.* 56, 16032–16036 (2017).
20. Trost, B. M., Thaisrivongs, D. A. & Donckele, E. J. Palladium-catalyzed enantioselective allylic alkylations through C–H activation. *Angew. Chem. Int. Ed.* 52, 1523–1526 (2013).
21. Trost, B. M., Donckele, E. J., Thaisrivongs, D. A., Osipov, M. & Masters, J. T. A new class of non-C$_2$-symmetric ligands for oxidative and redox-neutral palladium-catalyzed asymmetric allylic alkylations of 1,3-diketones. *J. Am. Chem. Soc.* 137, 2776–2784 (2015).
22. Lin, H.-C. et al. Highly enantioselective allylic C–H alkylation of terminal olefins with pyrazol-5-ones enabled by cooperative catalysis of palladium complex and Brønsted acid. *J. Am. Chem. Soc.* 138, 14354–14361 (2016).
23. Wang, T.-C. et al. Enantioselective synthesis of 5-alkylated thiazolidinones via palladium-catalyzed asymmetric allylic C–H alkylations of 1,4-pentadienes with 5H-thiazol-4-ones. *Org. Lett.* 20, 4740–4744 (2018).
24. Lin, H.-C. et al. Nucleophile-dependent Z/E- and regioselectivity in the palladium-catalyzed asymmetric allylic C–H alkylation of 1,4-dienes. *J. Am. Chem. Soc.* 141, 5824–5834 (2019).
25. Liu, W., Ali, S. Z., Ammann, S. E. & White, M. C. Asymmetric allylic C–H alkylation via palladium (II)/cis-ArSOX catalysis. *J. Am. Chem. Soc.* 140, 10658–10662 (2018).
26. Wang, P.-S., Lin, H.-C., Zhai, Y.-J., Han, Z.-Y. & Gong, L.-Z. Chiral counteranion strategy for asymmetric oxidative C(sp$^3$)–H/ C(sp$^3$)–H coupling: enantioselective α-allylation of aldehydes with terminal alkenes. *Angew. Chem. Int. Ed.* 53, 12218–12221 (2014).
27. Chai, Z. & Rainey, T. J. Pd (II)/Brønsted acid catalyzed enantioselective allylic C–H activation for the synthesis of spirocyclic rings. *J. Am. Chem. Soc.* 134, 3615–3618 (2012).
28. Li, Q. & Yu, Z.-X. Enantioselective rhodium-catalyzed allylic C–H activation for the addition to conjugate dienes. *Angew. Chem. Int. Ed.* 50, 2144–2147 (2011).
29. Zhu, D. et al. Highly chemo- and stereoselective catalyst-controlled allylic C–H insertion and cyclopropanation using donor/donor carbenes. *Angew. Chem. Int. Ed.* 57, 12405–12409 (2018).
30. Li, J. et al. Site-specific allylic C–H bond functionalization with a copper-bound N-centred radical. *Nature* 574, 516–521 (2019).
31. Li, Y., Lei, M. & Gong, L. Photocatalytic regio- and stereoselective C(sp$^3$)–H functionalization of benzylic and allylic hydrocarbons as well as unactivated alkanes. *Nat. Catal.* 2, 1016–1026 (2019).
32. Mitsunuma, H., Tanabe, S., Fuse, H., Ohkubo, K. & Kanai, M. Catalytic asymmetric allylation of aldehydes with alkenes through allylic C(sp$^3$)–H functionalization mediated by organophotoredox and chiral chromium hybrid catalysis. *Chem. Sci.* 10, 3459–3465 (2019).
33. Tanabe, S., Mitsunuma, H. & Kanai, M. Catalytic allylation of aldehydes using unactivated alkenes. *J. Am. Chem. Soc.* DOI: 10.1021/jacs.0c04735 (2020).
34. Andrus, M. B. & Zhou, Z. Highly enantioselective copper-bisoxazoline-catalyzed allylic oxidation of cyclic olefins with *tert*-butyl *p*-nitroperbenzoate. *J. Am. Chem. Soc.* 124, 8806–8807 (2002).
35. Malkov, A. V., Bella, M., Langer, V. & Kočovský, P. PINDY: a novel, pinene-derived bipyridine ligand and its application in asymmetric, copper(I)-catalyzed allylic oxidation. *Org. Lett.* 2, 3047–3049 (2000).
36. Xiong, Y. & Zhang, G. Enantioselective 1,2-difunctionalization of 1,3-butadiene by sequential alkylation and carbonyl alkylation. *J. Am. Chem. Soc.* 140, 2735–2738 (2018).
37. Liu, R. Y., Zhou, Y., Yang, Y. & Buchwald, S. L. Enantioselective allylation using allene, a petroleum cracking byproduct. *J. Am. Chem. Soc.* 141, 2251–2256 (2019).
38. Tsai, E. Y., Liu, R. Y., Yang, Y. & Buchwald, S. L. A regio- and enantioselective CuH-catalyzed ketone alkylation with terminal allenes. *J. Am. Chem. Soc.* 140, 2007–2011 (2018).
39. Li, C. et al. CuH-catalyzed enantioselective ketone alkylation with 1,3-dienes: scope, mechanism, and applications. *J. Am. Chem. Soc.* 141, 5062–5070 (2019).
40. Li, C., Shin, K., Liu, R. Y. & Buchwald, S. L. Engaging aldehydes in CuH-catalyzed reductive coupling reactions: stereoselective allylation with unactivated 1,3-diene pronucleophiles. *Angew. Chem. Int. Ed.* 58, 17074–17080 (2019).
41. Fu, B. et al. Copper-catalyzed asymmetric reductive allylation of ketones with 1,3-dienes. *Org. Lett.* 21, 3576–3580 (2019).
42. Meng, F., Jang, H., Jung, B. & Hoveyda, A. H. Cu-catalyzed chemoselective preparation of 2-(pinacolato)boron-substituted allylcopper complexes and their in situ site-, diastereo-, and enantioselective additions to aldehydes and ketones. *Angew. Chem. Int. Ed.* 52, 5046–5051 (2013).
43. Feng, J.-J., Xu, Y. & Oestreich, M. Ligand-controlled diastereodivergent, enantio- and regioselective copper-catalyzed hydroxyalkylboration of 1,3-dienes with ketones. *Chem. Sci.* 10, 9678–9683 (2019).
44. Wei, X.-F., Xie, X.-W., Shimizu, Y. & Kanai, M. Copper(I)-catalyzed enantioselective addition of enynes to ketones. *J. Am. Chem. Soc.* 139, 4647–4650 (2017).
45. Yazaki, R., Kumagai, N. & Shibasaki, M. Direct catalytic asymmetric addition of allyl cyanide to ketones via soft Lewis acid/hard Brønsted base/hard Lewis base catalysis. *J. Am. Chem. Soc.* 132, 5522–5531 (2010).
46. Yazaki, R., Kumagai, N. & Shibasaki, M. Direct catalytic asymmetric addition of allyl cyanide to ketones. *J. Am. Chem. Soc.* 131, 3195–3197 (2009).
47. Michigami, K., Mita, T. & Sato, Y. Cobalt-catalyzed allylic C(sp$^3$)–H carboxylation with CO$_2$. *J. Am. Chem. Soc.* 139, 6094–6097 (2017).
48. Mita, T., Hanagata, S., Michigami, K. & Sato, Y. Co-catalyzed direct addition of allylic C(sp$^3$)–H bonds to ketones. *Org. Lett.* 19, 5876–5879 (2017).
49. Mita, T., Uchiyama, M., Michigami, K. & Sato, Y. Cobalt-catalyzed nucleophilic addition of the allylic C(sp$^3$)–H bond of simple alkenes to ketones. *Beilstein J. Org. Chem.* 14, 2012–2017 (2018).
50. Mita, T., Uchiyama, M. & Sato, Y. Catalytic intramolecular coupling of ketoalkenes by allylic C(sp$^3$)–H bond cleavage: synthesis of five- and six-membered carbocyclic compounds. *Adv. Synth. Catal.* 362, 1275–1280 (2020).
51. Gandeepan, P. & Cheng, C.-H. Cobalt catalysis involving π components in organic synthesis. *Acc. Chem. Res.* 48, 1194–1206 (2015).
52. Takahashi, C. et al. Penostatins, novel cytotoxic metabolites from a *Penicillium* species separated from a green alga. *Tetrahedron Lett.* 37, 655–658 (1996).
53. Fujioka, K., Yokoe, H., Yoshida, M. & Shishido, K. Total synthesis of penostatin B. *Org. Lett.* 14, 244–247 (2012).
54. Kwon, O., Park, S. B. & Schreiber, S. L. Skeletal diversity via a branched pathway: efficient synthesis of 29400 discrete, polycyclic compounds and their arraying into stock solutions. *J. Am. Chem. Soc.* 124, 13402–13404 (2002).
55. Yanagisawa, A., Nakatsuka, Y., Nakashima, H. & Yamamoto, H. Asymmetric γ-selective pentadienylation of aldehydes catalyzed by BINAP•Ag(I) complex. *Synlett* 1997, 933–934 (1997).
56. Gao, S. & Chen, M. Enantioselective syntheses of 1,4-pentadien-3-yl carbinols via Brønsted acid catalysis. *Org. Lett.* 22, 400–404 (2020).
57. Zheng, K., Qin, B., Liu, X. & Feng, X. Highly enantioselective allylation of α-ketoesters catalyzed by N,N'-dioxide-In(III) complexes. *J. Org. Chem.* 72, 8478–8483 (2007).
58. Robbins, D. W. et al. Practical and broadly applicable catalytic enantioselective additions of allyl-B(pin) compounds to ketones and α-ketoesters. *Angew. Chem. Int. Ed.* 55, 9610–9614 (2016).
59. Singh, P., Mittal, A., Kaur, P. & Kumar, S. De-novo approach for a unique spiro skeleton-1,7-dioxa-2,6-dioxospiro[4.4]nonanes. *Tetrahedron* 62, 1063–1068 (2006).
60. Nevagi, R. J., Dighe, S. N. & Dighe, S. N. Biological and medicinal significance of benzofuran. *Eur. J. Med. Chem.* 97, 561–581 (2015).
61. Wang, H., Li, G., Engle, K. M., Yu, J.-Q. & Davies, H. M. L. Sequential C–H functionalization reactions for the enantioselective synthesis of highly functionalized 2,3-dihydrobenzofurans. *J. Am. Chem. Soc.* 135, 6774–6777 (2013).
62. Kelley, C. J. et al. Polyphenolic acids of Lithospermum ruderale (Boraginaceae). I. Isolation and structure determination of lithospermic acid. *J. Org. Chem.* 40, 1804–1815 (1975).
63. Abd-Elazem, I. S., Chen, H. S., Bates, R. B. & Huang, R. C. C. Isolation of two highly potent and non-toxic inhibitors of human immunodeficiency virus type 1 (HIV-1) integrase from *Salvia miltiorrhiza*. *Antiviral Res.* 55, 91–106 (2002).
64. Shigematsu, T. et al. Inhibition of collagen hydroxylation by lithospermic acid magnesium salt, a novel compound isolated from Salviae miltiorrhizae Radix. *Biochim. Biophys. Acta, Gen. Subj.* 1200, 79–83 (1994).
65. O'Malley, S. J., Tan, K. L., Watzke, A., Bergman, R. G. & Ellman, J. A. Total synthesis of (+)-lithospermic acid by asymmetric intramolecular alkylation via catalytic C–H bond activation. *J. Am. Chem. Soc.* 127, 13496–13497 (2005).
66. Wang, D.-H. & Yu, J.-Q. Highly convergent total synthesis of (+)-lithospermic acid via a late-stage intermolecular C–H olefination. *J. Am. Chem. Soc.* 133, 5767–5769 (2011).
67. Apers, S., Vlietinck, A. & Pieters, L. Lignans and neolignans as lead compounds. *Phytochem. Rev.* 2, 201–207 (2003).
68. Pieters, L., de Bruyne, T., Claeys, M. & Vlietinck, A. Isolation of a dihydrobenzofuran lignan from South American dragon's blood (*Croton* spp.) as an inhibitor of cell proliferation. *J. Nat. Prod.* 56, 899–906 (1993).
69. Hemelaere, R., Carreaux, F. & Carboni, B. A diastereoselective route to trans-2-aryl-2,3-dihydrobenzofurans through sequential cross-metathesis/isomerization/allylboration reactions: synthesis of bioactive neolignans. *Eur. J. Org. Chem.* 2015, 2470–2481 (2015).
**Figures**
Figure 1

Reaction Design. a, Transition-metal-catalyzed enantioselective allylic C–H functionalization via allyl–metal complexes of different natures. b, Co-catalyzed allylic C–H functionalization to generate a nucleophilic allyl–Co complex; subsequent reactions with CO\textsubscript{2} delivered linear products, whereas transformations of the allyl–Co complexes produced from C–H activation with ketones provided a mixture of linear and branched products without efficient control of diastereoselectivity. c, Proposed catalytic cycle for Co-catalyzed enantioselective allylic C–H functionalization, with the issues that need to be addressed.
Figure 2
Examination of Co Catalysts for Enantioselective Allylic C–H Functionalization. Reactions were carried out under N\textsubscript{2} atmosphere; see Supplementary Information for details. NA, not applicable; ND, not determined. * Yield of purified products. † Site selectivity and diastereomeric ratios were determined by analysis of 400 MHz \textsuperscript{1}H NMR spectra. ‡ Enantiomeric ratios were determined by HPLC analysis. See Supplementary Information for details.
Figure 3
Substrate Scope for Co-Catalyzed Enantioselective Allylic C–H Functionalization of Allylbenzenes with Aldehydes.
Figure 4
Substrate Scope for Co-Catalyzed Enantioselective Allylic C–H Functionalization of 1,4-Dienes.
8a: 23 °C; 67% yield; 91:9 dr, 95:5 er
8b: 23 °C; 41% yield; 91:9 dr, 94:6 er
8c: 23 °C; 96% yield; >95:5 dr, 87:13 er
8d: 23 °C; 52% yield; 91:9 dr, 94.5:5.5 er
8e: 23 °C; 78% yield; 91:9 dr, 95:5 er
8f: 23 °C; 87% yield; 91:9 dr, 94:6 er
8g: 23 °C; 82% yield; 91:9 dr, 94:6 er
8h: 23 °C; 87% yield; 92:8 dr, 95:5 er
8i: 23 °C; 88% yield; 91:9 dr, 93:7 er
8j: 23 °C; 82% yield; 91:9 dr, 95:5 er
8k: 23 °C; 61% yield; 89:11 dr, 95:5 er
8l: 23 °C; 63% yield; 91:9 dr, 93:7 er
8m: 23 °C; 59% yield; 91:9 dr, 94:6 er
8n: 23 °C; 60% yield; 89:11 dr, 94:6 er
8o: 23 °C; 61% yield; 91:9 dr, 91:9 er
8p: 23 °C; 46% yield; 91:9 dr, 92:8 er
8q: 23 °C; 62% yield; 92:8 dr, 91:9 er
8r: 65 °C; 42% yield; 87:13 dr, 90:10 er
8s: 65 °C; 38% yield; 90:10 dr, 90:10 er
8t: 65 °C; 40% yield; >95:5 dr, 91:9 er
Unsuccessful alkenes: 1,4-enynes (Ph-, MeO₂C-, Me- and MeO-substituted; structures shown in the original figure).
Figure 5
Substrate Scope for Co-Catalyzed Enantioselective Allylic C–H Functionalization with α-Ketoesters.
Figure 6

Application of Co-Catalyzed Enantioselective Allylic C–H Functionalization to the Synthesis of Biologically Active Natural Products. a, Numerous natural products and their derivatives with diverse biological activities contain a 2,3-disubstituted benzofuran core. b, A known intermediate 15 that can be converted to lithospermic acid was prepared in 6 steps enantioselectively. c, The first catalytic enantioselective total synthesis of dihydrodehydrodiconiferylalcohol was accomplished in 9 steps.
Figure 7

Preliminary Mechanistic Studies. a, Competitive reactions of allylbenzene 1d and 1,4-diene 11a with their deuterated counterparts showed similar rates of C–H and C–D bond cleavage. b, The allylic C–H bond is cleaved preferentially over that of α-methylstyrene, and α-methylstyrenes react under the reaction conditions with lower stereoselectivity than the corresponding allylbenzenes. c, Transformation of an allylbenzene bearing an (E)-deuterium led to isomerization of the alkene stereochemistry.
Supplementary Files
This is a list of supplementary files associated with this preprint.
- SI.pdf
- taa.cif
- aa.cif
- ta.cif
RESPONSE OF THE GUINEA PIG HEART TO HYPOTHERMIA
CHARLES G. WILBER AND FRANCES J. ZEMAN
Colorado State University, Fort Collins, Colorado and
University of California, Davis, California
ABSTRACT
Electrocardiograms were obtained from guinea pigs acclimatized to 8°C and 22°C respectively and then observed at 4°–5°C. No differences were recorded in cardiac responses to cold between guinea pigs from the two acclimatization temperatures. Heart rate decreased linearly with body temperature. Various durations on the ECG record varied non-linearly with temperature according to $y = a + b/(x-c)$, where $y$ is a duration (e.g. of the T wave), $x$ is colonic temperature in °C, and $a$, $b$, and $c$ are fitted constants. The $Q_{10}$ value of the heart rate varies with temperature.
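The hyperbolic relation in the abstract implies that ECG durations lengthen sharply as body temperature falls toward the constant $c$. A sketch with purely illustrative constants (the paper's fitted values are not reproduced in this excerpt):

```python
def ecg_duration(x: float, a: float, b: float, c: float) -> float:
    """Model y = a + b/(x - c): y is a wave duration (s), x is colonic
    temperature (deg C); a, b, c are fitted constants (illustrative here).
    """
    return a + b / (x - c)

# With made-up constants a=0.05, b=1.0, c=10.0, cooling from 35 to 15 deg C
# lengthens the modeled duration, consistent with the trend described.
warm = ecg_duration(35.0, a=0.05, b=1.0, c=10.0)
cool = ecg_duration(15.0, a=0.05, b=1.0, c=10.0)
print(warm, cool)
```

Note that the model diverges at $x = c$, so it is only meaningful for temperatures above that asymptote.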
A detailed study of the electrocardiogram of the guinea pig has been published (Zeman and Wilber, 1965). It seemed desirable to follow this with a report on the response of the heart in the guinea pig, *Cavia porcellus*, to hypothermia. The report is similar to previous ones published on the primate heart (Wilber, 1964) and the heart in cold-blooded animals (Wilber, 1960; 1962).
**Figure 1.** Diagrammatic representation of a normal human electrocardiogram showing the durations of the various waves. The P wave represents the atrial excitation period; the PR interval, the time for travel of excitation from the sinuatrial node to the ventricle; QRS, the time for excitation to spread through the walls of the ventricles; QT, the duration of ventricular systole; the T wave, the oxidative recovery period of ventricular muscle.
**MATERIALS AND METHODS**
Electrocardiograms were obtained from resting unanesthetized guinea pigs using methods previously described (Zeman and Wilber, 1965). Colonic temperatures were recorded with the aid of a thermistor inserted 8.5 centimeters into the colon of each guinea pig. Measurements were made in lead II of: durations of P waves, PR intervals, QRS complexes, QT intervals, T waves, RR intervals,
---
¹Manuscript received April 18, 1967.
*The Ohio Journal of Science* 68(4): 229, July, 1968.
PR segments, and ST intervals; the heart rate was also calculated from the tracings.
To induce hypothermia, the animals were exposed in a cold chamber held at 4–5°C. To ensure comfort, the animals were supported in a rubber sling.
A total of 24 male guinea pigs, each weighing between 235 and 425 grams, was used. Twelve were kept at 22°C for two weeks before cold exposure; twelve were kept at 8°C for two weeks before cold exposure.
This research was conducted in accordance with the "Principles of Laboratory Animal Care" of the American Physiological Society and the National Society for Medical Research. The work was supported by the United States Air Force under contract number AF41(609)–2700, monitored by the Arctic Aeromedical Laboratory, Fort Wainwright, Alaska. The views expressed are those of the authors and do not necessarily reflect official Air Force opinion. The data were analyzed on an IBM 1620 computer (University of Delaware Computer Center Library Program 6.0.134).
RESULTS
The results are summarized in table 1 in the form of mean values. It is quickly evident, by inspection, that there are no significant differences in the hypothermic response of the guinea pigs acclimated to 22°C as compared with those acclimated to 8°C (table 1). Detailed individual results for each animal plus the complete computer printout have been deposited with the Defense Documentation Center (Wilber and Zeman, 1966). A diagram of the normal human electrocardiogram is shown in figure 1 for purposes of comparison.
DISCUSSION
Within the range of temperature studied in this investigation, the heart rate (HR) varies with temperature (T°C) in a linear fashion:
\[ \text{HR} = -382.52 + 19.88 \ T^\circ C \]
It is evident that the heart in the guinea pig ceases to beat long before the body temperature reaches 0°C; in fact, at about 18–19°C the heart has ceased to beat. In the few instances in which the heart still beats at a body temperature below 18°C, the beat is irregular and does not persist for more than a minute or so.
The variables obtained by measuring different durations on lead II of the electrocardiogram showed a non-linear relationship to body temperature. These relationships were of the general form:
\[ y = a + b/(x-c) \]
where \( y \) is the dependent variable (e.g. the duration of the T wave as measured in lead II of the electrocardiogram), \( x \) is the body temperature in degrees Celsius, and \( a \), \( b \), and \( c \) are constants fitted for each measured variable (e.g. the P wave or the QT interval); \( b \) governs how steeply each duration lengthens as the body cools, and \( c \) is the body temperature at which the fitted duration increases without bound.
The computer program calculated prediction equations for relating the values of various measurements on the electrocardiograms to the respective body temperatures of the guinea pigs on which these measurements were made. These prediction equations are given as follows:
\[
\begin{align*}
P \text{ wave} &= 9.32 + 437/(x-16.06) \pm 14 \\
PR \text{ interval} &= 2239/(x-10.10) - 34.37 \pm 21 \\
QT \text{ interval} &= 2607/(x-13.26) - 17.64 \pm 17 \\
T \text{ wave} &= 5.29 + 607.3/(x-13.18) \pm 10 \\
RR \text{ interval} &= 5878/(x-15.97) - 120.18 \pm 50 \\
PR \text{ segment} &= 2470/(x-2.74) - 53.01 \pm 14 \\
ST \text{ interval} &= 1927/(x-14.13) - 13.53 \pm 16
\end{align*}
\]
In the above equations, the dependent variables are measured in milliseconds, and \( x \) is the body temperature in degrees Celsius. The number preceded by the symbol \( \pm \) is the standard error of estimate for the different prediction equations.
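As a quick numerical check, the prediction equations can be coded directly. The sketch below (function names are ours, not part of the original IBM 1620 program) confirms that each fitted duration lengthens as body temperature falls:

```python
# Prediction equations from the text: duration in ms as a function of
# colonic temperature x (deg C). Each has the form y = a + b/(x - c).
def p_wave(x):      return 9.32 + 437.0 / (x - 16.06)
def pr_interval(x): return 2239.0 / (x - 10.10) - 34.37
def qt_interval(x): return 2607.0 / (x - 13.26) - 17.64
def t_wave(x):      return 5.29 + 607.3 / (x - 13.18)
def rr_interval(x): return 5878.0 / (x - 15.97) - 120.18

# Every duration is longer at 25 deg C than at the normal 38 deg C,
# i.e. the hypothermic heart slows in all phases of the cycle.
for f in (p_wave, pr_interval, qt_interval, t_wave, rr_interval):
    assert f(25.0) > f(38.0)
```

Note that each fitted constant $c$ lies between 10°C and 16°C, so the predicted durations grow without bound as the body temperature approaches the range at which, as discussed below, the heart ceases coordinated beating.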
The QRS interval is measured in milliseconds. It reflects the spread of the wave of negativity (which is the excitatory phenomenon in the heart) through the branches of the cardiac conducting system and the subsequent stimulation of the ventricles to contract. The present data indicate that the QRS complex increases in duration linearly with decreasing temperature of the body until the colonic temperature is 26°C, at which time the QRS duration is 33±6 milliseconds. At that temperature there is a break in the line which relates body temperature to duration of the QRS complex; further decrease in the colonic temperature is followed by additional lengthening of the QRS complex, but at a steeper slope until the value is 53±10 milliseconds at a colonic temperature of 22°C.
Above a colonic temperature of 26°C, the QRS duration varies as follows:
\[ QRS = 60 - 1.05 \times x \]
where \( x \) is the colonic temperature in degrees Celsius. Below 26°C colonic temperature, the QRS duration varies with the temperature as follows:
\[ QRS = 196 - 6.5 \times x. \]
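A short sketch (the function name is ours) of the two regression lines with the break at 26°C reproduces the endpoint values quoted above:

```python
def qrs_duration(x):
    """Predicted QRS duration (ms) at colonic temperature x (deg C),
    using the two regression lines quoted in the text, which meet at
    the break near 26 deg C."""
    return 60.0 - 1.05 * x if x >= 26.0 else 196.0 - 6.5 * x

# The upper line reproduces the reported 33 ms at 26 deg C, and the
# steeper lower line the reported 53 ms at 22 deg C:
assert round(qrs_duration(26.0)) == 33   # 60 - 1.05*26 = 32.7
assert round(qrs_duration(22.0)) == 53   # 196 - 6.5*22 = 53.0
```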
The temperature coefficient (\( Q_{10} \)) of the heart varies with temperature as is shown below:
| Temperature Range °C | \( Q_{10} \) |
|----------------------|-------------|
| 22–32 | 4.5 |
| 24–34 | 3.1 |
| 26–36 | 2.5 |
| 28–38 | 2.1 |
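To within rounding of the fitted coefficients, the tabulated $Q_{10}$ values can be recovered from the linear heart-rate equation given earlier; a brief sketch (helper names are ours):

```python
def heart_rate(t):
    """Fitted heart rate (beats/min) at colonic temperature t (deg C),
    from HR = -382.52 + 19.88 * T."""
    return -382.52 + 19.88 * t

def q10(t_low):
    """Q10 over the 10-degree span starting at t_low: since the span is
    exactly 10 deg C, Q10 is simply the ratio of the two rates."""
    return heart_rate(t_low + 10.0) / heart_rate(t_low)

# Agreement with the table above, within rounding of the linear fit:
assert abs(q10(22.0) - 4.5) < 0.2   # table: 4.5
assert round(q10(24.0), 1) == 3.1   # table: 3.1
assert round(q10(26.0), 1) == 2.5   # table: 2.5
assert round(q10(28.0), 1) == 2.1   # table: 2.1
```

The rise of $Q_{10}$ at the colder ranges follows directly from the linear fit: the same 10°C span multiplies an ever smaller baseline rate.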
The $Q_{10}$ value for most chemical reactions is somewhere between 2 and 3. This signifies that, for a $10^\circ C$ increase in temperature, the rate of the reaction is doubled or trebled. In most enzyme reactions, an increase of $10^\circ C$ in the reaction mixture results in a doubling of the rate of reaction.
If the equation relating heart rate to body temperature is solved for temperature at zero heart rate, the result indicates that, in the guinea pig, the heart would theoretically cease to function at about $19^\circ C$. Previous work (Nardone, Wilber, and Musacchia, 1955) has suggested that, at about $16-18^\circ C$, the mammalian heart generally stops functioning in a coordinated manner.
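The arithmetic can be checked in one line (the variable name is ours):

```python
# HR = -382.52 + 19.88 * T; setting HR = 0 gives T = 382.52 / 19.88.
t_arrest = 382.52 / 19.88
assert round(t_arrest, 1) == 19.2   # about 19 deg C, as stated
```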
The relation of the QT interval to the RR interval has been proposed as follows:
$$QT = k/RR$$
(Lepeschkin, 1951)
In the present study, $k$ varied with the heart rate as shown below:
| Heart Rate | $k$ |
|------------|-----|
| 376 | 6.96|
| 288 | 7.21|
| 165 | 7.98|
| 77 | 9.11|
The above variation of $k$ may not have great significance. The mean value for $k$ in a sample of 34 presumably normal guinea pigs has been reported as $11.94 \pm 0.81$ (Zeman and Wilber, 1965). The value of $k$ has been found to increase with depressed heart rate and temperature in the monkeys *Ateles* and *Cebus* (Wilber, 1964).
The PR interval measures atrio-ventricular transmission time directly. The reciprocal of the PR interval is a useful gauge of the relative speed of passage of the electrical impulse from atrium to ventricle. In the guinea pigs, the average relative speed of transmission from atrium to ventricle decreased about four-fold as the body temperature fell from $38^\circ C$ to $22^\circ C$. This means that the relative speed of transmission decreased by about 76 percent of the control value over a $16^\circ C$ fall in body temperature, equal to about a 4.8 percent decrease in relative speed of transmission per degree Celsius fall in colonic temperature. In the monkey *Cebus*, there is reported to be about a three percent decrease in transmission speed per degree Celsius fall in body temperature (Wilber, 1964).
The relative QT interval, $(QT/RR) \cdot 100$, varied with body temperature and with heart rate as follows:
| Body Temperature $^\circ C$ | Heart Rate | Relative Q-T Interval |
|-----------------------------|------------|-----------------------|
| 38 | 376 | 52 |
| 34 | 288 | 51 |
| 28 | 165 | 41 |
| 24 | 98 | 39 |
| 22 | 77 | 32 |
**SUMMARY**
The time relations of the electrocardiogram of adult male guinea pigs subjected to hypothermia are given. The data indicate that there is no difference in cardiac response to hypothermia in guinea pigs acclimated to $22^\circ C$ as compared with those acclimated to $8^\circ C$. $Q_{10}$ value of the heart rate varies with temperature, but has an overall mean value of about 3.
REFERENCES
Lepeschkin, E. 1951. Modern Electrocardiography. Vol. 1. Williams and Wilkins, Baltimore.
Nardone, R., C. G. Wilber and X. J. Musacchia. 1955. Electrocardiogram of the opossum during exposure to cold. Am. J. Physiol. 181: 352–356.
Wilber, C. G. 1960. Effect of temperature on the heart in the alligator. Amer. J. Physiol. 198: 861–863.
———. 1962. Some circulatory problems in reptiles and amphibians. Ohio J. Sci. 62: 132–138.
———. 1964. Response of heart in two new world monkeys to hypothermia. Comp. Biochem. Physiol. 11: 323–327.
——— and Frances Zeman. 1966. The effect of ascorbic acid supplementation and acclimatization to cold on the electrocardiogram of the guinea pig exposed to cold. DA-49-193MD-2627. Defense Doc. Ctr. Alexandria, Va. 27+61 pp.
Zeman, Frances J. and C. G. Wilber. 1965. Some characteristics of the guinea pig electrocardiogram. Life Sci. 4: 2269–2274.
|
Experimental Study of the Back Effect in Mechano-Sliding Fatigue of a 0.45% Carbon Steel–Silumin Active System*
A. Bogdanovich and I. Lis
Yanka Kupala State University of Grodno, Grodno, Belarus
email@example.com
UDC 539.4

Abstract. Damage accumulation in the metal–metal sliding friction zone during wear-fatigue tests with an active system is studied experimentally. The experimental procedure is described. Experimental values of the characteristics of the metal's resistance to wear-fatigue in the friction unit under study are obtained. The results are analyzed, and new data are presented on the back effect in wear-fatigue tests of 0.45% carbon steel with a silumin active system.

Keywords: wear-fatigue, sliding friction, back effect.
Introduction. The back effect is defined as a change in the characteristics of the friction and wear process caused by the action of repeated stresses [1]. An experiment comprising two stages was planned:
sliding friction (sliding fatigue) test;
wear-fatigue (mechano-sliding fatigue) test.
The purpose of the first stage is to establish the regularities of the wear process, so that they can then be compared with the test results of the second stage.
1. Methods of Research.
1.1. Sliding Fatigue Tests. The model of the active system for the sliding fatigue tests is shown in Fig. 1. Specimen 1, made of 0.45% carbon steel with a test-portion diameter of 10 mm, is cantilever-mounted in the spindle 2 of a UKI-6000-2 testing machine and rotates at a frequency of 3000 min$^{-1}$. Counterspecimen 3, 4 mm wide and made of silumin, is pressed against the critical cross section of specimen 1 with a force $F_N$ whose magnitude is held constant during the tests of each specimen/counterspecimen friction pair. The lubricant, a universal all-weather engine oil (SuperLuxoil SAE 15W-40), is supplied to the friction zone dropwise. The cumulative linear wear $i$ of the friction pair is measured periodically, with a precision of 2 $\mu$m, at the local points 1–8 spaced uniformly around the circumference of the critical cross section of the specimen (Fig. 1b).

* Report on International Colloquium “Mechanical Fatigue of Metals” (13–15 September 2010, Opole, Poland).

© A. BOGDANOVICH, I. LIS, 2011

ISSN 0556-171X. Проблемы прочности, 2011, № 4 59

**Fig. 1.** Scheme of sliding fatigue tests (a) and scheme of the wear $i$ measuring (b).
The cumulative wear of the specimen/counterspecimen friction pair $i_f^{lim} = 100$ $\mu$m was accepted as the limiting state. The test base was $10^7$ cycles, in accordance with the state standard of Belarus STB 1448-2004 [2]. Consider, as an example, some test results for the examined friction pair at a contact load $F_N = 280$ N. The circle diagram of wear $i$, $\mu$m, for each of the 8 points spaced uniformly around the circumference of the critical cross section of the specimen is shown in Fig. 2. The values of $i$ at these local points for a given number of loading cycles $N$ are connected by straight lines. It is seen that the wear process is nonuniform around the circumference of the specimen, with the greatest irregularity at the initial stage (diagram $a$ in Fig. 2). The irregularity decreases as the number of cycles grows. The greatest wear, at $N = 6.5 \cdot 10^6$ cycles, occurred at local points 5 and 6 (101 and 103 $\mu$m).
The irregularity of the wear process is caused primarily by differences in the physical and mechanical properties of the surface layer of the metal.

**Fig. 2.** The circle diagram of wear at contact load $F_N = 280 \text{ N}$ for the following numbers of cycles: (a) $0.2 \cdot 10^6$; (b) $0.51 \cdot 10^6$; (c) $1.0 \cdot 10^6$; (d) $2.2 \cdot 10^6$; (e) $6.5 \cdot 10^6$.
The kinetics of wear at the local points 1, 2, 5, and 6 is presented in Fig. 3. The experimental points are well described by a power-law dependence of the form \( i = aN^b + c \). The values of the coefficients \( a \), \( b \), and \( c \) and of the correlation coefficient \( k \) were determined with the MathCAD system and are listed in Table 1.
**Table 1**
*The Characteristics of the Equations of Regression of the Wear for the Local Points*

| Coefficient | Point 1 | Point 2 | Point 3 | Point 4 | Point 5 | Point 6 | Point 7 | Point 8 |
|-------------|---------|---------|---------|---------|---------|---------|---------|---------|
| \( a \) | 1.995 | 2.945 | 2.030 | 1.851 | 0.522 | 0.444 | 0.218 | 0.985 |
| \( b \) | 0.251 | 0.228 | 0.252 | 0.257 | 0.337 | 0.348 | 0.390 | 0.295 |
| \( c \) | −4.307 | −5.107 | −5.150 | −3.882 | −2.468 | −1.695 | −2.231 | −3.328 |
| \( k \) | 0.986 | 0.985 | 0.986 | 0.986 | 0.978 | 0.976 | 0.969 | 0.983 |
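As a check, the regression with the Table 1 coefficients reproduces the wear values quoted above for points 5 and 6 at $N = 6.5 \cdot 10^6$ cycles; a brief sketch, assuming $N$ in the regression is expressed in cycles (the names are ours):

```python
# Table 1 coefficients (a, b, c) for local points 5 and 6
COEFFS = {5: (0.522, 0.337, -2.468),
          6: (0.444, 0.348, -1.695)}

def wear(point, n_cycles):
    """Cumulative linear wear i (um) after n_cycles, from i = a*N**b + c."""
    a, b, c = COEFFS[point]
    return a * n_cycles ** b + c

# At N = 6.5e6 cycles the text reports 101 and 103 um at points 5 and 6:
assert round(wear(5, 6.5e6)) == 101
assert round(wear(6, 6.5e6)) == 103
```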

Figure 3 shows that, at the given contact load, the most intensive wear occurs during the first 700–800 thousand cycles of loading. This period is termed the running-in stage. It is caused by the cutting away of the microprotuberances of the surface; as a result, the real area of contact between the friction surfaces grows and the linear wear intensity decreases (the curves in Fig. 3 become more gently sloping after 800 thousand cycles of loading).
The same regularities of the wear process are retained at other values of the contact load, whose magnitude has a major effect on the wear intensity.
1.2. Mechano-Sliding Fatigue Tests. The model of the active system for the mechano-sliding fatigue tests is shown in Fig. 4. A bending load $Q$ is applied to the free end of specimen 1, providing a cyclic bending stress amplitude $\sigma_a$ in the critical section of the specimen. The other test conditions remained unchanged.
In the first series of wear-fatigue tests the amplitude was held at a fixed level, $\sigma_a = 160$ MPa.

The test results at a contact load $F_N = 300$ N and a cyclic stress amplitude $\sigma_a = 160$ MPa are displayed in the circle diagram of wear (Fig. 5) and in the kinetic curves of wear at the local points (Fig. 6). As in the other cases, the wear process is nonuniform around the circumference of the specimen.

Figure 6 shows that the active system has the greatest wear intensity during the first 80–90 thousand cycles. Wear-fatigue tests of the active system were also carried out at $\sigma_a = 256$ MPa (the next series); the character of the irregularity and the wear intensity were retained.
2. Results of Experiments and Discussion. A comparison of some of the test results is presented in Fig. 7. At $F_N = 410$ N the wear intensity decreases as the amplitude of the cyclic bending stresses in the critical section of the specimen increases.
Fig. 6. The kinetic curves of wear at local points 1, 2, 3, and 6 at $F_N = 300$ N and $\sigma_a = 160$ MPa.
Fig. 7. The kinetic curves of wear (mean values).
However, at a constant contact load $F_N = 250$ N the wear intensity increases when the amplitude of the cyclic bending stresses $\sigma_a$ in the critical section of the specimen increases from 0 to 160 MPa, and decreases sharply when $\sigma_a$ grows to 256 MPa.
In total, 16 active systems were tested at contact loads ranging from 140 to 450 N.
The sliding fatigue curve (curve 1 in Fig. 8) is plotted in semilogarithmic coordinates, contact load $F_N$ versus durability $N$, the latter defined as the logarithm of the number of cycles to reach the limiting wear $i_f^{\text{lim}}$. Similar curves built from sliding fatigue tests of other materials are given in [1, 2].

**Fig. 8.** The experimental sliding (1) and mechano-sliding (2, 3) fatigue curves for the 0.45% carbon steel–silumin friction pairs and active systems.
Curve 1 has three characteristic sections: (I) the field of quasistatic fracture (up to approximately $N = 4.2 \cdot 10^6$ cycles, at contact loads $F_N = 320$–450 N), and (II) and (III) the fields of low-cycle and multicycle fracture ($F_N = 150$–320 N), respectively. The boundary between fields II and III could not be determined, so the experimental points there are approximated by a single line. As Fig. 8 shows, sliding fatigue curve 1 has two branches: a left branch with a large inclination and a right branch that is almost vertical. The contact load corresponding to the sliding fatigue limit of the investigated friction pair is $F_f = 150$ N.
Mechano-sliding fatigue curves (curves 2 and 3 in Fig. 8) are built analogously. The values of the sliding and mechano-sliding fatigue characteristics of the 0.45% carbon steel–silumin active systems are given in Table 2; they correspond to [2]. As can be seen from Fig. 8 and Table 2, mechano-sliding fatigue curve 2 is built from test results at contact loads from 200 to 450 N with a cyclic bending stress amplitude of 160 MPa in the critical section of the specimen, and mechano-sliding fatigue curve 3 is built from test results at contact loads $F_N = 250$–410 N with a cyclic stress amplitude $\sigma_a = 256$ MPa.

**Table 2**

*The Sliding and Mechano-Sliding Fatigue Characteristics for the 0.45% Carbon Steel–Silumin Active Systems*

| Characteristic | Sliding fatigue curve | Mechano-sliding curve, $N(F_N, \sigma_a = 160 \text{ MPa})$ | Mechano-sliding curve, $N(F_N, \sigma_a = 256 \text{ MPa})$ |
|----------------|----------------------|----------------------|----------------------|
| Fatigue limit, N | $F_f = 150$ | $F_{f\sigma} = 240$ | $F_{f\sigma} = 260$ |
| Turning point of fatigue curve, cycles | $N_{FG} = 5.0 \cdot 10^6$ | $N_{FG} = 10 \cdot 10^6$ | $N_{FG} = 6.3 \cdot 10^6$ |
| Fatigue curve exponent | $m_F = 0.05$ | $m_{FG} = 0.19$ | $m_{FG} = 0.51$ |
The contact load corresponding to the mechano-sliding fatigue limit of the examined active system is $F_{f\sigma} = 240$ N (at $\sigma_a = 160$ MPa) and $F_{f\sigma} = 260$ N (at $\sigma_a = 256$ MPa), which exceeds the contact load corresponding to the sliding fatigue limit, $F_f = 150$ N, by factors of 1.6 and 1.7, respectively.
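The quoted factors follow directly from the limiting loads (the variable names are ours):

```python
F_f = 150.0          # sliding fatigue limit load, N
F_fs_160 = 240.0     # mechano-sliding limit at sigma_a = 160 MPa, N
F_fs_256 = 260.0     # mechano-sliding limit at sigma_a = 256 MPa, N

# 240/150 = 1.6 and 260/150 = 1.73, the "1.6 and 1.7 times" in the text:
assert round(F_fs_160 / F_f, 1) == 1.6
assert round(F_fs_256 / F_f, 1) == 1.7
```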
The field of quasistatic fracture is detected only on sliding fatigue curve 1 (Fig. 8). For curves 2 and 3 this field probably lies above the peak contact load ($F_N = 450$ N) adopted for these tests.
The fatigue curve exponents $m_F$ and $m_{FG}$ in the fields of low-cycle and multicycle fracture increase with increasing amplitude of the cyclic bending stresses $\sigma_a$.
Thus, new regularities of the back effect in mechano-sliding fatigue of the 0.45% carbon steel–silumin active systems have been found.
1. L. A. Sosnovskiy, *Tribo-Fatigue. Wear-Fatigue Damage and Its Prediction (Foundation of Engineering Mechanics)*, Springer (2004).
2. *Tribo-Fatigue. Wear-Fatigue Tests Methods. Sliding-Mechanics Fatigue Tests* [in Byelorussian], The Standard of Republic of Belarus, STB 1448-2004, Minsk (2004).
Received 10.02.2011
|
Turkish government fiercely rules out third-party involvement in peace process
Hurriyet Daily News, 17.11.2014
The Turkish government has categorically closed the door on the involvement of any third party in the peace process, which it labels as its brainchild. “This process is a local process,” Deputy Prime Minister Yalcin Akdogan said about the process aimed at ending the three-decade-long conflict between the outlawed Kurdish Workers’ Party (PKK) and Turkish security forces.
“It is a process which Turkey implemented with its own will. We don’t believe that the inclusion of a different country, mechanism, system, organization or structure would be right,” Akdogan said.
“Turkey is advancing this process with its own opportunities and capabilities,” he added. Akdogan responded to various questions in a meeting with the state-run Anadolu Agency’s “Editor Desk.” Akdogan’s strongly worded remarks were an apparent response to the PKK’s demand for an international mediator, possibly the United States, to help get peace talks with Turkey back on track and avert an escalation of their insurgency. “We have now reached the point where there has to be movement. That is why we are suggesting a third power observe this process. This could be the United States. It could also be an international delegation. We need a go-between, we need observers. We would also accept the Americans. From our view, it is moving in this direction,” Cemil Bayik, a founding member and leading figure of the PKK, told Austrian newspaper Der Standard in early November.
The U.S., like its NATO ally Turkey and the European Union, classifies the PKK as a terrorist organization. Furthermore, Akdogan also argued that the involvement of a third country would complicate the process. As an example of such a complication, he cited the dialogue held between state officials and the PKK abroad between 2009 and 2011 in a series of meetings publicly known as the “Oslo talks.” The talks collapsed after a PKK attack killed 13 soldiers near Diyarbakir in July 2011. Akdogan’s remarks on the peace process were followed by a make-or-break meeting with a delegation from the Peoples’ Democratic Party (HDP) later in the same day, with both sides emerging to release carefully constructed announcements to help the dialogue channels remain open on the process. While describing the climate concerning the peace process as “moderate,” Akdogan underlined the need for patience, determination and sincerity in order to find a resolution to the chronic problem.
Akdogan made the remarks before holding the meeting with the HDP’s parliamentary delegation. Emphasizing that dialogue is the basis of the peace and resolution process, Akdogan said, “The meeting with the HDP is substantial.” Speaking to reporters after the meeting, HDP deputy parliamentary group chair Idris Baluken described the meeting as “positive.” Baluken said they discussed ways of keeping dialogue channels open, while noting that such meetings would be held
later this week in order to discuss practical steps to be taken in the upcoming phase of the process. “We have arrived at a consensus to keep dialogue channels open. Important assessments have also been made about moving on in a more constructive way, in regards to the language being used and policies that are being spoken of, for the acceleration of the negotiation process,” Baluken said. As he noted, they also discussed some “practical steps,” including the HDP parliamentary delegation’s visit to the jailed leader of the PKK, Abdullah Ocalan. Baluken sounded confident that the visit would take place shortly, although an exact date is yet to be set.
Ocalan, serving a life sentence on İmralı Island in the Sea of Marmara, has been in dialogue with state officials and the HDP and its predecessor, the Peace and Democracy Party (BDP), since at least late 2012 and is playing a central role in the process. However, due to the recent tension, no parliamentary delegation has been able to visit Ocalan since Oct. 22. At the meeting with Akdogan, Baluken was accompanied by the HDP’s other deputy parliamentary chair, Pervin Buldan, and HDP Istanbul deputy Sirri Sureyya Onder. All three lawmakers are frequent visitors to Ocalan as part of the process. The government and the HDP also exchanged views on the inclusion of a monitoring team within the process, Baluken acknowledged. Noting that the government had no objection on that particular issue, Baluken highlighted that the team could be composed of prominent opinion leaders from Turkey.
On the same day, a presentation in Parliament by Interior Minister Efkan Ala on his ministry’s budget, as part of ongoing deliberations on the 2015 Central Governance Budget Law at Parliament’s Planning and Budget Commission, offered an opportunity for opposition lawmakers to attack the government’s policy on the Kurdish issue. Mehmet Gunal of the Nationalist Movement Party (MHP) argued that a chapter in the presentation titled “struggle with terrorism” should actually be changed to “negotiation with terrorism.” Meanwhile, a heated exchange of words took place between Gunal and the HDP’s Hasip Kaplan over the use of the word “Kurdish.” “In the 21st century, denying Kurds and Kurdish is burying your head in the sand,” Kaplan said. When Gunal taunted deputies from the ruling Justice and Development Party (AKP) for letting Kaplan use the word “Kurdish,” the tension escalated, prompting the president of the commission to call a break in the session.
Turkey, Iraq refresh vows to mend ties
Hurriyet Daily News, 20.11.2014
Turkey and Iraq have reinforced their pledge to mend ties during the Turkish prime minister’s visit to the Iraqi capital of Baghdad, agreeing on closer cooperation against the threat of Islamist jihadists and to revive economic ties with greater momentum than before.
Turkish Prime Minister Ahmet Davutoglu’s visit to the country, a vast part of which has been swept by Islamic State of Iraq and the Levant (ISIL) militants since June, is the latest in a series of signs marking a determination to open a new page in bilateral relations, which were strained during the former Iraqi government’s reign.
The countries repeatedly clashed over a variety of issues, including the independent oil exports of the Kurdistan Regional Government (KRG) and the Syrian war, during the term of former Iraqi Prime Minister Nouri al-Maliki. However, in the face of a common threat from ISIL and with the relief of a recent Arbil-Baghdad agreement over oil exports, Davutoglu and his Iraqi counterpart Haider al-Abadi were confident of restoring their relations and making them even stronger than before. “Iraq’s stability and peace is Turkey’s stability and peace,” Davutoglu said during a joint news conference with al-Abadi on Nov. 20. “Only a few countries in the world match up to this extent in terms of historical and social relations, as well as the threats and advantages they face,” he stated. Al-Abadi, meanwhile, announced that the neighbors have agreed to share intelligence and information against ISIL.
The Iraqi prime minister added talks on cooperation opportunities over security and military fields will be continued with joint works of the two countries’ delegations. The prime ministers also announced al-Abadi will come to Turkey in December upon Davutoglu’s invitation. “I invited Mr. al-Abadi and his ministers to a joint Cabinet meeting on one of these dates: Dec. 24, 25 or 26 and he accepted it,” Davutoglu said. Al-Abadi also confirmed a visit to Turkey in December is on his agenda. Davutoglu also held a meeting with Iraqi President Fuad Masum and Parliamentary Speaker Salim al-Jabouri before departing for the KRG’s capital Arbil, where he was set to meet with Kurdish officials. Davutoglu’s visit followed a trip to Turkey by Iraqi Foreign Minister Ibrahim al-Jaafari earlier this month that was aimed at patching up the chilly ties between the two neighbors.
Turkey could face an influx of 2-3 million more Syrian refugees if President Bashar al-Assad's forces or Islamic State of Iraq and the Levant (ISIL) insurgents advance around Aleppo, Turkish Foreign Minister Mevlut Cavusoglu said on Nov. 18.
"Right now who is filling the void left by ISIL as a result of the coalition's air attacks? It is the regime," Cavusoglu told a news conference in Ankara. "But there is not that much difference between the ISIL and the regime. Both kill cruelly, especially civilians. And neither hesitate to use whatever weapons are available to them."
U.S. warplanes have been bombing ISIL forces in parts of Syria, but Assad's military has intensified its campaign against some rebel groups in the west and north that Washington sees as allies, including in and around Aleppo. Rebels and Syrian government forces hold parts of Aleppo, Syria's most populous city before the war. ISIL has seized territory from rival Islamist groups in a belt of territory north of Aleppo, threatening rebel supply routes. It also holds large sections of territory elsewhere in the wider Aleppo province. Turkey has been a staunch supporter of the Free Syrian Army (FSA), an umbrella term for dozens of armed groups fighting against both Assad and ISIL. It has also been pushing for the U.S.-led coalition to broaden its campaign to also tackle al-Assad.
"A weakening of the moderate opposition, the FSA backed by the coalition, will make the adverse situation in Syria worse and more unstable," Cavusoglu said. "Fearful civilians are fleeing from areas where ISIL, terrorist groups and the regime are gaining ground. A possible advance in Aleppo would mean the influx of two to three million people to the Turkish border to seek asylum." The Syrian civil war has killed close to 200,000 people and forced more than 3 million refugees to flee the country, according to the United Nations. Turkey already hosts more than 1.5 million refugees and has been pushing the United States and its allies to create a safe haven for displaced civilians on the Syrian side of its border.
Turkish Energy Minister Taner Yildiz and U.S. Secretary of Energy Ernest Moniz signed a memorandum of understanding on improving clean wind energy between Turkey and the U.S. on Nov. 20. The two ministers also said more cooperation is on the road ahead between the two countries in clean energies, and perhaps even nuclear energy.
“The agreement is one step closer to a strong relationship between our countries in economic development, clean energy and addressing our mutual concerns on energy security,” said Moniz.
Yildiz said both countries wanted wind turbines with an installed power of 3,000 megawatts to be manufactured in Turkey, and targeted at least $500 million worth of investments. “Turkey had an installed capacity of 19 megawatts twelve years ago. Now, this is up to over 3,500 megawatts. We aim to encourage the production of wind turbines in Turkey and more cooperation between American and Turkish companies,” he said. The two ministers also discussed the possibility of further collaboration on the construction of nuclear power plants in Turkey, said Moniz. “In the U.S., we remain committed to nuclear power. We are promoting small reactors between 50-200 megawatts for the future,” he added.
Yildiz said the U.S.’s policies on nuclear, coal and renewable energy were praiseworthy, while nuclear energy is integral for Turkey to diversify its energy sources. Yildiz also said Turkey and Japan had agreed to conclude a host country agreement for the second nuclear power plant in Turkey. “I believe the host country agreement will pass our Parliament by the end of this month, or the beginning of December,” Yildiz said at the Q&A session with reporters after the signing ceremony. The volume of Russian natural gas delivered to Turkey via the West Line has recently fallen by 40 percent, Yildiz also noted, although he declined to comment on the reason for the decrease. “Russia’s daily natural gas delivery to Turkey has decreased by nearly 40 percent, which is not good news for Turkey,” he said, adding that the volume of gas from the West Line had fallen from 42 to 27–28 million cubic meters per day. “The amount should increase again to the contract level. Turkey pays its dues on time,” he added.
Turkey keeps interest rates on hold as inflation stays high
Reuters, 20.11.2014
The Turkish Central Bank kept interest rates on hold Nov. 20 and said it would keep monetary policy tight until the inflation outlook improves significantly. The Central Bank is battling to rein in inflation even as the economy slows and conflict rages in neighboring countries.
The Bank left its one-week repo rate at 8.25 percent, as forecast by 15 of 16 economists polled by Reuters. One economist had expected a cut in the main rate to 7.75 percent. After hiking rates sharply in January to halt a slide in the lira, the central bank cut rates from May to August before leaving them on hold in September and October.
The government slashed its growth estimates and raised its inflation forecast for 2014 and 2015 last month, citing unfavorable conditions in the global economy. Businesses and economists now expect a year-end consumer price inflation rate of 9.22 percent, a Central Bank survey showed last week, creeping up from its previous poll and well above its target rate of 5.3 percent. "With the country's inflation rate standing at just below 9 percent in October, the highest rate among the leading emerging markets, the gap between the current level of consumer prices and the central bank's target is embarrassingly wide," said Nicholas Spiro of Spiro Sovereign Strategy in London. "If the [Central Bank] continues to keep rates on hold until inflation starts to meaningfully decline ... it will win back credibility in the eyes of investors," he said. The Central Bank kept its overnight lending rate at 11.25 percent, its primary dealers' overnight borrowing rate at 10.75 percent and its overnight borrowing rate at 7.50 percent. Two economists had forecast a 75 basis-point cut in the overnight borrowing rate and two predicted cuts in the overnight lending rate.
Turkey’s leading business group has noted its worry over the divergence in perception between Turkey and the European Union, adding that there was “no alternative” to joining the bloc to guarantee a secular and pluralist democracy.
“We are worried about the separation of perceptions between Turkey and the EU. Turkey’s soft power and active foreign policy in its region and toward its neighbors in the early 2000s stemmed from its secular democratic system and its EU membership perspective. Turkey’s rising power status that provided high economic growth came from its harmony with EU values.
Therefore, on the path of a secular and pluralist democracy, there is no tangible and serious alternative to its membership perspective,” said Turkish Industrialists and Businessmen Association (TUSIAD) President Haluk Dincer at the “Turkey – EU Relations: Scenarios for the future” conference organized by TUSIAD and the Brookings Institute Nov. 20 in Istanbul. Dincer said that although many years have passed in Turkey-EU relations, no tangible calendar has ever been set, stressing that accession negotiations should be revived. He listed a number of factors that have affected accession negotiations, placing the decline of the EU’s enlargement perspective after the euro crisis and the EU commission’s failure to foresee any enlargement in the coming five years as the most important factor that could affect both parties. Dincer said the future of the Cyprus negotiations, which also involved international actors, was a determining factor, and emphasized that Turkey “should display a more constructive attitude.”
Claiming that there had recently been a regression in terms of Turkey’s democratic environment, political criteria, state of law, and freedom of speech, Dincer said Chapters 23 and 24 of the accession process - concerning the judiciary, fundamental rights, freedom and security - should be immediately opened. Agreements on readmission and visa facilitation had a positive effect on the public as part of EU-Turkey negotiations, Dincer said, adding that efforts to lift the “political blockades” of some EU countries could also contribute to negotiations. The TUSIAD head said Turkey should focus on fulfilling the preconditions of the chapters that do not have such political barriers ahead of them. Dincer also emphasized that TUSIAD “did not agree with the understanding that democracy, economic growth and prosperity is a zero-sum game.” “It is a global fact that welfare and high amounts of consumption are not sustainable in the long-run without democracy,” he said.
Turkey attracted a mere $160 million in foreign direct investments (FDI) from the U.S. in the first nine months of 2014, the president of the Union of Chambers and Commodity Exchanges of Turkey (TOBB) has said, in a ceremony to mark the opening of the Istanbul office of the U.S. Chamber of Commerce.
The figure amounts to only 3 percent of Turkey’s overall FDI of $5.6 billion, TOBB head Rifat Hisarciklioglu said on Nov. 18. The U.S. Chamber of Commerce has opened its regional office in Istanbul to boost economic ties with Turkey and other countries in the region.
Hisarciklioglu said ongoing efforts to make Turkey the center for global corporations in the region would be strengthened thanks to the office. “The U.S. will not be 10,000 km away anymore, it will be at the heart of the city,” he said. Turkish Development Minister Cevdet Yilmaz, who was also present at the ceremony, called for a greater trade volume between the two countries. The U.S. is currently the seventh top recipient of Turkish exports and the fifth highest exporter of goods to Turkey. Yilmaz said U.S. direct investment in Turkey only amounted to $9 billion of the total $137 billion FDI in Turkey since 2002, adding that he expected the proportion to increase.
The new office is located in the Istanbul headquarters of the Turkish Chambers and Stock Markets Union, in the Levent district of the city. The vice president of the U.S. Chamber of Commerce, Myron Brilliant, said the Istanbul regional office was only the chamber’s second such office to be opened in the world. The other office, in Brussels, was opened several years ago. Brilliant added that Turkey was picked for its location as a hub between Europe, the Middle East and Africa, which provides many opportunities for businessmen. “Think about the places where we do business: China, India, Brazil, Russia. We have chosen Turkey for a significant reason,” said Brilliant in his speech.
Iran says nuclear deal ‘possible’ in last-ditch round
Agence France-Presse, 18.11.2014
Iran’s foreign minister insisted on Tuesday 18 November that a nuclear deal remained possible as he arrived for a final round of talks with world powers, with differences still wide just six days before a deadline to strike an agreement between them.
But Mohammad Javad Zarif warned that an accord would only happen if the other side -- the five permanent members of the UN Security Council plus Germany -- refrained from making “excessive demands”. “A deal is still possible,” Zarif was quoted by Iranian media as saying after landing at Vienna airport.
“If, because of excessive demands by the other side, we don’t get a result, then the world will understand that the Islamic Republic sought a solution, a compromise and a constructive agreement and that it will not renounce its rights and the greatness of the nation.” The mammoth accord being sought by Monday’s deadline, after months of negotiations, is aimed at easing fears that Tehran might develop nuclear weapons under the guise of civilian activities. It could consign to history a 12-year standoff over Iran’s atomic programme, silence talk of war and help normalise Iran’s relations with the West after 35 years of mistrust and antagonism. It could also boost Iran’s economy, improve the lives of ordinary Iranians and mark a rare foreign policy success for US President Barack Obama, five years after he offered Tehran an “outstretched hand”.
US and Iranian negotiators are under domestic pressure not to give too much away, however, while Israel, assumed to be the Middle East’s sole nuclear-armed power, and others in the volatile region are sceptical. In order to make it virtually impossible for Iran to assemble a nuclear weapon, the US, China, Russia, Britain, France and Germany (the P5+1) want Iran to scale down and put limits on its nuclear programme. Iran, which says its nuclear aims are exclusively peaceful, wants painful sanctions lifted and a recognition of its “right” to a peaceful nuclear programme. Some areas in what would be a highly complex agreement appear provisionally sewn up, like altering a reactor being built at Arak, a different use for the Fordo facility under a mountain to protect it from air attack and more inspections.
But the big problem remains enrichment, which renders uranium suitable for power generation and making nuclear medicines but also for a weapon. At present Iran could use its existing infrastructure to produce enough weapons-grade uranium for one bomb in a few months, although any such “breakout” attempt would be detected very quickly. And Iran wants to ramp up massively the number of enrichment centrifuges in order, it says, to make reactor fuel. The West wants them slashed, saying Iran has no such need at present, while seeking to extend the “breakout” period to at least a year. Other thorny issues are the duration of the accord and the pace at which sanctions are lifted, an area where Iranian expectations are “excessive”, one Western diplomat said.
"We still have gaps to close and do not yet know if we will be able to do so," a senior US official warned late Monday. Given the differences, many analysts expect more time to be put on the clock. "There is virtually no possibility that a complete deal will be concluded by November 24," former top US diplomat Robert Einhorn, now an expert with the Brookings Institution, told AFP, predicting another extension of "several more months". And the alternative, walking away, would be "catastrophic," Arms Control Association analyst Kelsey Davenport said. "Given the political capital that both sides have invested ... it would be foolish to walk away from the talks and throw away this historic opportunity," Davenport told AFP. For now though, with another extension presenting risks of its own, not least fresh US sanctions, officials insist that they remain focused on the deadline. "An extension is not and has not been a subject of conversation at this point," the senior US official said. Zarif was due to hold a working lunch in the Austrian capital Tuesday with the powers' lead negotiator, former EU foreign policy chief Catherine Ashton, before the talks begin in earnest later.
Jerusalem synagogue attack puts Israel close to brink
The Guardian, 19.11.2014
Five Israelis were killed in a frenzied assault by two Palestinians who targeted worshippers at a Jerusalem synagogue, the latest in a series of deadly attacks that many fear is pushing the city to the edge of a dangerous escalation in violence. Four of the people killed were rabbis, three of them holding joint US citizenship, one with dual British citizenship.
The fifth victim was an Israeli policeman, who succumbed to his injuries late on Tuesday night. The attack was greeted by international condemnation, and Israel's Prime Minister, Binyamin Netanyahu, vowed to "respond harshly".
He described the attack as a "cruel murder of Jews who came to pray and were killed by despicable murderers". The two assailants, cousins Ghassan and Uday Abu Jamal, attacked the worshippers with meat cleavers and a gun during early-morning prayers before they were killed by police officers. The circumstances of the incident have added to the sense of crisis in Jerusalem. Witnesses described a chaotic and bloody scene inside the synagogue as police and the attackers engaged in a shootout at the building's entrance. Photographs distributed by Israeli authorities showed a man in a prayer shawl lying dead, a bloodied butcher's cleaver on the floor and prayer books covered in blood. Many in Israel have been alarmed by the religious dimension to the killings. Violence in Jerusalem, areas of Israel and the Israeli-occupied Palestinian territories, has surged in recent months, exacerbated by tensions over a key holy site revered by Muslims as the Noble Sanctuary and Jews as the Temple Mount.
Prominent among those who condemned the killings were the US president, Barack Obama, and the British Prime Minister, David Cameron. Denouncing it as a "horrific attack", Obama told reporters at the White House: "Tragically, this is not the first loss of lives that we have seen in recent months. Too many Israelis have died, too many Palestinians have died." In an evening press conference Netanyahu once again accused the Palestinian president, Mahmoud Abbas, of stirring tension in Jerusalem, and called on the international community to express its outrage. In the immediate aftermath of the attack Netanyahu ordered the demolition of the homes of the two attackers. Other measures reported to be under consideration by the public security minister, Yitzhak Aharonovitch, were the loosening of firearms regulations to allow security personnel to carry guns off duty and the reported establishment of security checks on those leaving Palestinian neighbourhoods of the city.
The US consulate in Jerusalem identified the dead Americans as Aryeh Kupinsky, Kalman Ze'ev Levine, and Moshe Twersky. Israeli authorities said the British man killed was Avraham Goldberg, 68, who had immigrated to Israel in the 1990s. The policeman who was killed was Zidan Saif. The four rabbis were buried on Tuesday afternoon in funerals attended by several thousand people and by senior political figures. Relatives in the east Jerusalem neighborhood of Jabal Mukaber later said the attackers were the cousins Ghassan and Uday Abu Jamal, who burst into the Kehillat Bnei Torah synagogue in Har Nof. Israeli media reported that one of the two assailants had worked in a supermarket in the area. Mahmoud Abbas, the Palestinian president, quickly condemned the killings. "We condemn the killing of civilians from any side," he said in a statement. "We condemn the killings of worshippers at the synagogue in Jerusalem and condemn acts of violence no matter their source."
But Hamas, the militant Palestinian group that runs the Gaza Strip, praised the attack. In Gaza, dozens of people took to the streets to celebrate, with some offering trays full of sweets. The Popular Front for the Liberation of Palestine, a small militant group, said the cousins were among their members, though it did not say whether it had instructed them to carry out the attack. Speaking to journalists at the scene, Jerusalem's mayor, Nir Barkat, expressed shock at the brutality of the attack. "To slaughter innocent people while they pray ... it's insane," he said.
In a bleak assessment of the wave of violence, the Israeli justice minister, Tzipi Livni, told Army Radio that she had long feared that a religious war was developing. "And a religious war cannot be solved." In Jabal Mukaber relatives of the two attackers offered theories about the motives for the attack, with some linking it to the death of a Palestinian bus driver found hanged behind his bus, described by Israeli authorities as a suicide but widely believed by many Palestinians to have been a lynching. Other family members, however, blamed recent friction at the Temple Mount which has been blamed for a rash of deadly violence and clashes. A cousin of the men, Sufian Abu Jamal, a construction worker aged 40, described the attack as a "heroic act and the normal reaction of what has been happening to Palestinians in Jerusalem and at the al-Aqsa mosque". At the house of Uday, "Abu Salah", an uncle of one of the men, said his relatives had been made angry by what they had seen on Facebook and television news reports. "It was a situation ripe for an explosion and that is what happened."
The attack was the latest in a series of deadly assaults. Five Israelis and a foreign visitor have been deliberately run over and killed or stabbed to death by Palestinians. About a dozen Palestinians have also been killed, including those accused of carrying out those attacks. Residents trace the recent violence in Jerusalem to July when a Palestinian teenager was burned to death by Jewish assailants, an alleged revenge attack for the abduction and killing of three Jewish teens by Palestinian militants in the occupied West Bank. The US secretary of state, John Kerry, said the attack was “a pure result of incitement”. In an emotional statement in London, Kerry added: “Innocent people who had come to worship died in the sanctuary of a synagogue. They were hatcheted, hacked and murdered in that holy place in an act of pure terror and senseless brutality and murder.”
IMF approves 1-bn-euro stand-by loan for Serbia
Agence France-Presse, 20.11.2014
The International Monetary Fund said on Thursday November 20 it had approved a new stand-by loan for Serbia worth around one billion euros ($1.25 billion) to help it achieve economic reforms.
“The government’s economic programme will be supported by a 36-month precautionary stand-by arrangement. The overall size would be around one billion euros,” Zuzana Murgasova, head of an IMF mission that visited the Balkan country for the past two weeks, told reporters. However, the stand-by loan is yet to be approved by the IMF’s executive board, she added.
Serbia has agreed to carry out a comprehensive programme of economic recovery, composed of short-term fiscal consolidation measures and structural reforms, said Serbian Finance Minister Dusan Vujovic. The aim of the programme is to reduce the budget deficit to 4.25 percent and save some 1.3 billion euros by 2016, Vujovic added without giving further details. The deal is to take effect on January 1, he said. “This is an important day for Serbia,” Prime Minister Aleksandar Vucic said, hailing the deal. The talks with the IMF came as the Serbian Central Bank (NBS) said on Wednesday that the country’s economy would contract by 2.0 percent this year. Earlier the government had forecast a 1.0 percent contraction. The deal was reached after Serbia had taken various measures to reduce its high fiscal deficit, including a 10-percent cut of pensions and public sector monthly wages above 200 euros ($250).
The adopted measures also included the privatisation of some 500 loss-making state-owned companies by the end of 2015 that cost up to 600 million euros per year in subsidies. In addition Serbia has also adopted a new labour law to cut some job protections and raise the retirement age for women to 65. Serbia, which began negotiations to join the European Union in January, is expected to report a record budget deficit of 8.0 percent of gross domestic product (GDP) this year.
In a country of 7.2 million people, more than 700,000 are employed in the public sector while 1.7 million are pensioners. The unemployment rate is around 17 percent and has been reduced by three percent in the last six months, Vujovic said. Most people with jobs struggle to live on an average monthly salary of 350 euros ($444). The IMF had frozen a billion-euro loan ($1.3 billion) in 2012 due to the Serbian government’s inability to meet its terms.
NATO warns on Russian troops amid call to honour Ukraine peace plan
Agence France-Presse, 18.11.2014
NATO warned on Nov. 18 of a “very serious” build-up of Russian soldiers and weapons inside Ukraine and on its border as Germany’s foreign minister urged Kyiv and Moscow to respect a tattered peace plan. The West is keeping up pressure on Russia over Ukraine following a bad-tempered G20 summit in Australia at the weekend which Russian President Vladimir Putin left early.
In Brussels, NATO’s head Jens Stoltenberg issued a warning to Moscow over the seven-month conflict in Ukraine’s east which has killed over 4,100 people and plunged relations between the West and Russia to a post-Cold War low.
Stoltenberg said there was a “very serious build-up” of troops, artillery and air defence systems inside Ukraine and on the Russian side of the border as he arrived to meet European Union defence ministers in Brussels. “Russia has a choice. Russia can continue on a path of isolation,” Stoltenberg said. “The international community calls on Russia to be part of the solution.” At the same time, German Foreign Minister Frank-Walter Steinmeier met Ukraine’s pro-Western leaders, before crunch talks later Tuesday in Moscow with Russian counterpart Sergey Lavrov. The meeting will be the first by a senior European minister since July. Germany is playing the lead role in mediating the crisis with Russia. Steinmeier said the peace agreements reached in Belarus in September, including a frequently violated ceasefire, “were not perfect but they do form a basis. We have to fulfil the agreements.”
Following a meeting with Berlin’s top diplomat, Ukraine’s Prime Minister Arseniy Yatsenyuk repeated calls for US-backed negotiations with Russia on a “neutral territory”. But Moscow quickly told Kyiv that it needs to deal directly with pro-Russian rebels in the region instead. “The authorities in Kyiv do not need to hold talks with Moscow but with representatives of southeast Ukraine,” deputy Russian foreign minister Grigory Karasin told Interfax. As the unrest in eastern Ukraine drags on into the ex-Soviet state’s harsh winter, Ukraine’s military said Tuesday that fresh clashes over the past 24 hours between government forces and rebels killed six of its soldiers. The latest deaths came despite the nominal truce that has halted fighting along much of the frontline but failed to stop bombardments at key flashpoints. Russia rejects claims that it provides military backing for the heavily armed separatist rebels in the east. It also denies that it supplied the anti-aircraft missile which downed Malaysia Airlines flight MH17 in eastern Ukraine in July, killing 298 people, an incident which sharpened the West’s focus on the unrest.
As the race to defuse the conflict steps up, the European Union on Monday agreed to blacklist more Kremlin-backed rebels in Ukraine. However, it stopped short of fresh sanctions against Moscow, saying there was hope of restarting dialogue. New European Union diplomatic chief Federica Mogherini said foreign ministers meeting in Brussels had raised the possibility of her visiting Moscow to “re-engage in a dialogue” in search of a solution. Ahead of Steinmeier’s visit to Moscow later Tuesday, Russian Foreign Minister Sergey Lavrov said his government hoped “that the ‘point of no return’ has not yet been crossed” in Russia-Europe relations.
The comment came as Russia engaged with Germany and Poland in a tit-for-tat series of expulsions of diplomats which has further heightened tensions between the 28-nation EU and its vast eastern neighbour. In unusually strong remarks in Australia on Monday, German Chancellor Angela Merkel vowed the Kremlin “will not prevail”. She called on Western leaders not to lose hope in what may be a long struggle with Russia over Ukraine. Ukraine has urged Brussels to go further to send a clear message to Moscow. During the meeting with Steinmeier, Yatsenyuk lashed out at Russia, insisting that the September peace agreement “is being observed by Ukraine and blatantly violated by the Russian side”.
Ukraine protests one year on: ‘Of course we haven’t achieved our goals’
The Telegraph, 21.11.2014
It was on a freezing November Thursday when up to 1,000 people headed to Kiev’s Independence Square. They brought with them Ukrainian and European flags, a small stage and sound system, and a furious sense of betrayal at President Viktor Yanukovych’s refusal to sign up to a European integration deal.
Few realised it at the time, but the rainy night of November 21, 2013, marked the beginning of one of the most remarkable revolutionary movements in recent times. But many of the revolutionaries who flocked to the square in November and December last year are in a despondent mood.
“No, of course we haven’t achieved our goals,” said Sergei Leshchenko, a journalist turned activist who was one of dozens of former revolutionaries elected to parliament last month. “In fact we still have several steps to go to get there.” In one sense, the EuroMaidan protests that began that night were a runaway success. Viktor Yanukovych is long gone, overthrown by an eruption of popular anger, and Ukraine is on a firmly pro-European course under a president and prime minister who are, at least on paper, committed to far reaching reforms. But that, said Mr Leshchenko, was only half the point. “The goal was not to get rid of Yanukovych,” he said. “The main idea of Maidan was a country without corruption, a country without oligarchs, a country without the old order of politicians.”
European integration was the means by which many Ukrainians hoped to achieve that goal - and until a few hours before the protesters began to gather on that Thursday evening, it had seemed inevitable. "Everything was geared towards signing that deal for months," said one senior Ukrainian diplomat who was working in the foreign ministry at the time. "Senior ministers were hiring people to teach them about the EU and how it works. And then one day the order came down 'OK, we're not doing that any more'." The official reason was to take more time to examine the impact of the agreement on trade relations with Russia. Moscow's opposition to the association agreement had been building since the summer, with the Kremlin openly warning that it may be compelled to tighten trade restrictions with Ukraine in a bid to protect its own domestic producers from a tide of cheap European goods. But those close to the process believe it was only when Vladimir Putin spoke to Mr Yanukovych directly, just a few weeks before the deal was due to be signed, that the Ukrainian president decided to shelve the project.
"There are two big questions: what did Mr Putin say that finally made Yanukovych change his mind about the association deal; and why did he suddenly flee in February?" said the diplomat. "To be honest, I'm not sure we will ever know the answers." In retrospect, that first U-turn needn't have been the end of Mr Yanukovych's career, said Mr Leshchenko. "I still think if he had signed the association agreement in the early days, he would be president now," he said. "In fact, right up to early December, when protests were galvanised by police violently evicting students from the square, he could probably have fired the interior minister and survived." Why Mr Yanukovych declined to opt for any of these solutions remains a mystery. Instead, he made a series of erratic decisions, wavering between halfhearted police crackdowns that fuelled anger without clearing the square, and calls for conciliation without offering any concessions. For reasons still unclear, on February 22 he fled the capital immediately after signing an agreement that would have kept him in power until December.
At least 17 police officers and over 100 protesters were dead, and central Kiev was a blackened wreck, but the revolutionaries had prevailed. Today, Independence Square is once again the pristine, bustling heart of Kiev's shopping district. The cobbles have been relaid, the pavements scrubbed clean of the ash of burnt tires, and the gutted remains of the Trade Union building, once the protesters' headquarters, is hidden behind an advertising hoarding celebrating Ukrainian unity. The transformation is disorientating. At a bakery near the Kozatsky hotel, customers munch pancakes at tables on which volunteer surgeons pulled shrapnel and bullets from protesters' legs. The pavement outside, where the bodies of the snipers' victims the surgeons could not save were laid in February, is bare. But while the barricades have vanished, unanswered questions remain. "No riot police have been jailed for the sniper killings in February. No corrupt politician has been imprisoned, Yanukovych and his entourage still avoid prosecution," said Mr Leshchenko.
"It is sad and depressing that we have not punished politicians responsible for corruption and bloodshed. We need that to set the precedent for change," he added. Throughout last winter, the established politicians were following, rather than leading, the crowd. The first demonstrations on November 21 were organised independently via Facebook by people unwilling to wait for the weekend protests called by mainstream opposition politicians, including Mr Yatsenyuk. And when the three "political" opposition leaders finally signed the February 21 agreement with Mr Yanukovych, the rank and file on the streets refused to accept it. But in the post-revolutionary government the conventional politicians have the initiative once again, and, some former activists suspect, represent the interests of the very establishment they set out to overturn.
Petro Poroshenko, the new president, was an early backer of the revolution and an avowed pro-European. But he is both an oligarch and a political insider who worked with previous presidents including Mr Yanukovych. Mr Yatsenyuk, the prime minister, was one of the triumvirate of established party leaders who assumed leadership of the protest movement last winter, but who were always viewed with deep scepticism by many in the crowd. With the election of a new parliament last month, of which he himself is a new member, Mr Leshchenko is hopeful deeper reforms can get underway. “After the elections there is no longer any excuse not to take the next steps,” he said.
“As long as there is pressure in society for this change, I hope it will stop the politicians from getting lazy.” But it will be tough going. With intense negotiations about the formation of a coalition government ongoing, some fear the 2014 revolution could succumb to the infighting that discredited the Orange revolution of 2004-5. And the last thing Ukraine can afford now is a government paralysed by infighting. The Hryvnia, Ukraine’s currency, has lost half of its value against the dollar since February. The threat of default on foreign currency debt is growing. And while Russia recently restored gas supplies under a European-brokered agreement, Ukraine is reliant on outside assistance to pay its vast gas bills.
Meanwhile, in Kiev’s military cemeteries, fresh graves are appearing alongside those of the dead of previous wars with each passing week. The bloody secessionist uprising in eastern Ukraine has killed more than 4,000 people since April and deprived Ukraine of some of its most productive industrial assets. Amid reports of a massive separatist troop buildup, Mr Yatsenyuk announced on Friday that the government’s priority was equipping the army to repel possible offensives by the Russian-backed rebels who have seized control of a swathe of territory in the east of the country. No one knows yet whether the vast convoys of military supplies seen travelling through rebel territory are a show of force to discourage Kiev from trying to win land by force, or part of preparations for an upcoming winter offensive. As we reach the anniversary of the first Maidan protests, Ukraine’s future is still uncertain.
China surprises with interest rate cut to spur growth
Reuters, 21.11.2014
China cut interest rates unexpectedly on Friday, stepping up a campaign to prop up growth in the world's second-largest economy as it heads towards its slowest growth in nearly a quarter century.
The cut - the first such move in over two years - came as factory growth has stalled and the property market, long a pillar of growth, has remained weak, dragging on broader activity and curbing demand for everything from furniture to cement and steel. "It's a surprise, another Friday night special," said Mark Williams, Chief Asia Economist with Capital Economics in London.
"It may not have a major impact on GDP growth - that depends on if policy makers also allow the rate of credit growth to pick up." The People's Bank of China said it was cutting one-year benchmark lending rates by 40 basis points to 5.6 percent. It lowered one-year benchmark deposit rates by less - just 25 basis points. The changes take effect from Saturday. The central bank also took a step to free up deposit rates, allowing banks to pay depositors 1.2 times the benchmark level, up from 1.1 times previously. "They are cutting rates and liberalising rates at the same time so that the stimulus won't be so damaging," said Li Huiyong, an economist at Shenyin and Wanguo Securities. Recent data showed bank lending tumbled in October and money supply growth cooled, raising fears of a sharper economic slowdown and prompting calls for more stimulus measures, including cutting interest rates. But many analysts had expected the central bank to hold off on cutting interest rates for now, as it has opted instead for measures like fiscal spending, as it also tries to balance the need to reform the economy.
Chinese leaders have also repeatedly stressed they would tolerate somewhat slower growth as long as the jobs market remained resilient, even as they rolled out a series of more modest stimulus measures this year. The risks faced by China's economy are not that scary and the government is confident it can head off the dangers, President Xi Jinping told global business leaders earlier this month, seeking to dispel worries about the world's economy. In a speech to chief executives at the Asia Pacific Economic Cooperation (APEC) CEO Summit, Xi said even if China's economy were to grow 7 percent, that would still rank it at the forefront of the world's economies.
President Barack Obama has ordered a comprehensive review of U.S. policy governing efforts to free Americans being held by militant groups overseas, the White House said on Nov. 17.
In recent months, Islamic State of Iraq and the Levant (ISIL) militants have beheaded three Americans, including Peter Kassig, an aid worker and former U.S. Army Ranger. “The administration’s goal has always been to use every appropriate resource within the bounds of the law to assist families to bring their loved ones home,” White House National Security Council spokesman Alistair Baskey said.
“In light of the increasing number of U.S. citizens taken hostage by terrorist groups overseas and the extraordinary nature of recent hostage cases,” added Baskey, “this summer President Obama directed relevant departments and agencies, including the Departments of Defense and State, the FBI, and the Intelligence Community, to conduct a comprehensive review of how the U.S. government addresses these matters.” The administration could not detail all the steps it was taking to free U.S. hostages, but Baskey said “we will continue to bring all appropriate military, intelligence, law enforcement, and diplomatic capabilities to bear to recover American hostages.
Those efforts continue every day.” ABC News reported that a Pentagon official wrote last week to U.S. Representative Duncan Hunter that the review would include an emphasis “on examining family engagement, intelligence collection, and diplomatic engagement policies.” It added that a Nov. 11 letter to Hunter from Christine Wormuth, undersecretary of defense for policy, did not explicitly address the issue of ransom payments, which it is U.S. policy not to pay. ABC News said Hunter wrote the White House in August after the beheading of U.S. journalist James Foley by ISIL, urging Obama “to guarantee we are maximizing our recovery efforts.” ISIL previously killed U.S. journalist Steven Sotloff and British aid workers David Haines and Alan Henning.
Barack Obama used a heartfelt televised address to the nation on Thursday to explain his decision to enact sweeping immigration reforms that will shield from deportation almost five million people currently living in the country illegally.
The president unveiled controversial executive action that will make millions of undocumented migrants eligible to live and work in what he described as “a nation of immigrants”. He urged America to show compassion to newcomers who entered the country illegally but have worked hard and put down roots, yet still “see little option but to remain in the shadows or risk their families being torn apart”.
“Are we a nation that tolerates the hypocrisy of a system where workers who pick our fruit and make our beds never have a chance to get right with the law?” he asked. “Are we a nation that accepts the cruelty of ripping children from their parents’ arms?” The address was a passionate and unapologetic attempt by the president to explain one of the boldest and most contentious decisions of his six-year presidency. Unless major immigration legislation is passed before 2016, Obama’s decision almost certainly means immigration will be a central issue for candidates in the next presidential election. It is an especially toxic issue for Republicans, who are united in opposition to Obama’s action but bitterly divided over how to deal with the millions of undocumented migrants in the country. Leaders have said that not acting risks the party’s long term future, but the conservative base has consistently opposed any reform that includes a path to citizenship for those who enter the country illegally.
Furious Republicans equate Obama’s decision to an “amnesty” for undocumented migrants, and are planning measures to counter it when they assume control of both houses of Congress in January. “If President Obama acts in defiance of the people and imposes his will on the country, Congress will act,” the incoming Republican Senate majority leader, Mitch McConnell, said on the eve of the president’s remarks. “We’re considering a variety of actions. But make no mistake, when the newly elected representatives of the people take their seats, they will act.” Obama’s action combines increased resources for border security and a direction to the Department of Homeland Security, which oversees border and immigration issues, to adopt a policy of removing “felons, not families”. But the most far-reaching aspect of the executive action is the creation of a new “deferred action” program that will benefit the estimated 3.7 million undocumented immigrants who are the parents of US citizens or permanent legal residents.
Those who have been in the country for more than five years, pass a criminal background check, pay taxes and submit biometric data will receive deportation relief and can apply to work. Obama is also expanding his 2012 deferred action against childhood arrivals (DACA) order, which benefited young people brought to the country illegally as children, who are known as DREAMers. Unlike the
previous order, which applied only to young people of a certain age brought by their parents before 2007, the new DACA program will be expanded to apply to all undocumented migrants, regardless of age, who were brought to the country as children illegally before 2010. The White House is bracing itself for a political storm over the wisdom, legality and fairness of the president's decision, which will inflame an already hostile relationship between Obama and congressional Republicans. The president's critics immediately said they would challenge his actions, which they characterised as undemocratic and possibly unlawful.
Republicans have pointed out that these executive actions run counter to dozens of statements by Obama in recent years that appeared to indicate he did not believe he had the power to make such sweeping changes to the immigration system. The White House was careful to stress Obama was "acting within his legal authority" and made public a detailed legal opinion from the Justice Department's Office of Legal Counsel, which advised the president on the legality of his decision. In his address, Obama also challenged Republicans who are unhappy with his moves to bring legislation to replace them, while also denying that his actions are equivalent to an amnesty. "Amnesty is the immigration system we have today – millions of people who live here without paying their taxes or playing by the rules, while politicians use the issue to scare people and whip up votes at election time," he said.
Obama added: "Mass amnesty would be unfair. Mass deportation would be both impossible and contrary to our character. What I'm describing is accountability – a common-sense, middle-ground approach: if you meet the criteria, you can come out of the shadows and get right with the law." Although several presidents, including Republicans Ronald Reagan and George HW Bush, have enacted executive changes to the immigration system, none have acted unilaterally to shield so many people from deportation. Almost half of the estimated 11 million undocumented migrants living in the US illegally could benefit from the changes should they apply. The schemes protecting undocumented migrants from deportation will not apply to recent or future illegal immigrants. Those who do qualify will not receive a path to citizenship, be permitted to leave and re-enter the country or obtain subsidies under the Affordable Care Act.
And the protections will only last for three years and could be reversed by Obama's successor in the White House. If the order is rescinded by the next president or simply allowed to expire, millions who have signed up, declaring their presence in the country, could theoretically become vulnerable to deportation again. Hillary Clinton, the most prominent Democratic candidate for the White House in 2016, issued a statement in support of Obama's action. She said it was an "abdication of responsibility" by Republicans to reform the immigration system that had forced the president to bring the stop-gap measure but added "only Congress can finish the job by passing permanent bipartisan reform".
Polls indicate voters are divided over how to treat the millions – many of them Latino – living in the country illegally, and Obama used his address to attempt to persuade the country of the benefits of allowing them to stay. The president said "tracking down, rounding up and deporting millions" was unrealistic, pointing out that illegal border crossings are at a historic low and made the economic case for allowing undocumented migrants the chance to work and pay taxes. In the most emotional segment of his address, Obama's voice strained as he argued that Americans "are and always will
be a nation of immigrants”. “We were strangers once, too,” he said. “And whether our forebears were strangers who crossed the Atlantic, or the Pacific, or the Rio Grande, we are here only because this country welcomed them in, and taught them that to be an American is about something more than what we look like, or what our last names are, or how we worship.” Obama’s package of measures also includes an increase in resources to the southern border with Mexico, where there was a brief surge of unaccompanied Central American children over the summer. Additionally, the Obama administration said it would streamline the immigration court process and, in a move that will please Silicon Valley, make it easier for highly-skilled workers, graduates and entrepreneurs to obtain work visas. On Friday, the president will fly to Las Vegas to sign the measures at Del Sol high school, where he kickstarted the push for comprehensive immigration reform almost two years ago.
Five months after Obama made that speech, the Senate passed a bipartisan bill that, had it become law, would have bolstered border security and provided a path to citizenship for many of the 11 million people living in the country illegally. While supported by senior Republicans who are desperate to mend the party’s reputation among Hispanic voters, the legislative efforts stalled in the more conservative House of Representatives. “Had the House of Representatives allowed that kind of a bill a simple yes-or-no vote, it would have passed with support from both parties, and today it would be the law,” Obama said, adding that he would continue to press for a holistic legislative solution. “But until that happens, there are actions I have the legal authority to take as President – the same kinds of actions taken by Democratic and Republican presidents before me – that will help make our immigration system more fair and more just.”
Announcements & Reports
▶ How Europe can lead public-sector transformation
Source: Accenture
Weblink: http://www.accenture.com/SiteCollectionDocuments/PDF/Accenture-How-Europe-Can-Lead-Public-Sector-Transformation.pdf
▶ How to do business in Turkey
Source: Deloitte
Weblink: http://www2.deloitte.com/content/dam/Deloitte/tr/Documents/tax/tr-how-to-do-business-in-turkey-2014.pdf
▶ A new vision for growth: key trends in human capital 2014
Source: PwC
Weblink: http://www.pwc.com/en_GX/gx/hr-management-services/pdf/pwc-key-trends-in-human-capital-2014.pdf
▶ Emerging trends in real estate: the global outlook for 2014
Source: PwC
Weblink: http://www.pwc.com/gx/en/asset-management/emerging-trends-in-real-estate-global-2014/assets/pwc-emerging-trends-in-real-estate-the-global-outlook-for-2014.pdf
▶ Strengthening regional and national capacity for disaster risk management: the case of ASEAN
Source: Brookings Institution
Weblink: http://www.brookings.edu/-/media/research/files/reports/2014/11/disaster-resilience/2014disasterresilience.pdf
▶ Accelerating exports in the Middle Market
Source: Brookings Institution
Weblink: http://www.brookings.edu/-/media/research/files/reports/2014/10/middle%20market/acceleratingexportsinthemiddlemarket.pdf
▶ Russian ‘deniable’ intervention in Ukraine: how and why Russia broke the rules
Source: Chatham House
Weblink: http://www.chathamhouse.org/sites/files/chathamhouse/field/filed_publication_docs/NTA010_01Allison_1.pdf
Upcoming Events
▶ A Transatlantic Pakistan Policy
Date : 24 November 2014
Place : Washington – USA
Website : http://www.gmfus.org/archives/a-transatlantic-pakistan-policy/
▶ Europe’s Capital Markets Union
Date : 24 November 2014
Place : Brussels – Belgium
Website : http://www.bruegel.org/no/events/event-detail/view/475-europes-capital-markets-union-1/
▶ Mapping Competitiveness with European Data
Date : 28 November 2014
Place : Brussels – Belgium
Website : http://www.bruegel.org/no/events/event-detail/event/470-mapping-competitiveness-with-european-data/
▶ From De-industrialization to the future of industries
Date : 28 November 2014
Place : Brussels – Belgium
Website : http://www.bruegel.org/no/events/event-detail/event/474-from-de-industrialization-to-the-future-of-industries/
▶ 11th Asia Europe Economic Forum
Date : 05 December 2014
Place : Tokyo – Japan
Website : http://www.bruegel.org/rclevents/event-detail/view/460/
▶ 18th Middle East Iron and Steel Conference
Date : 08 December 2014
Place : Dubai – United Arab Emirates
Website : http://www.woodmac.com/public/events
▶ PONI 2014 Winter Conference
Date : 09 - 10 December 2014
Place : Washington – USA
Website : http://cais.org/event/poni-2014-winter-conference
▶ Ageing and Health: Policy-making in an Era of Longevity
Date : 09 February 2015
Place : London – United Kingdom
Website : http://www.chathamhouse.org/conferences/ageing
▶ Security and Defense
Date : 23 February 2015
Place : London – United Kingdom
Website : http://www.chathamhouse.org/Defence2015
▶ Diversifying MENA Economies
Date : 02 - 03 March 2015
Place : London – United Kingdom
Website : http://www.chathamhouse.org/conferences/MENA-Economies
▶ Creating an Effective Financial System
Date : 09 March 2015
Place : London – United Kingdom
Website : http://www.chathamhouse.org/conferences/financialsystem
▶ Innovation Forum 2015
Date : 26 March 2015
Place : Chicago – USA
Website : http://www.economist.com/events-conferences/americas/innovation-2015
DISCRIMINATING BETWEEN PROBLEMS IN LIVING
Increased interest in these problems has been accompanied by a gradual eroding of once solid boundaries between the areas of cognitive, social, personality, and clinical psychology. This has fostered exciting new approaches to the study of these traditionally clinical problems. Examples include Kuiper's social-cognitive approach to depression (e.g., Kuiper, Olinger, & MacDonald, 1982), Horowitz's prototype approach to loneliness and depression (e.g., Horowitz, French, & Anderson, 1982), Zimbardo's social skills and attributional approaches to shyness (Brodt & Zimbardo, 1981; Zimbardo, 1977) and various attributional approaches to depression, loneliness, and shyness (e.g., Anderson, 1983; Anderson & Arnoult, 1985; Anderson, Horowitz, & French, 1983; Anderson, Jennings, & Arnoult, 1988; Seligman et al., 1979). The results of these and related studies have been successfully incorporated into new therapies and have helped to explain the efficacy of more traditional therapies as well as to suggest improvements in them (see e.g., Antaki & Brewin, 1982; Bandura, 1977; Beck et al., 1979; Frieze, Bar-Tal, & Carroll, 1979).
One impediment to continued progress in understanding and treating these widespread problems is confusion about the separability and measurability of the constructs. A large number of studies suggest that the problems of loneliness, depression, shyness, and social anxiety are at least highly interrelated, if not essentially the same (e.g., Anderson, Horowitz, & French, 1983; Anderson & Arnoult, 1985; Russell, Peplau, & Cutrona, 1980; Russell, Peplau, & Ferguson, 1978). Results of a few studies suggest, however, that two of the problems, loneliness and depression, are somewhat different even though highly correlated (Anderson, Horowitz, & French, 1983; Horowitz, French, & Anderson, 1982; Weeks et al., 1980).
However, the distinction between two of the problems—shyness and social anxiety—is particularly problematic. Recent conceptual analyses point out that shyness has sometimes referred to a form of social anxiety; at other times social anxiety has been defined as one aspect of shyness (specifically, the subjective experience of apprehension and nervousness in social situations). Several other definitions and distinctions have also been made (see Leary, 1983, and Daly and McCroskey, 1984, for excellent treatments of these issues). Although the conceptual distinctions between these various problems in living seem clear and meaningful, the empirical distinctiveness of depression, loneliness, shyness, and social anxiety is still very much an open question.
A conclusive construct validation study of these four concepts would require a number of different measures of each construct, measures of other constructs with varying theoretical relations to the four target constructs, and a large and varied sample of subjects. Practically, several studies of more modest scope are a more reasonable
way to proceed. This article reports an initial attack on the problem. Briefly, subjects completed a measure of each construct. A series of confirmatory and exploratory factor analyses examined the interrelatedness and distinctiveness of the measures. In a sense, our study is best viewed as a measure validation study rather than as a construct validation study, because we examined only one measure of each construct. Thus our results speak most directly to issues concerning the particular measures we examined. Obviously, the results also address (less strongly) the construct validity issue.
METHOD
Subjects. A total of 302 undergraduates from Rice University and the University of Houston completed a questionnaire packet containing the scales for either $3 or credit toward a psychology course requirement. Each subject received a written explanation of the study after completing the questionnaire packet.
Procedure. Subjects completed a questionnaire packet containing the short form of the Beck Depression Inventory (BDI; Beck & Beck, 1972), the revised UCLA Loneliness Scale (LS; Russell, Peplau, & Cutrona, 1980), the Shyness Scale (SS; Cheek & Buss, 1981), the Social Anxiety Scale (SAS; Fenigstein, Scheier, & Buss, 1975), and three other shyness items. (Two of these were recently added to the SS, personal communication from Jonathan Cheek, 1982; the third was a 6-point self-descriptive shyness item from Zimbardo's (1977) shyness inventory.) In addition, the packet contained several measures of attributional style that are not relevant to the present study (see Anderson & Arnoult, 1985, for the attributional style results). Note that the SAS and SS have one item in common. We arbitrarily assigned it to the SAS.
RESULTS AND DISCUSSION
CONFIRMATORY ANALYSES
Confirmatory maximum likelihood factor analysis was used to examine the degree to which several different factor models could explain the obtained item correlation matrix (Bentler, 1980; Joreskog, 1969, 1978; Joreskog & Sorbom, 1981). The common factor analysis model (e.g., Thurstone, 1947) was used.
We examined results from ten different models. We first examined a null model, which holds that there were no common factors underlying the 50 items. The primary purpose of this was to allow calculation of the rho index (Bentler & Bonnett, 1980) and the PFI (James, Mulaik, & Brett, 1982) measures of model fit for the nine models of interest. Both of these fit measures can range from 0 to 1.0, with larger numbers indicating a better fit of the factor model to the data. They differ primarily in that the PFI stresses model parsimony, penalizing the user for adding free parameters that do not appreciably increase model fit. We also examined a third measure of fit, the root mean square residual (RMSR). This can be interpreted as a measure of the average residual correlation resulting from subtracting the reproduced correlation matrix (generated from the factor solution) from the actual correlation matrix. RMSR should be compared with the size of the values in the actual matrix of correlations for interpretation (i.e., small values, relative to the size of the correlations, would indicate good model fit). The results of these analyses are summarized in Table 1.
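All three indices are simple functions of the reported chi-square, df, and correlation values. The following is a minimal sketch (function names are our own; the example reuses the null-model and 1-factor values from Table 1):

```python
import numpy as np

def rho_index(chi2_null, df_null, chi2_model, df_model):
    """Rho (Bentler & Bonnett, 1980): compares the chi-square/df
    ratios of the null model and the target model."""
    r_null = chi2_null / df_null
    r_model = chi2_model / df_model
    return (r_null - r_model) / (r_null - 1.0)

def pfi(chi2_null, df_null, chi2_model, df_model):
    """Parsimonious Fit Index (James, Mulaik, & Brett, 1982): the
    normed fit index scaled by df_model/df_null, penalizing models
    that buy fit with extra free parameters."""
    nfi = (chi2_null - chi2_model) / chi2_null
    return (df_model / df_null) * nfi

def rmsr(r_obs, r_fit):
    """Root mean square residual over the off-diagonal elements of
    (observed - reproduced) correlation matrices."""
    idx = np.tril_indices_from(r_obs, k=-1)
    resid = r_obs[idx] - r_fit[idx]
    return float(np.sqrt(np.mean(resid ** 2)))

# Null-model and 1-factor values from Table 1
print(round(rho_index(6821.45, 1225, 3715.02, 1175), 2))  # 0.53
print(round(pfi(6821.45, 1225, 3715.02, 1175), 2))        # 0.44
```

Plugging in any other row of Table 1 (e.g., the 3-factor oblique model) reproduces the reported rho and PFI values to two decimals.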
Inspection of RMSR for the null model indicated relatively poor fit for this model: the value of the RMSR (.238) was quite high relative to the absolute values of the actual correlations, which ranged from .001 to .748, with median = 0.186, mean = 0.207, and SD = 0.126. This low fit is consistent with our prior expectations that the items do in fact measure one or more common factors.
Next, we examined a general one factor model to see if all the items measure the same construct. Both the rho and the PFI fits were extremely poor; the RMSR also indicated that the fit was mediocre at best.
### TABLE 1
Fit Indices for the Confirmatory Factor Analysis Models

| # | Model | Chi-square | df | rho<sup>a</sup> | RMSR<sup>b</sup> | PFI<sup>c</sup> |
|---|-------|-----------|----|-----------------|------------------|-----------------|
| 0 | Null model | 6821.45 | 1225 | | 0.24 | |
| 1 | 1-Factor | 3715.02 | 1175 | 0.53 | 0.10 | 0.44 |
| 2 | 4-Factor orthogonal | 2797.45 | 1174 | 0.70 | 0.16 | 0.56 |
| 3 | 4-Factor oblique | 2395.29 | 1169 | 0.77 | 0.07 | 0.62 |
| 4 | 3-Factor orthogonal | 2561.54 | 1175 | 0.74 | 0.14 | 0.60 |
| 5 | 3-Factor oblique | 2418.17 | 1172 | 0.77 | 0.07 | 0.62 |
| 6 | 4-Factor + method, orthogonal | 2666.46 | 1161 | 0.72 | 0.16 | 0.58 |
| 7 | 4-Factor + method, oblique | 2253.52 | 1151 | 0.79 | 0.07 | 0.63 |
| 8 | 3-Factor + method, orthogonal | 2430.95 | 1162 | 0.76 | 0.14 | 0.61 |
| 9 | 3-Factor + method, oblique | 2278.39 | 1156 | 0.79 | 0.07 | 0.63 |
<sup>a</sup>Comparison of the chi-square/df ratio for each model with the chi-square/df ratio for the null model.
<sup>b</sup>Root mean square residual.
<sup>c</sup>Parsimonious Fit Index.
The next two models represented the *a priori* four factor model (i.e., shyness, social anxiety, loneliness, and depression). One version forced the factors to be orthogonal (uncorrelated), whereas the other provided for oblique (correlated) factors. Although considerably better than the fits of the null and one factor models, the fit indices of the four factor models were somewhat low, especially the orthogonal version. Indeed, its RMSR value was actually worse than the one factor model's, though the corresponding *rho* and PFI fits were much better. Inspection of the factor correlation matrix revealed a pattern of moderate intercorrelation among the four factors, with one extremely high factor correlation (*r* = 0.92) between the social anxiety and shyness scales.
These results indicated that: (1) the grouping of items into the four scales did not satisfactorily account for the item correlations, (2) the four factors formed by this grouping were clearly nonorthogonal, and (3) the distinction between the social anxiety and shyness constructs was artificial (i.e., they measure the same thing).
In view of the very high correlation between the social anxiety and shyness factors, an alternative to the *a priori* 4-factor model was developed that combined these two constructs into a single factor (shyness/social anxiety). Two 3-factor models (orthogonal and oblique) were tested next. If the 4-factor models are more accurate representations of the data, then the 3-factor models should yield considerably poorer fit indices. The 3-factor models actually yielded several slight improvements over the 4-factor models. Overall, the assessment of these models was similar to that for the *a priori* models: (1) the oblique factor solution seemed to be the most reasonable, and (2) the moderate degree of overall model fit suggested that further improvements in the factor model could still be sought. However, the fact that the best-fitting (oblique) 3- and 4-factor models were virtually identical across the various indices of fit indicates that the 3-factor model is superior to the 4-factor model, because the elimination of the distinction between social anxiety and shyness had no impact on the ability of the model to fit the data. (Note that because we did not have *a priori* reasons to test the 3-factor model, it is important that these findings be replicated.)
We attempted to improve the factor model fits by adding a method factor to account for unwanted variation produced by some items being worded in a negative fashion and reverse scored (e.g., see Harvey, Billings, & Nilan, 1985). Four such models were examined (3- and 4-factor models with orthogonal and oblique criteria). Briefly, there was no evidence of a method factor.
No further modifications to the initial factor solution to improve its fit could be derived by rational means; however, we judged the fit of even the best model (3-factor, oblique) to be inadequate. Although statistical significance tests or other such interpretive aids are not currently available for *rho*, PFI, and RMSR, rough "rules of thumb" are commonly used. For *rho*, values less than approximately 0.90 are typically viewed as inadequate; the 3-factor oblique *rho* was .77. Therefore, an exploratory factor analysis was used in an attempt to discover clues as to why the model fit only moderately well. First, it could be the case that an alternative grouping of test items into constructs other than those of the test developers (i.e., a different factor structure) is more appropriate. Second, improperly classified test items (i.e., items with relationships with constructs other than the *a priori* one) might be discovered. Third, undesirably complex items (i.e., items with loadings on multiple factors) might be identified.
**EXPLORATORY ANALYSES**
The 50 variable correlation matrix was analyzed using principal axes common factor analysis. Squared multiple correlations (SMCs) were used as the estimates of communality for the items. Examination of eigenvalues strongly suggested a 3-factor solution.
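Since the SMC of each item with the remaining items can be read directly off the inverse of the correlation matrix (SMC_j = 1 − 1/[R⁻¹]_jj), the communality estimates are easy to compute. A minimal sketch, using a made-up 3 × 3 correlation matrix rather than the study's actual 50-item matrix:

```python
import numpy as np

def smc(R):
    """Squared multiple correlation of each variable with all the
    others: SMC_j = 1 - 1/diag(inv(R)). A standard initial
    communality estimate for principal axes factoring."""
    return 1.0 - 1.0 / np.diag(np.linalg.inv(np.asarray(R, dtype=float)))

# Illustrative equicorrelated matrix (r = .50 among all three items)
R = [[1.0, 0.5, 0.5],
     [0.5, 1.0, 0.5],
     [0.5, 0.5, 1.0]]
print(smc(R))  # each SMC = 1/3
```

For an equicorrelated matrix like this one, the SMCs can also be checked analytically: with k = 3 and r = .5, each SMC equals 1 − (1 − r)(1 + (k − 1)r)/(1 + (k − 2)r) = 1/3.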
The 3-factor exploratory factor pattern was striking in terms of the clarity of the solution. Three strong factors emerged, which were characterized by both high loadings on the factors and a relative infrequency of items loading significantly on more than one factor. Two of the factors appeared as predicted on the basis of the original scale composition. Factor 1 was the loneliness factor, with nontrivial loadings by all of the LS items; factor 3 was the depression factor, consisting of all the BDI items. Contrary to the initial conceptualization, factor 2 combined the original social anxiety and shyness items into one grouping.
The main purpose of the exploratory analysis was to identify potential causes of the modest fit for the confirmatory models. There were several items that had nontrivial loadings on a factor other than the *a priori* one. The most serious of these cases were items LS-4 ("I do not feel alone") and LS-9 ("I am an outgoing person"), which had stronger loadings on the depression and the shyness/social anxiety factors, respectively, than on the predicted loneliness factor. In order to evaluate objectively the relative magnitudes of nonpredicted factor loadings, an index was developed, computed by dividing the squared factor loading on the predicted factor by the highest squared nonpredicted factor loading. Inspection of the distribution of the values of the loading index suggested that items with values less than 10 were probably undesirable items, in the sense of having insufficiently high loadings on the predicted factor (relative to the highest nonpredicted loading). Based on this logic, the following items were seen to be candidates for deletion from their *a priori* scales:
BDI-7 (self-harm), BDI-10 (self-image change), LS-4 (feeling alone), LS-9 (outgoing), LS-11 (feeling left out), LS-17 (withdrawn), SS-4 (asking for information), and SS-10 (eye contact).
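The loading index is a simple ratio of squared loadings. The sketch below uses made-up loadings and the cutoff of 10 described above:

```python
def loading_index(predicted, nonpredicted):
    """Squared loading on the a priori factor divided by the highest
    squared loading on any other factor. Values below 10 flag items
    whose factor structure is insufficiently clean."""
    worst = max(abs(l) for l in nonpredicted)
    return predicted ** 2 / worst ** 2

# A clean item: loads .60 on its own factor, at most .15 elsewhere
print(loading_index(0.60, [0.15, -0.05]))  # ~16 -> retain
# A complex item: loads .45 on its own factor but .20 elsewhere
print(loading_index(0.45, [0.20, 0.10]))   # ~5.1 -> candidate for deletion
```

Note that the index is scale-free: doubling all of an item's loadings leaves it unchanged, so it isolates the relative (not absolute) cleanliness of the item.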
In order to evaluate the practical effect of deleting the suspect items, summated scale scores were computed both with and without the undesirable items. The correlations between these scales, as well as coefficient alpha estimates of scale reliability, are reported in Table 2.
Coefficient alpha internal consistency reliability estimates all reached acceptable levels. Note also that the modified scale scores were extremely similar to the total scale scores (all $rs = 0.99$). In addition, the scales exhibited moderate positive intercorrelations. As a strictly empirical matter, these data could be interpreted to suggest that it makes little difference whether or not the undesirable items identified in the factor analysis are included in the final summated scale scores. However, elimination of the undesired items did lead to reduced interscale correlations, as can be seen in Table 2. For example, the loneliness-depression correlation dropped from 0.42 to 0.35. Indeed, both the loneliness-depression and the loneliness-shyness correlations from the modified scales were significantly lower than the corresponding full scale correlations, $ts(299) > 10.99, ps < .001$. Thus the modified scales offer greater conceptual and empirical distinctiveness between the summated scores for depression, loneliness, and shyness/social anxiety.
### TABLE 2
Correlations of Summated Scale Scores
| SCALE | LONE-T$^a$ | SSAS-T$^b$ | DEPR-T$^c$ | LONE-M$^d$ | SSAS-M$^e$ | DEPR-M$^f$ |
|-------|------------|------------|------------|------------|------------|------------|
| Lone-T | 0.91$^g$ | | | | | |
| SSAS-T | 0.49 | 0.89 | | | | |
| Depr-T | 0.42 | 0.30 | 0.79 | | | |
| Lone-M | 0.99 | 0.42 | 0.37 | 0.90 | | |
| SSAS-M | 0.48 | 0.99 | 0.30 | 0.41 | 0.90 | |
| Depr-M | 0.39 | 0.29 | 0.99 | 0.35 | 0.29 | 0.78 |
$^a$Loneliness scale, using total number of items.
$^b$Shyness/social anxiety scale, using total number of items.
$^c$Depression scale, using total number of items.
$^d$Modified loneliness scale.
$^e$Modified shyness/social anxiety scale.
$^f$Modified depression scale.
$^g$Diagonal entries are coefficient alpha estimates of scale reliability.
**CONCLUSIONS**
The above results suggest several conclusions. First, these shyness and social anxiety scales could be combined into a single scale. This does not mean that, at a conceptual level, there is no distinction between the shyness and social anxiety constructs. Rather, at the measurement level, it suggests that the scales commonly used to assess shyness and social anxiety may be combined for most practical purposes. It is the task of future research to determine the generality of this conclusion in other populations of subjects and with other measures of shyness and social anxiety. Second, the three remaining constructs do not appear to be orthogonal, as both exploratory and confirmatory analyses indicated that these measures are positively intercorrelated. Third, both the confirmatory factor analyses and the subsequent exploratory analysis suggest that the modest fit of the 3-factor oblique model may be attributed to the fact that several items had loadings on both the predicted as well as on an additional unpredicted factor. It is the task of future research to examine the viability of this hypothesis in new samples of subjects.
Of course, more research on these scales and on others designed to assess these problems in living is needed before drawing firm conclusions. However, our results strongly suggest that the 3-factor conceptualization (depression, loneliness, and shyness/social anxiety) uncovered in this study may be more meaningful than the existing 4-factor one. This is certainly true for the specific measures used in this study.
Finally, researchers studying depression, loneliness, and shyness (or social anxiety) would probably be wise to use the modified scales suggested by the exploratory factor analysis. By dropping the contaminated items, the researcher will obtain measures of depression, loneliness, or shyness (social anxiety) that are as factorially pure as possible. This is especially important when testing models or examining theories of differences between these problems in living.
LIGHT SPEED TRAP AHEAD
In the early years of the 20th century, the world of science confronted for the first time the full implications of the velocity of light as an unbreakable absolute. As Einstein saw it, the choice was stark. Physics either had to relinquish absolute light or abandon absolute time and space. Since exquisitely precise tests had repeatedly confirmed the absolute speed of light in all directions, regardless of rapid motions of the source or of the observer, Einstein instead boldly relativized space and time. The orderly Cartesian grid of classical theory collapsed and gave way to the baffling elastic mazes of general relativity. Measuring rods shrank and remote atomic timepieces slowed their vibrations from ultraviolet into red as the cosmos expanded from a big bang, black holes convulsed into infinite densities at the end of time, nuclear weapons exploded on cue, and undulating space-time geodesics emerged—least path straight lines swooning curvaceously in a succulent four dimensional continuum. Think of a cosmic Madonna visible through a prism at which mass becomes infinite, lengths shrink to zero, and clocks stand still.
That’s the kind of thing we can expect today in the world of information technology as it collides with the speed of light barrier.
Don’t laugh. Physics is at the heart of this technology. Physics is at the heart of the current rush to conglomeration among the leading semiconductor companies, from Intel (INTC) and Chips & Technologies (CHPS) to National Semiconductor (NSM) and Cyrix (CYRX). Physics is at the historic center of the entire microelectronics epoch.
At the same time as the relativity revolution early this century, quantum theory unveiled the inner structure of matter and upended the Newtonian assumptions of atomic solidity and determinism. Far from an unbreakable massy solid, the atom turned out to be as vacant in proportion to the size of its nucleus as the solar system is empty in proportion to the size of the sun. For an earthbound analogy, think perhaps of the mind of Al Gore. Electrons transpired as baffling hybrid waves and particles that could not be fixed in time and space. Think of the views of Ross Perot. Photons emerged as paradoxically mass-less corpuscles of light stuck at a velocity of 186 thousand miles a second. Think of the original true to life version of Speed. Common sense departed physics, giving way to cosmic jokes. But by opening the interior of matter to manipulation, quantum theory ultimately allowed the creation of common computers on grains of sand and fiber optic lines of glass lasing photons around the globe.

The lightspeed limit requires creation of entire systems on a chip, overcoming the gap between processor speed and memory speed by putting the processor on the memory.
Now, these miraculous quantum worlds of computers and communications are meeting a challenge from the lightspeed limit analogous to the upheaval of Einstein’s physics. In much the way that absolute lightspeed overthrew the paradigm of physical time and space, so the light barrier is today subverting the established paradigms of time and space in information technology. In a siege of new alliances, buyouts and mergers, the industry is already transforming itself in relation to the luminal boundaries closing in upon it.
Once an abundant resource that enabled the miraculous pace of electronic processing and communications, the speed of light is now a crucial scarcity. It constrains the future shapes and solutions of computers and networks and puts today’s architectures in jeopardy. From the lithographic gear that inscribes the circuitry on microchips to the “buses” that link microprocessors and memories at the heart of your PC, from the geosynchronous satellites that circle the globe and transmit digital television and data, to the fiber optic networks that connect distant cities on Internet backbones, the lightspeed limit has created a crisis of information systems.
Any crisis so fundamental opens large opportunities for inventors, entrepreneurs, and investors. Today the lightspeed limit is the key to immense new markets in new microprocessor and memory architectures, in digital cameras and scanners, in wireless communications, in low earth orbit satellite systems, in network designs, and in semiconductor capital equipment.
The most obvious and immediate manifestation of the crisis comes in the relationship between microprocessor speeds and memory access times. For the past twenty years, microprocessor clock rates have been advancing at an estimated pace of between 48 and 60 percent per year. As photolithography equipment employs ever shorter wavelengths of light to inscribe ever smaller features on chips, both the speed and density of processors increase. Memory densities also rise at the same rate. But memory access times have been improving at a pace of only seven percent per year, with some signs of a further flattening of the rate of advance. The result is that new microprocessors, such as 300 megahertz Pentiums or 600 megahertz Digital (DEC) Alphas, spend as much as 90 percent of their time in wait states, marking time during memory access.
The new Alpha under development for introduction in 1998 will run at a gigahertz, a billion cycles a second. That means a cycle every nanosecond and each cycle may process as many as three instructions. Yet depending on price, existing memory systems take between six and 60 nanoseconds to deliver data to a register on a processor.
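A back-of-the-envelope sketch makes the divergence concrete. The rates and latencies below are the article's own round figures (roughly 50 percent annual processor gains, 7 percent memory gains, a 1 GHz clock issuing three instructions per cycle, 60 ns DRAM latency), not independent measurements:

```python
# Sketch of the processor/memory divergence using the article's figures.
# The growth rates are the newsletter's estimates, not measured data.

proc_growth = 1.50   # processors ~50% faster per year (article: 48-60%)
mem_growth  = 1.07   # memory access ~7% faster per year

# Relative speedup after 10 years, normalized to year 0.
years = 10
proc_speedup = proc_growth ** years
mem_speedup  = mem_growth ** years
gap = proc_speedup / mem_speedup   # how far memory falls behind

# Cost of one memory access on a 1 GHz, 3-instructions-per-cycle processor
# when DRAM takes 60 ns: the chip could have retired this many instructions.
cycle_ns = 1.0          # 1 GHz -> 1 ns per cycle
instr_per_cycle = 3
dram_latency_ns = 60.0
lost_instructions = (dram_latency_ns / cycle_ns) * instr_per_cycle

print(f"10-year gap factor: {gap:.0f}x")
print(f"Instructions forgone per 60 ns DRAM access: {lost_instructions:.0f}")
```

On these assumptions the processor pulls roughly 30 times further ahead of memory in a decade, and a single uncached DRAM reference forfeits on the order of 180 instruction slots, which is why the 90 percent wait-state figure above is plausible.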
Although the industry is full of announcements of improvements in memory technology, David Clark of MIT sums up the basic predicament: “You can increase memory capacity or memory bandwidth by throwing money at the problem,” building new $2 billion factories or multiplying the number of DRAM chips and thus the number of pins on the motherboard and expanding the number of lines or traces in the bus. “But memory latency (access time) is determined by the speed of light, and you can’t bribe God.” The lightspeed limit essentially rules that electronic signals move nine inches a nanosecond. As chips become denser, with smaller feature sizes and larger silicon areas, interconnect lines both on and off chip become longer and narrower, with hundreds of meters of infinitesimal wire on a single chip. With longer and narrower wires, resistance and capacitance slow the velocity of the signals even further, to speeds of centimeters per nanosecond, less than half of lightspeed in a vacuum. Today, as much as 80 percent of the delay in computer systems comes from interconnections, on and off chip. The gates to system speed are less and less at Intel, and more and more at connector companies such as Molex (MOLX) and AMP (AMP).
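The wire-delay arithmetic can be sketched in a few lines. The 2 cm die and the 2 cm/ns RC-limited wire speed below are illustrative assumptions consistent with the "centimeters per nanosecond" figure quoted above:

```python
# Signal-delay arithmetic from the article's figures. The specific die size
# and slow-wire speed chosen below are assumptions for illustration.

LIGHT_VACUUM_CM_PER_NS = 30.0    # c is roughly 11.8 inches per nanosecond
SIGNAL_CM_PER_NS = 9 * 2.54      # "nine inches a nanosecond" on a good trace
SLOW_WIRE_CM_PER_NS = 2.0        # RC-limited on-chip wire (assumed value)

# A signal crossing a 2 cm die on a resistive, capacitive wire:
die_cross_ns = 2.0 / SLOW_WIRE_CM_PER_NS

# At a 1 GHz clock (1 ns per cycle) that single traversal eats a whole cycle.
cycles_lost = die_cross_ns / 1.0

print(f"board-trace speed: {SIGNAL_CM_PER_NS:.1f} cm/ns "
      f"({SIGNAL_CM_PER_NS / LIGHT_VACUUM_CM_PER_NS:.0%} of c)")
print(f"die crossing: {die_cross_ns:.1f} ns = {cycles_lost:.1f} cycles")
```

Even a single die crossing consumes a full 1 ns cycle on these assumptions, which is why interconnect, not transistors, increasingly gates system speed.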
The usual answer to this problem is the creation of a complex hierarchy of on-chip and off-chip caches, which exploit propinquity, the likelihood that neighboring addresses will be accessed together. Often offering three tiers of caches, these systems couple expensive fast memories, chiefly static RAMs, directly to the processor, and relegate cheaper but slower DRAMs to the back of the bus. Memory access logic transfers the most likely bits into the fastest memories, ultimately the memory registers on the processor itself. But for an increasing number of applications, including chip design tools such as CAE (computer aided engineering), the propinquity law doesn't work. The caches miss the crucial bits, and every cache miss means hundreds of instructions delayed.

Fiber Bandwidth Expands

Lucent has continued to demonstrate the potential of wavelength division multiplexing (WDM) to further expand fiber bandwidth with the July 21, 1997 announcement of a 100-channel optical amplifier. By the end of 1996, the already abundant bandwidth of fiber optics moved significantly closer to the end user as the RBOCs (Regional Bell Operating Companies) moved to meet exploding customer demand for high bandwidth connections. Fiber's advance within the telco network from long distance trunks to inter-office connections and then into the local loop is being continued with massive fiber terminations on customers' premises. RBOC carriers filed with the FCC in July that not only are terminations increasing but the capacity of their fiber connections is dramatically rising (Chart 2). SBC's cancellation of Pacific Telesis's HFC (Hybrid Fiber Coax) network even appears to be a positive sign as SBC bypasses HFC to bring fiber even closer to customers through its FTTC (Fiber to the Curb) buildout. Closer still to the end user, Mitsubishi Rayon Co., Toray Industries Inc., and Asahi Chemical Co. are producing low-cost plastic optical fiber (POF) for short distance, high bandwidth connections. Sony, NEC and Toshiba have announced intentions of using POF in their LAN systems.
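The propinquity principle, and its failure, can be illustrated with a toy direct-mapped cache simulation. The 8 KB geometry below is an arbitrary illustrative choice, not any real part's organization:

```python
# A toy direct-mapped cache illustrating "propinquity": sequential address
# streams hit the cache, scattered streams miss. The cache geometry is an
# assumption for illustration only.
import random

LINE_BYTES = 32
NUM_LINES = 256          # 256 lines x 32 bytes = 8 KB direct-mapped cache

def hit_rate(addresses):
    tags = [None] * NUM_LINES
    hits = 0
    for addr in addresses:
        line = addr // LINE_BYTES
        index = line % NUM_LINES
        tag = line // NUM_LINES
        if tags[index] == tag:
            hits += 1
        else:
            tags[index] = tag   # miss: fill the line
    return hits / len(addresses)

random.seed(0)
sequential = list(range(0, 64 * 1024, 4))                   # word-by-word scan
scattered = [random.randrange(0, 16 * 1024 * 1024) for _ in range(16384)]

seq_rate = hit_rate(sequential)
rand_rate = hit_rate(scattered)
print(f"sequential: {seq_rate:.0%}  scattered: {rand_rate:.0%}")
```

A sequential scan hits seven times out of eight because neighboring words share a cache line; a scattered pattern, like the pointer-chasing of CAE tools, almost never hits, and by the article's estimate each of those misses delays hundreds of instructions.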
Every computer today is 99 percent memory cells. The processor logic itself occupies less than one percent of silicon area. Silicon area is the best index of the cost of the device. Intel today gains some 75 percent of the profits in the semiconductor industry while incurring a tiny proportion of the silicon cost. Earlier this year Intel began dictating to the memory chip makers how they should accelerate their devices, mandating use of the Rambus (RMBS) technology for enhancing memory bandwidth. Most of the industry is grudgingly following the Intel signal. But the lightspeed limit suggests that this relationship will be reversed. In the future the dictation is going to go in the other direction, coming increasingly from the DRAM makers rather than from the microprocessor manufacturers.
The lightspeed limit requires getting the processor closer to the memory cells. That means real propinquity rather than virtual propinquity through off chip caches. That means creation of entire systems on a chip, overcoming the gap between processor speed and memory speed by putting the processor on the memory. Since systems are mostly made of memories and wires, it follows that many of the new hybrids will be built by companies that can make memories. That used to mean Japan and Korea and still does to a great extent. Mitsubishi and Samsung have both been leaders in integrated processors with as much as 16 megabits of on-chip DRAM cells. Toshiba, NEC (Nippon), and Fujitsu are not far behind. Texas Instruments (TXN), one of GTR's original paradigm companies, is well situated to produce a variety of chips containing logic, digital signal processors, and other devices.
In a shocking upset, however, Micron Technology (MU) of Boise, Idaho, just became the world's leading DRAM producer, with twice the market share of second-place Samsung (see Chart 1). Although critics pointed out that Micron's lead comes from 16 megabit devices, rather than the current state of the art 64 meg chips, Dell Computer (DELL) announced that it has tested Micron's 256 meg samples and plans to use them in future products. Because Micron's DRAMs use fewer process steps and layers than the competition's, the company has been able to maintain gross margins as high as 49 percent during a period when Asian companies have been cutting back and Motorola announced that it is leaving the DRAM business entirely. The Micron breakthrough in a market long believed to be hopelessly lost to American companies is one of the great stories in the history of entrepreneurship. You can read their entire history in my book, *Recapturing the Spirit of Enterprise*, ICS Press in San Francisco.
All DRAM companies and some static RAM companies, such as Cypress Semiconductor (CY), with good design and a sound understanding of computer logic, can now begin to escape the thrall of Intel. They can move into the lead in producing processor memory combinations for everything from digital cameras and network computers to digital phone-PCs and set top boxes.
In the central rivalry in current electronics, scores of companies are responding to the lightspeed limit by rushing to put systems on a chip. Brian Halla has sponsored two models of the one chip system, one at LSI Logic (LSI) under Will Corrigan and one at National Semiconductor, where Halla now serves as CEO. Today LSI Logic, with its role in the Sony (SNE) Playstation, DirecTV, and other popular consumer products seems to have the edge. LSI also will benefit from its recent alliance with Micron to embed DRAM cells in one chip systems. Micron and LSI jointly announced an unprecedented target product, combining 128 megabits of memory and 8.1 million logic gates. A key rule of semiconductor investing over the last 20 years is don't bet against Micron.
Showing design prowess and the power of its CoreWare system for integrating diverse functions on a single device, LSI recently introduced a $35 chip, the DCAM-101, with a million transistors and a dozen subsystems that performs all the processing functions for a digital camera. With resolution up to 4 million pixels, the LSI device enables pictures some five times more dense than images from existing digital cameras on the market. Yet the LSI images are producible at a rate of eleven snapshots per second, some 40 times faster than the competition. Suitable not only for Web page display and email dispatch, but also for printouts comparable for many purposes to an analog camera's slides, this device could ignite a huge new semiconductor market.

Java has been fully embraced by the enterprise. This is the finding of the June Zona Research survey of enterprises with 250 or more computer users. Within the next 12 months, 47% of respondents plan to test and deploy Java applications in production. Nearly 75% of these Java users will employ it to extend existing applications, while 84% said their enterprises will develop new applications in Java. Nearly half (47%) are already using Java. And of those currently using Java, 52% have moved beyond testing and are deploying Java within departments (42%) or across the enterprise (10%) (Chart 3). During the next 6 to 24 months, the share of application development budgets allocated to Java-related activities will climb from 12.7% to 21%. Over the same period, based on man-months allocated to Java, the number of full-time Java developers will triple from an average of 2 to more than 6 (Chart 4). The share of enterprises employing more than 10 Java developers will increase from 6% to 22%, and those with more than 50 will jump from just 1% in the next 6 months to 8% within 24 months. The factors motivating Java adoption include web browser linkage (cited by 72%), cross-platform compatibility (69%), and programmer interest and the ability to attract top programmers (66%). The survey also found that 40% of enterprises have plans for Network Computers (NCs), and 30% of those who do not have NCs say they are somewhat or very likely to be deploying NCs (Chart 5). NC plans include not only the replacement of aging terminals and more expensive PCs but also the expansion of computing and network resources to new users and uses (Chart 6). These findings compare with a Yankee Group study of the 100 largest US companies in January, which found that 17% budgeted for NCs in 1997, 54% are piloting or evaluating NCs, and 65% plan purchases within 2 years.

The multimedia chip market is expanding faster than computer sales as an increasing percentage of PCs ship with advanced audio and video capabilities (Chart 7). Intel's purchase of Chips & Technologies highlights Intel's efforts to expand its hardware expertise beyond the CPU (central processing unit) into the essential subsystems at the same time it is attempting to use fast processors and advanced software to eliminate the need for non-CPU components. In June, Intel outlined its efforts to use the computing power of Pentium II processors and advanced video and audio decoding software to allow PCs to play back DVD (digital versatile disk) movies without the need for specialized off-CPU chips. IBM has also announced a software-based DVD playback system. These software-based approaches, however, dominate the CPU's resources and are not yet practical. Currently, the leading solution is provided by Chromatic Research's Mpact media processor, now being used in Gateway 2000's DVD PC line. Another approach integrates DVD decoding into the graphics controllers of high-end PCs from Toshiba, IBM, Micron, Packard Bell-NEC and HP. Gateway announced in July that DVD drives utilizing the Mpact processor will be an option across its multimedia PC line. The highly successful launch of stand-alone DVD players (Chart 8), with continued strong demand despite the restriction of software sales to 7 cities and a limited number of available titles, has encouraged Warner Home Video to increase titles and expand sales nationwide on August 26. Universal Home Video has also joined the list of studios coming off the sidelines to embrace DVD by releasing titles.

Microsoft's Internet Explorer 4.0 is designed to be a Netscape killer. Integrating the web browser with the Windows interface, Microsoft has eliminated the distinction between surfing the web and browsing local files. No longer, Microsoft hopes, is there a need for users to buy and install an independent browser application. Microsoft's success in giving away a browser which closely matches the functionality of Netscape's commercial product has resulted in the erosion since January 1996 of Netscape's share of browsers in use from some 84% to 66%, while Microsoft has climbed from 6.4% to 31% (Chart 9). During the same time Internet users have multiplied about 2.25 times, meaning Microsoft and Netscape have each captured about 50% of the market for new browsers. The impact of Microsoft's browser giveaway is clearly reflected in Netscape's bottom line. Netscape's browser revenues flattened from $20 million in Q4'95 to $17 million in Q1'96, and then dropped to $10 million in Q2'96. Meanwhile, Microsoft's Internet server revenues have quadrupled, and Netscape's server revenues are facing similar pressures, which became obvious in the 2Q'97 figures. Netcraft's survey of web servers on the public Internet shows Netscape in 3rd place behind the public domain software available from the informal and non-profit Apache Group and Microsoft's Internet server, which is offered free to users of Windows NT (Chart 11). Netscape's losing battle against Microsoft's free product is demonstrated in their respective shares of new servers found by Netcraft in each month's survey (Chart 12). On Netscape's behalf, it must be said that Netscape is the overwhelming favorite among Fortune 1000 companies new to the web. And it is interesting to use the Netcraft figures to gauge Netscape's share of the intranet server market, which is hidden by firewalls from the public Internet.

From the perspective of Netscape's financials, however, in which browser sales are a major source of revenue, such a fortified market limits sales and marketing efforts. To offset the impact of free alternative products, Netscape is in need of new products and markets. Netscape's joint ventures NCI/Navio, with Oracle, and Novonyx, with Novell, offer some promise. Navio's lite version of Navigator has been adopted for use in NCs from NCI, HP, IBM, Tektronix, and Newave, producer of the @workstation. Novonyx is raising expectations for Novell's renewed success in corporate networking and the possibility for Netscape to expand its own markets for managing and growing networks. Netscape's web browser is now integrated in Communicator with the addition of advanced multimedia capabilities. Netscape's web browser is also being marketed directly by Lotus. Yet the market remains open. On July 29, Lotus and Microsoft announced the planned close integration of Microsoft's browser into Lotus products. Previously Lotus had shipped Navigator with its products but switched to Microsoft after Netscape began aggressive pursuit of Lotus customers with its Communicator suite. And, within a month, Lotus plans to ship a low-end Web server to compete directly with Netscape. Similarly, there is a risk that Navio's browser sales to NC manufacturers will suffer if they perceive that Navio's merger with Oracle's NC producer NCI represents competition for their efforts. And Novonyx may require the revival of both Netscape and Novell to succeed. Clearly Netscape's technical expertise, and the strikingly clear understanding it has demonstrated of what the Internet could offer and how it could transform both computing and business, the vision which had made Netscape a Telecom Technology company, does not fully compensate for marketing inexperience and brutally aggressive competition. -KE

@Home is the first serious effort to address the speed of light problem in Internet access. @Home has woven together hardware and software in an elaborate system to bring favored Web pages close to the user.
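As an aside, the browser sidebar's claim that Microsoft and Netscape "have each captured about 50% of the market for new browsers" follows directly from its own figures (shares of 84% falling to 66% and 6.4% rising to 31% while users multiplied 2.25 times), as a quick check shows:

```python
# Checking the browser-share arithmetic in the sidebar above.
# All inputs are the figures quoted there.

users_growth = 2.25            # Internet users multiplied ~2.25x
ns_old, ns_new = 0.84, 0.66    # Netscape share of browsers in use, then / now
ms_old, ms_new = 0.064, 0.31   # Microsoft share of browsers in use, then / now

# Normalize the old user base to 1.0.
new_users = users_growth - 1.0                  # 1.25 units of new users
ns_gained = ns_new * users_growth - ns_old      # Netscape's added users
ms_gained = ms_new * users_growth - ms_old      # Microsoft's added users

ns_share_of_new = ns_gained / new_users
ms_share_of_new = ms_gained / new_users
print(f"Netscape won {ns_share_of_new:.0%} of new users, "
      f"Microsoft {ms_share_of_new:.0%}")
```

Both land near 50 percent; they sum to slightly more than 100 percent because the installed base of other browsers shrank over the period.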
Under the terms of its July 30, 1997 license agreement with Qualcomm (QCOM), LSI can develop, manufacture and sell code division multiple access (CDMA) application specific integrated circuits (ASICs) to Qualcomm’s subscriber equipment licensees for digital cellular, personal communications services (PCS), and wireless local loop applications around the world. LSI will profit from combining its CoreWare system on a chip technology with Qualcomm’s leading CDMA wireless communications technology.
The ultimate system on a chip must contain analog devices, transducers that interface with the real world—with network physical layers, voice inputs, light reflections, and wireless waveforms. Historically the world’s analog leader, National Semiconductor has gained new state of the art analog to digital converters from its Comlinear subsidiary, which was acquired in 1995, and is now owner of leading edge microprocessors through its purchase of Cyrix. Halla is reconstructing the entire company around the concept of the system on a chip that also connects off the chip to the real world.
The demand for these devices is obvious. For the next decade, the largest markets in electronics will come from producers of the digital wireless communications devices that are rapidly becoming the most common personal computers. Some 50 countries around the globe are approaching a level of economic development that history shows will excite demand for billions of new telephones. Some 55 percent of the world’s population has never made a phone call. There is no chance that this huge market will be served by the current system of stringing rows of wooden poles across steppes and jungles and running backhoes through cities to entrench copper wires. While fiber optics will provide most of the long distance links and is moving closer to the home in developed countries (see Chart 2), the future of the local loop in global telephony is necessarily wireless.
The dominant wireless handsets will be based on complex systems on a chip comprising such functions as voice compression, modulation and demodulation, error correction, convolutional coders and Viterbi decoders, signal reception and transmission, frequency synthesis, and protocol processing. It will be full of National one chip systems, integrating radio receiver transmitters, phase-locked loops that synchronize frequencies, comparators, operational feedback amplifiers, audio amplifiers, battery management controllers, low dropout voltage regulators for low power, and analog to digital and D-A converters galore. These systems on a chip can only come from companies with full analog competence as well as digital capabilities. Few firms will qualify. High among the leaders, along with Analog Devices (ADI) and Texas Instruments, will be National.
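To make one of these functions concrete, here is a minimal hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 convolutional code (generator polynomials 7 and 5 octal). This is an illustrative sketch only; the article does not specify the codes involved, and real CDMA handsets use longer constraint lengths and soft decisions:

```python
# Minimal hard-decision Viterbi decoder for the classic rate-1/2, K=3
# convolutional code, the kind of function the article says gets baked
# into a wireless system-on-a-chip. Illustrative sketch only.

G = (0b111, 0b101)   # generator polynomials (7 and 5 octal)

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state        # new bit plus two bits of history
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, nbits):
    INF = 10**9
    metrics = [0, INF, INF, INF]      # path metric per state; start in state 0
    paths = [[], [], [], []]          # survivor input sequence per state
    for t in range(nbits):
        r = received[2 * t:2 * t + 2]
        new_metrics = [INF] * 4
        new_paths = [None] * 4
        for state in range(4):
            if metrics[state] >= INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | state
                expected = [bin(reg & g).count("1") % 2 for g in G]
                cost = sum(e != x for e, x in zip(expected, r))
                nxt = reg >> 1
                m = metrics[state] + cost
                if m < new_metrics[nxt]:
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(4), key=lambda s: metrics[s])
    return paths[best]

message = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(message)
coded[3] ^= 1                         # flip one channel bit
decoded = viterbi_decode(coded, len(message))
print("recovered:", decoded == message)
```

Flipping a channel bit still yields the original message: the survivor-path search absorbs the error, which is exactly the robustness that justifies burning silicon on a dedicated Viterbi block.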
Secreted among National’s assets is a proprietary light speed technology that may bring the company into direct competition with LSI Logic in digital cameras. National last year purchased the rights to the CMOS photoreceptor invented by the legendary Carver Mead of Caltech and Synaptics. Under CEO Federico Faggin, the builder of the first microprocessors at Intel, Synaptics has used Mead’s concepts to dominate the market in notebook computer touchpads and achieve a revenue run rate near $100 million.
CMOS means complementary metal oxide semiconductor and it is the basic system employed by all semiconductor companies, from Intel to NEC, for digital logic devices such as microprocessors and memories. The usual problem of CMOS is called the bipolar latchup transistor, a parasitical device that can crop up between the complementary positive and negative transistors spread across a CMOS chip. Mead saw that this unwanted effect could be converted into an on-chip phototransistor that could compete with the now regnant charge coupled devices (CCDs) used in existing digital cameras, scanners, and other light sensing appliances. Invented by Apple (AAPL) president Gil Amelio early in his career, charge coupled devices are inferior to the Mead concept because they require a specialized silicon wafer fab manufacturing process. Built in an ordinary CMOS fab, the Mead devices not only exploit the bipolar parasitic but also use CMOS transistors operating below threshold at power under 0.7 volts for further analog processing of the images. Incorporating digital functions as well, the Mead chip could make possible a true one chip digital camera, transcending the lightspeed limit by putting all the functions of the machine on a single silicon sliver where they never have to slow down for off-chip pins and wires.
A second collision of established technology with the limits of light promises to decorate the skies with thousands of new satellites. Using geosynchronous satellites, 23 thousand miles up, telephone users incur the lightspeed limit as an average half-second delay in international calls. If a half second is offensive for voice communications, which proceed at a leisurely pace of some 64 kilobits per second, it can be an eternity for multigigabit data connections. Such Internet protocols as TCP/IP, which rely on acknowledgments, and data streams that require error correction, can suffer grave deterioration. As a result, Globalstar (GSTRF), Iridium (IRIDE), and Teledesic are launching low earth orbit satellites (LEOs) that are 60 times closer to the earth and perform with no more delay than fiber optics.
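The half-second figure follows directly from the geometry. A minimal back-of-the-envelope sketch, assuming a nominal 35,786 km geosynchronous altitude and a representative 600 km LEO altitude (the exact numbers vary by constellation):

```python
# Speed-of-light delay for satellite links (back-of-the-envelope).
C_KM_S = 299_792  # speed of light, km/s

def one_way_delay(altitude_km: float) -> float:
    """Ground -> satellite -> ground, in seconds (ignores slant range)."""
    return 2 * altitude_km / C_KM_S

geo = one_way_delay(35_786)  # geosynchronous orbit
leo = one_way_delay(600)     # a representative low earth orbit

print(f"GEO one-way:           {geo * 1000:.0f} ms")   # ~239 ms
print(f"GEO question + answer: {2 * geo:.2f} s")       # ~0.48 s, the 'half second'
print(f"LEO one-way:           {leo * 1000:.1f} ms")   # ~4 ms, comparable to fiber
```

The ~60:1 altitude ratio is exactly why the article can claim LEO delay is "no more ... than fiber optics": the propagation path shrinks by the same factor.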
With booming worldwide demand for communications services, focused on the global Internet, LEOs can become worldwide carriers of last resort. Fiber optics and microwave are not competitors to the LEOs; they are complements. As fiber networks span the globe, reaching hundreds of cities with broadband services, and microwave links proliferate, the market for local access facilities will boom. Many areas lack basic infrastructure in the last mile. Others lack Internet connections, even in developed countries. With Internet traffic growing 200 fold over the last two and one half years, stress will mount on all the world's aging telecom infrastructure still optimized for voice. Covering the entire global population at once, low earth orbit devices can command a potential market limited chiefly by the ability of engineers to finance, build, launch, and maintain these complex systems.
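The 200-fold figure implies a startling compounding rate. A quick check, assuming smooth exponential growth (which real traffic only approximates):

```python
import math

# 200-fold growth over 2.5 years, treated as smooth exponential growth.
factor, years = 200, 2.5
annual = factor ** (1 / years)                                  # ~8.3x per year
doubling_months = 12 * years * math.log(2) / math.log(factor)   # ~3.9 months

print(f"Implied annual growth: {annual:.1f}x per year")
print(f"Implied doubling time: {doubling_months:.1f} months")
```

Traffic doubling roughly every four months is the kind of curve no voice-optimized network was engineered for, which is the stress the paragraph describes.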
Most famous is Teledesic, the venture initiated five years ago by Craig McCaw and Bill Gates. It will ultimately field some 288 satellites interconnected in the sky with a space-based packet network working at 60 gigahertz. The satellites will link to the ground at 17 and 30 gigahertz with phased-array digital antennas based on millions of gallium arsenide circuits. At an estimated cost of some $9 billion, Teledesic is an awesome microwave technology based on the Star Wars concepts of the Brilliant Pebbles program.
Although Teledesic has often been derided as the pipe dream of two billionaires, it recently attracted a potential $100 million investment from Boeing (BA) for a 10% share and an imitative project from Motorola (MOT). Perhaps seeking a dazzling way to distract attention from its tormented Iridium venture, Motorola earlier this year requested approval from the FCC for a combined three-way LEO and GEO system that would directly compete with Teledesic. Called Celestri, the $12.9 billion LEO system would complement the previously announced Millennium geosynchronous satellite scheme and a proposed LEO M-Star system. Together the triad could offer symmetric point-to-point connections at up to 155 megabits a second, bursty asymmetric services up to 16 Mbps, broadcast multimedia services, and interactive Internet applications galore. Motorola claims the ability to begin service by 2002 with a capacity of 80 gigabits per second, apparently about double the capacity of Teledesic. But the Motorola advantage comes from the huge downstream bandwidth possible with GEOs.
Combining 63 LEOs and 4 GEOs, the Motorola system will be more complex and hierarchical than the peer network being deployed by Teledesic. Since there is no reason that services offered by Teledesic could not be linked on the ground with GEO satellite broadcasts, the Motorola concept does not seem to offer any decisive advantage. The winners will be the customers for global communications and the company that can finance and launch its system first. Teledesic seems to have the head start.
However, the simpler and cheaper 48-LEO CDMA mobile system from Globalstar, launched by Loral (LOR) and Qualcomm, may well have the most immediate impact. It offers the possibility of global roaming for the increasing millions of CDMA subscribers around the world. But Globalstar lacks the broadband capabilities of Teledesic and the Motorola stunner.
All these LEOs attest to the imperious pressures of the lightspeed limit in an era of gigabit-per-second communications. The rise of the Internet, with its need for realtime feedback, congestion control, and error correction, has made even fiber optic delays troublesome. The speed of light means that it takes between 30 and 100 milliseconds for a message to cross the continent. Acceptable in voice communications, 30 milliseconds means scores of megabits in flight in a one gigabit per second pipeline. The network backbones of Sprint (FON) and MCI (MCIC) now function as fast as 40 gigabits per second. This means more than a gigabit in the pipe.
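The megabits-in-the-pipe arithmetic is simply the bandwidth-delay product. A minimal sketch using the article's own 30 ms coast-to-coast figure:

```python
# Bandwidth-delay product: bits "in the pipe" during one-way transit.
delay_s = 0.030  # ~30 ms coast-to-coast at lightspeed in fiber

for label, rate_bps in [("1 Gbps link", 1e9), ("40 Gbps backbone", 40e9)]:
    in_flight = rate_bps * delay_s  # bits launched but not yet received
    print(f"{label}: {in_flight / 1e6:,.0f} megabits in flight")
# 1 Gbps -> 30 megabits ("scores"); 40 Gbps -> 1,200 megabits (over a gigabit)
```

An acknowledgment-driven protocol that stalls waiting for feedback leaves that entire volume of capacity idle, which is why the delays are "troublesome" even over fiber.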
Funded by Kleiner Perkins, TCI (TCOMA), Cox (COX), Comcast (CMCSA) and other cable companies, and beneficiary of a billion-plus IPO in mid-July, @Home (ATHM) is the first serious effort to address the speed-of-light problem in Internet access. @Home has woven together hardware and software from Sun (SUNW), Silicon Graphics (SGI), Cisco (CSCO), Sprint, Oracle (ORCL), OSI (OSII), Tivoli (TVLI), Netscape (NSCP) and Teleport Communications Group (TCGI) in an elaborate system of servers, caches, mirrors, and replicators to bring favored Web pages close to the user. The direct connection to the home comes over cable in a hybrid fiber coax system that is currently available to only some 2 million homes but that will be extended, according to current plans, to as many as
| ASCENDANT TECHNOLOGY | REPORT(S) Volume: No. | COMPANY (SYMBOL) | Reference Price | Price as of 7/31/97 |
|--------------------------------------------------------------------------------------|-----------------------|-----------------------------------|-----------------|---------------------|
| Cable Modem Service | I: 2, 3 II: 7, 8 | @Home (ATHM) + | 19 1/2 | 19 1/2 |
| Erbium Doped Fiber Amplifiers, Telecommunications Infrastructure, Wave Division Multiplexing (WDM) | II: 2, 3, 4, 7 | Alcatel (ALA) | 16 3/4 | 27 |
| Analog to Digital Converters (ADC), Digital Signal Processors (DSP), Silicon Germanium | II: 3, 7 | Analog Devices (ADI) | 22 3/8 | 31 1/2 |
| Java Thin Client Office Suite, Rapid Application Development (RAD) | II: 6, 7 | Applix (APLX) | 4 1/2 | 6 1/16 |
| Digital Video Codecs | II: 5 | C-Cube (CUBE) | 23 | 24 1/8 |
| Erbium Doped Fiber Amplifiers, Wave Division Multiplexing (WDM) | II: 2, 7 | Ciena (CIEN) | 23 * | 56 1/8 |
| Low Earth Orbit Satellites (LEOS) | I: 2 II: 1, 3, 4 | Globalstar (GSTRF) | 21 3/4 | 32 |
| Single Chip ASIC Systems, CDMA Chip Sets | II: 8 | LSI Logic (LSI) + | 31 1/2 | 31 1/2 |
| Telecommunications Equipment, Wave Division Multiplexing (WDM) | II: 1, 2, 7 | Lucent Technologies (LU) | 47 1/8 | 84 7/8 |
| Single Chip Systems | II: 8 | National Semiconductor (NSM) + | 31 1/2 | 31 1/2 |
| Internet Software | I: 1, 3, 4 II: 1, 4, 6, 7, 8 | Netscape Communications (NSCP) | 53 | 36 11/16 |
| Code Division Multiple Access (CDMA) | I: 1, 2 II: 1, 3, 4, 7, 8 | Qualcomm (QCOM) | 38 3/4 | 46 1/4 |
| Java Programming Language, Internet Servers | I: 1, 2, 3, 4 II: 1, 5, 6, 7, 8 | Sun Microsystems (SUNW) | 27 1/2 | 45 11/16 |
| Servernet System Area Networks (SAN) | I: 1, 7 | Tandem Computers (TDM) ** | 9 1/2 | 29 3/8 |
| Optical Equipment, Smart Radios, Telecommunications Infrastructures | I: 1 II: 1, 2, 3 | Tellabs (TLAB) | 29 1/8 | 59 7/8 |
| Digital Signal Processors (DSP), DRAM | I: 2, 3, 4 II: 5, 8 | Texas Instruments (TXN) | 47 1/2 | 115 |
| Wave Division Multiplexing (WDM) Modulators | II: 7 | Uniphase (UNPH) | 58 3/4 | 68 1/4 |
| Code Division Multiple Access (CDMA) Testing Gear | II: 1, 2, 7 | Wireless Telecom Group (WTT) | 10 3/8 | 11 3/4 |
| Field Programmable Logic Chip | I: 3 | Xilinx (XLNX) | 32 7/8 | 47 3/8 |
Note: This table lists technologies in the Gilder Paradigm, and representative companies that possess the ascendant technologies. But by no means are the technologies exclusive to these companies. In keeping with our objective of providing a technology strategy report, companies appear on this list only for these core competencies, without any judgement of market price or timing.
20 million over the next five years. It is also possible that the Terayon CDMA modems, which operate at speeds up to 60 megabits per second in non-upgraded facilities, will become widely available.
At the end of July, @Home announced an alliance with Progressive Networks to deliver streaming real time audio and video over the Internet to personal computers (see Chart 13). Thus the video gap between the capabilities of PCs and TVs is quickly closing. As PCs are now outselling TVs by some 40 percent in units (see Chart 14), the triumph of the PC and the net approaches as fast as @Home and its rivals can deliver these new digital capabilities to the home.
Finally, the lightspeed limit will determine the future interplay between the laws of the microcosm and the telecosm that have governed the evolution of the technology. The law of the microcosm is a certitude. It tends to drive intelligence to the edges of networks. Based on the power-delay product in semiconductor engineering, which relates the switching speed of transistors to heat dissipation, the law of the microcosm ordains that any increase "n" in the number of transistors on a typical chip will result in "n" squared performance gains. Smaller transistors closer together not only yield more processing logic and memory capacity; they also run faster, cooler, and cheaper.
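The n-squared claim compounds quickly, which is the whole force of the argument. A toy table, treating the rule as exact (it is a stylized scaling argument, not an engineering formula):

```python
# Toy illustration of the "law of the microcosm": performance ~ n^2,
# where n is the relative transistor count (an idealization).
for doublings in range(4):
    n = 2 ** doublings
    print(f"{n:2d}x transistors -> {n * n:3d}x performance")
# each doubling of transistor count quadruples the idealized gain
```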
This law has resulted in an exponential slope of improvement in the cost-performance ratios of single chip systems and microprocessors. Thus it has fueled the ever more widely distributed reach of personal computers. Many analysts now predict a return of more centralized systems and even propose a revival of the mainframe computer.
The speed of light limit, however, converges with the chip density curve to assure that the law of the microcosm will continue to distribute ever more cheap intelligence through the world economy, with no significant trend toward centralization. The speed of light dictates that one-chip systems remain limited in size to reduce the length of interconnections. It resists the centralization of networks. As with @Home, hubs and servers will be dispersed and localized across the Internet mesh.
The law of the telecosm will not rescind the waves of emancipation in the global economy fueled by the microcosmic spread of intelligent systems. Instead, the lightspeed limit will enforce a new era of liberation technology.
George Gilder, August 3, 1997
Gilder Technology Report is published by Gilder Technology Group, Inc. and Forbes Inc.
Editor: George Gilder; Associate Editors: Charles Frank and David Minor;
Director of Research: Kenneth Ehrhart.
Monument Mills P.O. Box 660 Housatonic, MA 01236 USA
Tel: (413) 274-0211 Toll Free: (888) 484-2727 Fax: (413) 274-0213
Email: email@example.com
Agency Information
AGENCY: FBI
RECORD NUMBER: 124-90096-10038
RECORD SERIES: HQ
AGENCY FILE NUMBER: 92-2989-224
Document Information
ORIGINATOR: FBI
FROM: MM
TO: HQ
TITLE:
DATE: 03/26/1965
PAGES: 32
SUBJECTS:
CHARLES TOURINE
DOCUMENT TYPE: PAPER, TEXTUAL DOCUMENT
CLASSIFICATION: Unclassified
RESTRICTIONS: 4
CURRENT STATUS: Redact
DATE OF LAST REVIEW: 07/20/1998
OPENING CRITERIA: INDEFINITE
COMMENTS: RPT
This document is made available through the declassification efforts and research of John Greenewald, Jr., creator of:
The Black Vault
The Black Vault is the largest online Freedom of Information Act (FOIA) document clearinghouse in the world. The research efforts here are responsible for the declassification of hundreds of thousands of pages released by the U.S. Government & Military.
Discover the Truth at: http://www.theblackvault.com
The confidential source abroad mentioned in the details of this report is Legat, London.
Information from the records of the New York Telephone Company was furnished by Mr. EDWARD L. BRAUNE.
In order not to jeopardize the identity of these valuable informants, the following information is set out in the administrative section.
On January 13, 1965 NY 3610-C-TE was interviewed by SA CHARLES G. DONNELLY and advised he has been in the company of CHARLIE WHITE recently and over the holidays WHITE had his seven year old son residing with him. TOMMY D'AURIA picked up the son at Newark Airport and drove him to CHARLIE's apartment. TOMMY D'AURIA recently purchased two suits, which he claimed were swag, and told informant not to tell CHARLIE WHITE about it because CHARLIE would give him hell if he knew it.
CHUCK DELMONICO, CHARLIE's son, also has been in New York recently and informant believed CHUCK's wife is presently living somewhere in New York. CHUCK is still out of jail pending appeal. CHUCK claims that he is completely innocent of the bank robbery and is considering going to Washington to take truth serum and in some way prove his innocence. CHUCK is trying to talk his wife into divorcing him as he expects to go to jail for a long stretch and there is no use having her wait until he comes out.
Informant stated WHITE is considering selling his apartment in Miami. He stated WHITE does not seem to be doing anything now. He is partners, however, with NAT MODELL, who uses the name NAT GREEN, doing a little bookmaking and CHARLIE is partners with him on one client who bets pretty big. CHARLIE does not take the bet, but NAT does. CHARLIE shares equally with NAT on all of the bets made by this one client.
WHITE likes to eat at Patsy's Restaurant on West 56th Street, New York City, although he is eating in his apartment most of the time. He is also dating hookers on occasion.
WHITE claims to be broke and indicated he had to borrow money from a shyster recently. In line with this, on January 12, 1965, WHITE had a date with a hooker and told informant that he may go to the Pompeii Room and see JIGGS FORLANO. It is noted that JIGGS is a big New York shyster.
Informant was going to see JOE ZICARELLI on January 3, 1965, and when CHARLIE heard this he told informant to say hello to ZICARELLI for him. He said to tell ZICARELLI that WHITE is broke and would like to go back to New Jersey, and if the proposition ZICARELLI made him in the past is still open he would take it. WHITE also mentioned that ZICARELLI was with BONANNO.
Informant saw ZICARELLI on Sunday, January 3, 1965, and ZICARELLI was sick with a cold. When informant said hello for CHARLIE and said CHARLIE was broke, ZICARELLI laughed and said "Don't you believe it, CHARLIE has more money than God". Informant wanted to get the okay from ZICARELLI to bet with a certain bookie and when he asked ZICARELLI he was told that he should get the okay from CHARLIE WHITE for this bookie is "handled by the old man and CHARLIE is under the old man". (The old man referred to is probably "RICHIE THE BOOT" BOIARDO).
Informant returned to CHARLIE WHITE and asked for the okay and CHARLIE told him not to get mixed up with this bookie because if he does then "the old man gets into the act and it would be better that the old man don't get involved". WHITE did not say this in anger, but just indicated that he would rather stay away from involvement with "the old man".
On January 13, 1965 NY 3610-C-TE advised SA CHARLES G. DONNELLY that ROCKY (LNU), whom he believes is a brother-in-law or some relation of VITO GENOVESE, used to run several night clubs in New York City, one being the Gay Paree. TONY BENDER used to hang out there occasionally. ROCKY owes informant $15,000.00 and has owed him this amount for several years. CHARLIE WHITE is aware of this and recently he told informant that he thinks he will be able to get this $15,000.00 for him. Informant scoffed at this idea and
CHARLIE stated, "Don't laugh, we will have the 'old man' (RICHIE BOIARDO) go to bat for you". CHARLIE indicated that if RICHIE goes to bat for him it would only be necessary then to have GENOVESE give the okay and informant will get his money.
(ROCKY probably is ROCKY PETILLO, a brother of ANN GENOVESE, ex-wife of VITO GENOVESE).
On January 13, 1965, NY 3610-C-TE advised SA CHARLES G. DONNELLY that he recently asked CHARLIE WHITE what ever happened to KENNY LATER. WHITE stated that "KENNY is now down the river". Informant took this to mean that LATER is dead. CHARLIE stated that KENNY got tied up with South Americans in handling dope and was killed when the deal went sour.
On January 14, 1965, NY 3610-C-TE advised SA CHARLES G. DONNELLY that CHARLIE WHITE told him that he had been to the Pompeii Room the night of January 12, 1965. CHARLIE indicated that many hoodlums were at the Pompeii Room on Tuesday night, among whom CHARLIE mentioned FRANK COSTELLO and JIMMY DOYLE. CHARLIE stated several other names, but the informant could not recall who they were. CHARLIE indicated the hoodlums walk around the Pompeii Room as if they own it. He stated he would not be surprised if JIGGS FORLANO or some other hood had taken it over.
On January 26, 1965, informant advised SAs PAUL G. DURKIN and CHARLES G. DONNELLY that JOE NESLINE is in Spain and will stay there until subject's gambling case is completed, since he does not want to testify or be summoned before a Grand Jury. Periodically NESLINE calls subject at his apartment.
Subject is trying to get control of a casino in a hotel in Puerto Rico. He or his bankers will have to get several million dollars, but they should be able to do it. The legitimate front will be a person from Texas. Soon informant said he will get more details on this operation.
On January 11, 1965 NY 4008-C-TE was interviewed by SA DONALD A. RIVERS and he advised that recent contact had been made with CHARLIE WHITE who resides at 40 Central Park South, Apartment 10F and arrangements had been made to meet with WHITE on January 12, 1965 at Patsy's Restaurant, 56th Street in New York. Informant further stated that WHITE maintains a penthouse apartment in the Hampshire Towers in Miami Beach and has or will open a new restaurant in Florida in partnership with ASH RESNICK. In conversation with WHITE, informant learned that SHERMAN SHARWELL is indebted to WHITE in the amount of $25,000.00. This indebtedness is wholly from gambling.
JUNE WEINSTEIN recently arrived from Las Vegas and carried with her $1,200.00 which she gave to WHITE. This money was from RESNICK who is the owner of the Monaco Motel in Las Vegas. This establishment does not maintain a casino and is strictly for lodging and dining.
Informant stated that WHITE, upon receiving the money, expressed surprise that more was not brought to him. WEINSTEIN indicated that the rest will be given him by RESNICK sometime later this month when RESNICK comes to New York City. Informant stated that WHITE had expected the amount to be somewhere in the neighborhood of $25,000.00.
NY 4008-C-TE advised on January 14, 1965 that ASH RESNICK, BING WEINSTEIN and a third individual known to the informant as "TIPPY" would be arriving on Monday, January 18, 1965 at Kennedy Airport from Las Vegas and that RESNICK would be bringing money for CHARLIE WHITE. Informant learned this from BING WEINSTEIN's wife, JUNE, who was presently in New York City. According to informant this represents the balance of the amount due WEINSTEIN.
It should be noted that the informant advised the above three individuals had postponed their proposed trip to New York and had rescheduled it for January 20, 1965.
According to the informant WHITE seems to meet a lot of people who come from New Jersey. Informant has been with WHITE during meetings with some of these people, and the majority of them indicated they were living in New Jersey.
NY 4008-C-TE furnished the following information to SAs PATRICK J. MOYNIHAN and DONALD A. RIVERS on January 27, 1965.
A WAYNE HOUSER has had a falling out with WHITE. HOUSER apparently owes WHITE money, and it has not been paid. WHITE also has threatened SHERMAN SHARWELL, a big gambler, who is presently residing at the Beekman, in New York City, because SHARWELL has not made good on the money owed to WHITE. BING WEINSTEIN, JUNE WEINSTEIN's husband, is also in debt to WHITE.
WHITE has indicated that he is furious over the fact that he and JOE NESLINE have not been able to bust into any of the action at the casino at the Lucayan Beach Hotel, on the Grand Bahama Island. WHITE has stated he intends to take a trip there and "start knocking heads". According to the informant, WHITE may start a "small war" down there with MAX COURTNEY and FRANK REED. WHITE has no use for COURTNEY, who is presently a very sick man.
NESLINE has just returned to the United States from Spain, and he and WHITE are to appear within the next two weeks before a Federal Grand Jury in Washington, D.C., investigating interstate gambling violations.
Informant stated that WHITE, while in New York City, conducts the majority of his business through contacts and associates living in New Jersey.
ASH RESNICK is presently staying at the City Squire Hotel, New York City, with another individual known to the informant only as "TIPPY". RESNICK had arrived in New York City on January 20, 1965, with skim money for WHITE.
On January 29, 1965, NY 4008-C-TE advised that WHITE had indicated in the presence of the informant that he would hire an unknown subject, for the purpose of tapping the telephone of the U.S. Attorney who is prosecuting the 91 case on his son DELMONICO in Indiana. WHITE will pay $20,000.00 for this service, and his purpose is to try to obtain information that would put the U.S. Attorney in a compromising position, with the view towards forcing the prosecution into dropping the case.
Informant added that in the past weeks WHITE has on many occasions referred to the fact that he will go to any expense to get his son free.
(Bureau was advised of the above)
On February 12, 1965 NY 4008-C-TE furnished the following information to SAs PATRICK J. MOYNIHAN and DONALD A. RIVERS:
WHITE intends to leave New York City early in the coming week to go to Miami, where he will stay in his residence there. WHITE's length of stay in Miami is unknown to the informant. During the past few weeks, ASH RESNICK from Las Vegas has been staying in WHITE's residence in Florida. RESNICK is returning to New York City, and after a few days will return to Las Vegas. He will be accompanied by BING WEINSTEIN.
INFORMANTS
MM T-1 is JOSEPH CARUSO, PCI.
MM T-2 is ANN MC MANUS, switchboard operator, 40 Central Park South, New York City (by request)
MM T-3 is MAX GOLDPIN (by request)
MM T-4 is NY 3610-C-TE.
MM T-5 is MM 986-PC.
MM T-6 is WF 1308-C-TE.
MM T-7 is former WF 1633-C-TE.
MM T-8 is MM 934-C.
MM T-9 is JAMES POULOS, Detective, New York City Police Department (by request)
MM T-10 is NY 4008-C-TE.
A confidential source abroad advised on January 18, 1965 that telephone number WH 7914, London, England, is listed to ROBERT GOLDSTEIN, 10 Bury Street, London, S. W. 1. Source advised that GOLDSTEIN is an executive of 20th Century Fox and his name was mentioned in the CHRISTINE KEELER - PROFUMO affair as an individual who was at least known to KEELER and MANDY RICE DAVIES, the two prominent call girls.
The following Miami telephone numbers were among numbers called from the subject's New York City residence:
WI 5-7525
UN 5-2404
PL 9-3553
Bresser's Cross Index Directory for Miami and Dade County indicates WI 5-7525 is the telephone for Wolfie's Sandwich Shop, 1390 N. E. 163rd Street, N. Miami Beach, Florida; UN 5-2404 is subscribed to by BARBARA JEAN GALL, 725 78th Street, Miami Beach; and PL 9-3553 is subscribed to by SHIRLEY CAMPO, 1090 N. E. 110th Street, Miami.
It is noted SHIRLEY CAMPO is the wife of EUGENE CAMPO a known associate of the subject's.
Modification of Carbon Nanofibres for the Immobilization of Metal Complexes: A Case Study with Rhodium and Anthranilic Acid
T. G. Ros, A. J. van Dillen, J. W. Geus, and D. C. Koningsberger*†[a]
Abstract: The immobilisation of the rhodium/anthranilic acid complex onto fishbone carbon nanofibres (CNFs) was executed by means of the following steps: 1) surface oxidation of the fibres, 2) conversion of the oxygen-containing surface groups into acid chloride groups, 3) attachment of anthranilic acid and 4) complexation of rhodium by the attached anthranilic acid. The immobilisation process was followed and the resulting surface species were characterised by IR, X-ray absorption fine structure (XAFS) and X-ray photoelectron spectroscopy (XPS), and by molecular modelling. Anthranilic acid bonds to the CNFs by an amide linkage to the carboxyl groups that are present after surface oxidation of the fibres. The immobilised anthranilic acid coordinates to rhodium through the nitrogen atom and the carboxyl group. The as-synthesised Rh$^{II}$ complex itself is not active in the liquid-phase hydrogenation of cyclohexene. Reduction with sodium borohydride yields small particles ($d = 1.5–2$ nm) of rhodium metal that are highly active. The results indicate that different activation procedures for the immobilised Rh/anthranilic acid system should be applied, such as reduction with a milder reducing agent or direct complexation of the rhodium in the Rh$^I$ state.
Keywords: amino acids · carbon · EXAFS spectroscopy · immobilization · nanostructures
Introduction
The immobilisation of homogeneous catalysts onto solid supports has been studied extensively since the 1970s.[1–4] The objective is to combine the advantages of homogeneous catalysts, such as a high selectivity and metal efficiency, with the simple recovery of the catalyst. A number of immobilisation methods can be distinguished, such as encapsulation or entrapment, supported liquid phase, and immobilisation by covalent bonding.
In general, immobilisation by covalent bonding (“immobilisation”) of homogeneous hydrogenation catalysts is executed by first attaching one ligand to the support. This ligand is usually the most important ligand, for example, a bulky bidentate ligand that causes the immobilised catalyst to be enantioselective. Next, the metal complex is coordinated to the immobilised ligand by a ligand-exchange reaction. Other immobilisation procedures are used as well, such as reaction of a metal salt with the immobilised ligand, which may be followed by a further coordination of other ligands to the metal atom, and direct reaction of the metal centre with a surface atom of the support. Most immobilisation techniques try to maintain the structure of the immobilised catalyst as close as possible to that of its homogeneous precursor. The first 15 years of research into immobilisation have demonstrated that simple immobilisation of a successful homogeneously active, metal-complex catalyst to a functionalised support generally yields an inferior version of the homogeneous catalyst.[2] However, it has also been shown that by careful design, supported metal-complex catalysts can be prepared that have advantages not exhibited in homogeneous catalysis. Although some of the immobilised metal-complex catalysts have properties that are as good or even better than their homogeneous counterparts, in 1998 none had an industrial application.[4] This is correlated to the frequent observation of significant metal leaching, although other important factors, such as the mechanical and macroscopic properties of the support, are also decisive. The problem of leaching is often directly related to the mechanism of the catalytic reaction. For instance, immobilised phosphine-containing metal complexes will leach under hydrogenation conditions, because during catalysis one phosphine ligand must desorb to allow the required coordination of the olefin. 
This problem can sometimes be circumvented by the use of multi-dentate ligands.
An alternative approach could be to change the electronic and steric features of the complex by immobilisation. This approach has been exemplified by Holy, who studied the immobilisation of rhodium $N$-phenyl anthranilic acid...
(RhPAA) and rhodium anthranilic acid (RhAA) onto polystyrene.[5–7] The DMF-soluble (DMF = $N,N$-dimethylformamide) Rh$^{I}$ complex of $N$-phenyl anthranilic acid (PAA) was known as a remarkably active hydrogenation catalyst.[8–11] However, upon immobilisation onto polystyrene, the catalyst no longer displayed any activity. In contrast, when anthranilic acid was immobilised, a highly active Rh$^{I}$ catalyst was obtained after reduction of the metal complex with NaBH$_4$. Monovalent rhodium is able to participate in the oxidative addition/reductive elimination cycle that is required to catalyse hydrogenation reactions. The rhodium/anthranilic acid complex was not known as a homogeneous catalyst. In addition, the immobilised complex of anthranilic acid with nickel showed hydrogenation activity.[12] This example demonstrates that the catalytic properties of immobilised complexes often differ from those of their non-supported counterparts. Therefore, the best approach could be to design immobilised catalysts that do not have homogeneous analogues. A weak ligand in solution, for example, an amine ligand, which may easily dissociate from the metal to result in the reduction of the metal under hydrogenation conditions, could be a suitable ligand for an immobilised system.
Although the Rh/anthranilic acid complex has been successfully immobilised on polystyrene, the relatively low thermal and mechanical stability, the swelling properties and the microporous structure of the polystyrene support complicate the use of the catalyst in an industrial environment. Carbon nanofibres (CNFs), on the other hand, are mechanically very strong and have a macroporous structure. The hydrophobicity of the fibres can be modified by surface oxidation.[13] Furthermore, the CNFs obtained after synthesis are very pure and can be grown at low cost.[14] These advantages make the CNFs a suitable support for catalytic reactions in the liquid phase. Consequently, we decided to immobilise RhAA onto fishbone-type carbon nanofibres.
To the best of our knowledge, the immobilisation (i.e., by covalent bonding) of metal complexes onto carbon nanofibres or carbon nanotubes has not yet been explored. Some reports describe the impregnation or adsorption of homogeneous catalysts on these new carbon materials.[15–17] Moreover, only few articles have been published regarding the immobilisation of metal complexes on carbon in general. Kagan et al.[18] synthesised a chiral rhodium catalyst on partially graphitised carbon, while McCabe and Orchin[19] attached cobalt and tin complexes to bituminous coal. The inertness of CNFs and, to a lesser extent, of carbon in general makes immobilisation with these materials much more difficult than with polymers or oxidic supports, such as silica or alumina. However, already in the 1970s, Miller and co-workers[20] modified graphite electrodes for the attachment of amines onto their surface. The electrodes were kept at a high temperature in air, the resulting carboxyl groups were converted into acid chloride groups by the action of thionyl chloride, and these groups were subsequently treated with an amine. The approach of Miller has been used by several other researchers for the attachment of molecules or even metal–ligand systems to carbon electrodes, although the experimental conditions for oxidation and attachment of amine varied considerably.[21–24] Smalley and co-workers[25] and, more importantly, Haddon and co-workers[26, 27] employed the method of converting carboxyl groups on carbons to acid chloride groups and subsequent attachment of amines to shortened single-walled carbon nanotubes (SWNTs). Haddon et al. used this procedure to solubilise the SWNTs by bonding long-chain molecules to their surface. Raw SWNTs were oxidised and shortened by prolonged sonication in a mixture of nitric and sulfuric acids.
The carboxyl groups, presumably positioned at the edges of the shortened tubes (length $\approx$ 300 nm), were converted to acid chloride groups by refluxing in thionyl chloride (containing some DMF) for 24 h. The DMF acts as a catalyst in the formation of acid chloride groups by SOCl\(_2\).[28] Haddon et al. emphasised that the reaction of the SOCl\(_2\)-treated SWNTs with octadecylamine (ODA) in toluene was unsuccessful. The key to the preparation was the reaction of the tubes with pure molten ODA for an extended period of time. It is evident that severe reaction conditions are required for the derivatisation of carbon nanomaterials.
In this investigation, we have studied the immobilisation of the rhodium/anthranilic acid complex onto fishbone carbon nanofibres. The reports by Haddon and co-workers and by Holy served as a basis to develop an immobilisation scheme. It will be shown that the Rh\(^{III}\)/anthranilic acid system immobilised on carbon nanofibres can indeed be synthesised. The complex is bonded to the fibres by an amide linkage of the CNF carboxyl groups and the amine functionality of anthranilic acid. The immobilised Rh\(^{III}\) complex is not active in the hydrogenation of cyclohexene. Reduction with NaBH\(_4\) results in a very active catalyst that consists of extremely small rhodium particles. Milder activation procedures may lead to an active immobilised Rh\(^{I}\) complex.
**Results**
**Infrared spectroscopy**: Figure 1 shows the infrared spectra of CNF-U, CNF-OX, CNF-AA and of a physical mixture of

carbon nanofibres and anthranilic acid (for explanations of the codes for the intermediate products and final catalysts see Table 1). To allow comparison, transmission levels of all spectra were kept approximately the same. It was established that, within the transmission window used, the intensity of the bands did not depend upon the transmission level of the spectra. Table 2 summarises the band assignments. The assignment of the bands of untreated fibres\textsuperscript{[29]} and of oxidised
Table 1. Abbreviations for the intermediate products and the final immobilised catalysts.
| Abbreviation | Description |
|--------------|-------------|
| **intermediate products** | |
| CNF-U | untreated fibres |
| CNF-OX | oxidised fibres |
| CNF-SOCl\textsubscript{2} | SOCl\textsubscript{2}-treated fibres |
| CNF-AA | anthranilic-acid-treated fibres |
| **final catalysts** | |
| RhAA/CNF(17) | dried for 17 h after SOCl\textsubscript{2} treatment |
| RhAA/CNF(17,BH) | dried for 17 h, reduction with NaBH\textsubscript{4} |
| RhAA/CNF(5) | dried for 5 h after SOCl\textsubscript{2} treatment |
| RhAA/CNF(17,DMF) | dried for 17 h, reaction AA in DMF |
Table 2. Assignments of IR absorptions found for CNF-U, CNF-OX, CNF-AA and a physical mixture of CNFs and anthranilic acid.
| Wavenumber [cm\textsuperscript{-1}] | Assignment | References |
|-----------------------------------|------------|------------|
| 1724 | C=O stretching carbonyl + carboxyl | \textsuperscript{[30, 31, 34, 36]} |
| 1633 | adsorbed water | \textsuperscript{[30, 31]} |
| 1582 | aromatic ring stretching | \textsuperscript{[31, 34, 35]} |
| 1384 | nitrate | \textsuperscript{[32, 33]} |
| 1200 | C–C stretching | \textsuperscript{[31, 34]} |
| 746 | aromatic C–H bending anthranilic acid | \textsuperscript{[37]} |
fibres\textsuperscript{[13]} have already been reported and discussed in separate papers. The oxidation of carbon nanofibres has been extensively discussed in reference [13]. With regard to the untreated fibres, the absorptions at approximately 1630 and at 1384 cm\textsuperscript{-1} are attributed to adsorbed water and nitrate, respectively, on the KBr and can be discounted.\textsuperscript{[30–33]} The band at 1582 cm\textsuperscript{-1} originates from an aromatic ring vibration, while the broad 1200 cm\textsuperscript{-1} absorption is attributed to the C–C stretching vibration.\textsuperscript{[31, 34, 35]} Upon oxidation, an additional band appears at 1724 cm\textsuperscript{-1}. This peak originates from the C=O stretching vibration of carboxyl and/or carbonyl groups.\textsuperscript{[30, 31, 34, 36]} Furthermore, the 1200 cm\textsuperscript{-1} absorption intensifies as a result of C–O stretching and O–H bending vibrations that occur in this region.\textsuperscript{[34]} The intensity of the 1582 cm\textsuperscript{-1} band also increases because, upon oxidation, symmetry restrictions are relieved for more aromatic rings.\textsuperscript{[29]}
A comparison of the IR spectrum of a physical mixture of anthranilic acid and CNFs (AA-phys) with that of CNF-AA indicates that, as a result of the strong IR absorbance of CNFs, many bands visible in the spectrum of pure AA diminish, shift or even disappear upon mixing with carbon nanofibres. Therefore, the aromatic C–H bending mode at 746 cm\textsuperscript{-1} (indicative of four adjacent H atoms\textsuperscript{[37]}) occurring in the spectrum of CNF-AA appears to be the best indication for the presence of anthranilic acid on the surface of the CNFs. In addition, other absorptions are visible in the spectrum that can be attributed to anthranilic acid. Furthermore, in the spectrum of CNF-AA, the carboxyl C=O stretching vibration at 1724 cm\textsuperscript{-1} of CNF-OX has almost disappeared, indicating that an amide bond has been formed between the carboxyl groups on the CNFs and the amine groups of anthranilic acid.
The spectrum of CNF-SOCl\textsubscript{2} (not shown) was identical to that of CNF-OX. The spectrum of the fibres that were treated with anthranilic acid in DMF showed only a weak band at 753 cm\textsuperscript{-1}. Furthermore, the C=O stretching vibration at 1724 cm\textsuperscript{-1} was only slightly diminished.
**X-ray photoelectron spectroscopy (XPS):** The process of immobilisation was also followed with XPS. Figure 2A and B show the Cl\textsubscript{2p} and the N\textsubscript{1s} regions, respectively, of CNF-OX, CNF-SOCl\textsubscript{2}, CNF-AA, and RhAA/CNF(5). Oxidised fibres do not contain chlorine; however, this element is clearly present after treatment with SOCl\textsubscript{2}. Upon reaction with anthranilic acid, the chlorine is removed, whereas after complexation of Rh (with RhCl\textsubscript{3}·2H\textsubscript{2}O) it reappears on the surface of the CNFs. No nitrogen is detected on the oxidised fibres with XPS. The small peak appearing in the spectrum of CNF-SOCl\textsubscript{2} can be attributed to traces of DMF that was used during this synthesis step. The spectrum of CNF-SOCl\textsubscript{2} in Figure 2B has been obtained from CNF-SOCl\textsubscript{2} fibres that were dried for only five hours in a vacuum. The CNF-AA fibres clearly contain nitrogen, which remains present after complexation of Rh. Reduction with NaBH\textsubscript{4} or measurement of the catalytic activity does not affect the intensity of the N\textsubscript{1s} peak (spectra not shown).
The presence and the oxidation state of Rh were studied by monitoring the Rh\textsubscript{3d} peak. Figure 3 displays the results. The Rh\textsubscript{3d5/2} and Rh\textsubscript{3d3/2} transitions are both clearly visible. Only the Rh\textsubscript{3d5/2} peaks were used for further analysis. The spectrum of RhAA/CNF(17) shows one broad Rh\textsubscript{3d5/2} peak centred at 308.9 eV. Literature values for the Rh\textsubscript{3d5/2} peak in Rh\textsubscript{2}O\textsubscript{3} and RhCl\textsubscript{3} are 308.2–308.8 eV and 310.1 eV, respectively.\textsuperscript{[38]} Therefore, we can conclude that the Rh is in the trivalent oxidation state in this complex. Upon reduction of the RhAA/CNF(17) complex with NaBH\textsubscript{4} (denoted as RhAA/CNF(17,BH)), a second peak at $\approx$307.4 eV becomes the most
intense one and the peak at 308.9 eV is visible as a shoulder. As the Rh$_{3d5/2}$ peak for rhodium metal lies at 307.2 eV,\textsuperscript{[38]} this new maximum indicates the presence of Rh$^0$. Clearly, a large portion of the Rh is in the zerovalent state after reduction with NaBH$_4$. A Rh$^{I}$ complex would have given a peak at $\approx 308$ eV.\textsuperscript{[38]}
**X-ray absorption fine structure (XAFS) spectroscopy**
*X-ray absorption near-edge structure (XANES):* The X-ray absorption near-edge spectra of RhAA/CNF(17) and RhAA/CNF(17,BH) are shown in Figure 4 together with the data for Rh foil and Rh$_2$O$_3$. By comparison of the spectra of the RhAA/CNF(17) and RhAA/CNF(17,BH) samples with the reference compounds, information can be obtained about the oxidation state of the rhodium. The absorption edge of RhAA/CNF(17) is almost identical to that of Rh$_2$O$_3$. Thus, we can conclude that the Rh in RhAA/CNF(17) is in the trivalent oxidation state. No Rh–Rh interactions are present.
The position of the lower half of the absorption edge of RhAA/CNF(17,BH) is at a lower energy than that for RhAA/CNF(17) and closer to that of Rh foil. The increase in absorption intensity in this region of the absorption edge is typical of zerovalent rhodium. This suggests that part of the rhodium is in the zerovalent oxidation state.
*Extended X-ray absorption fine structure (EXAFS) analysis:* Figure 5 shows the experimental EXAFS data of RhAA/CNF(17) (Figure 5A) and RhAA/CNF(17,BH) (Figure 5C). The data quality is excellent. The signal-to-noise ratio at $k = 4.4 \text{ Å}^{-1}$ amounts to approximately 150 for RhAA/CNF(17) and 75 for RhAA/CNF(17,BH). It can clearly be seen that the EXAFS intensity of sample RhAA/CNF(17,BH) at larger values of $k$ ($k > 10 \text{ Å}^{-1}$) is higher than for RhAA/CNF(17), indicating the presence of a high $Z$ scatterer.\textsuperscript{[39]} The $k^2$ Fourier transforms of the EXAFS data of RhAA/CNF(17) and RhAA/CNF(17,BH) are displayed in Figure 5B and D, respectively. Whereas the peak at 1.6 Å in both samples originates from low $Z$ scatterers, such as oxygen, the peak at 2.4 Å in the uncorrected Fourier transform of the EXAFS data of RhAA/CNF(17,BH) is attributed to an Rh–Rh contribution. Also, the intensity of the first shell region at 1.6 Å is lower, indicating a change in coordination after reduction.
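The transform underlying Figure 5B and D can be illustrated with a short numerical sketch. This is our own illustration with a single hypothetical Rh–O shell, not the analysis code used in this work; the model χ(k) carries no scattering phase shift, so the peak of the magnitude falls at the shell distance itself (in real, uncorrected transforms the phase shift displaces the peak to lower apparent distance, as for the 1.6 Å feature discussed above).

```python
import numpy as np

# Illustrative sketch only: k^2-weighted Fourier transform of a synthetic
# EXAFS chi(k) with one assumed Rh-O shell (R0 = 2.05 A, N = 5, small
# Debye-Waller-type damping). All shell parameters are hypothetical.
k = np.linspace(3.9, 14.0, 1024)           # fit window quoted in the paper (A^-1)
R0, N, sigma2 = 2.05, 5.0, 0.003           # assumed distance, coordination, disorder
chi = (N / (k * R0**2)) * np.exp(-2 * sigma2 * k**2) * np.sin(2 * k * R0)

dk = k[1] - k[0]
R = np.linspace(0.5, 4.0, 701)             # distance axis (A)
# FT(R) = sum over the window of k^2 chi(k) exp(2ikR) dk
ft = np.array([np.sum(k**2 * chi * np.exp(2j * k * r)) * dk for r in R])
peak_R = R[np.argmax(np.abs(ft))]          # peaks near 2.05 A for this phase-free model
```
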
Figure 6A shows the Fourier transform (FT) of the experimental EXAFS data of RhAA/CNF(17) together with the FT of the total fit. The fit quality is excellent over the fit range used ($\Delta R = 1–2.8$ Å).
In RhAA/CNF(17), Rh–O/N, Rh–Cl and Rh–C contributions could be determined. The phase-corrected Fourier transforms of the fits of each individual contribution together with their corresponding difference files are shown in Figure 6B, C and D for the Rh–O/N, Rh–Cl and Rh–C contributions, respectively. The fit quality of each individual contribution is very good, indicating that reliable fit results for the weaker EXAFS contributions have been obtained as well. Table 3 presents the fit parameters of the EXAFS spectra resulting from the multiple-shell fits in $R$ space and the variances of the imaginary and absolute parts of the fits for all catalysts. All variances obtained are well below 1%, indicating that excellent fit results for all catalysts have been acquired.\[39\] Furthermore, the calculated number of independent data points [see Eq. (3) later] is 14, showing that a fit with three shells is possible. Because the contributions of carbon, oxygen or nitrogen in RhAA/CNF(17) cannot be easily distinguished, all these interactions were fitted with the Rh$_2$O$_3$ reference. The rhodium atom in the immobilised complex has five oxygen or nitrogen neighbours at a distance of 2.05 Å. The presence of carbon at such a distance is unlikely. One chlorine atom is also present in the coordination sphere of Rh at a distance of 2.28 Å. At a larger distance of 2.94 Å, three neighbours are fitted, which are most likely carbon atoms. No Rh–Rh interactions could be fitted.
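The quoted value of 14 independent data points is consistent with the commonly used Nyquist-type estimate $n_{\text{idp}} = 2\Delta k \Delta R/\pi + 2$ for the fit window given in footnote [a] of Table 3. Whether Eq. (3) takes exactly this form is an assumption on our part, since the equation itself appears later in the paper:

```python
import math

# Nyquist-type estimate of the number of independent data points in an
# EXAFS fit window. The "+ 2" convention is assumed here.
delta_k = 14.0 - 3.9     # A^-1, fit window from footnote [a] of Table 3
delta_R = 2.8 - 1.0      # A
n_idp = 2 * delta_k * delta_R / math.pi + 2    # ~13.6, quoted as 14

# A three-shell fit uses four free parameters per shell (N, R, sigma^2, E0),
# so it stays within the information content of the data.
n_params = 3 * 4
feasible = n_params <= n_idp
```
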
The fit results of RhAA/CNF(17,BH) are also shown in Table 3. Again, very good results are obtained for the total fit and the three separate Rh–Rh, Rh–O and Rh–C contributions. There are 4.5 Rh neighbours situated at a distance of 2.68 Å. The coordination number of oxygen neighbours amounts to 3.4 at 2.05 Å. Finally, 1.6 C atoms are visible at a distance of 2.25 Å. The fit parameters of RhAA/CNF(5) displayed in Table 3 show that, besides oxygen and chlorine contributions ($N_O = 4.1$, $R_O = 2.07$, $N_{Cl} = 2.6$, $R_{Cl} = 2.28$), RhAA/CNF(5) contains some Rh–Rh interactions ($N = 2.0$, $R = 2.69$). A reasonable fit can also be obtained with less chlorine. RhAA/CNF(17,DMF) contains almost exclusively Rh–Rh interactions ($N = 8.2$, $R = 2.69$), but some Rh–O interactions are present as well ($N = 1.3$, $R = 2.05$).
It should be noted that with all catalysts, edge steps were obtained that were equal to the theoretical edge step for 1 wt% Rh on carbon. In other words, all catalysts indeed contain 1 wt% of Rh. The complete decolouration of the aqueous phase (which contained the equivalent of 1 wt% Rh) after complexation of rhodium supports these observations.
**Molecular modelling:** The outcome of the EXAFS fit results of RhAA/CNF(17), namely the number and nature of
Table 3. Fit parameters of the EXAFS spectra resulting from the multiple-shell fits in $R$ space, and the variances of the imaginary and absolute parts of the fits.[a]
| Catalyst | Atom | $N \pm 5\%$ | $R$ [Å] $\pm 1\%$ | $\Delta \sigma^2$ [Å$^2$] $\pm 5\%$ | $\Delta E_0$ [eV] $\pm 10\%$ | Variance Im [%] | Variance Abs [%] |
|----------|------|-------------|-------------------|-------------------------------------|------------------------------|-----------------|------------------|
| RhAA/CNF(17) | O/N | 5.0 | 2.05 | 0.001 | – 3.1 | 0.12 | 0.05 |
| | Cl | 1.0 | 2.28 | 0.004 | 4.4 | | |
| | C | 2.9 | 2.94 | 0.003 | – 10.6 | | |
| RhAA/CNF(17,BH) | Rh | 4.5 | 2.68 | 0.006 | 9.9 | 0.25 | 0.12 |
| | O | 3.4 | 2.05 | 0.001 | – 4.4 | | |
| | C | 1.6 | 2.25 | 0.008 | – 8.6 | | |
| RhAA/CNF(5) | Rh | 2.0 | 2.69 | 0.006 | 7.0 | 0.06 | 0.03 |
| | O | 4.1 | 2.07 | 0.006 | – 9.2 | | |
| | Cl | 2.6 | 2.28 | 0.004 | 10.7 | | |
| RhAA/CNF(17,DMF) | Rh | 8.2 | 2.69 | 0.000 | 5.6 | 0.10 | 0.04 |
| | O | 1.3 | 2.05 | 0.004 | – 8.5 | | |
[a] $\Delta k = 3.9–14$ Å$^{-1}$, $\Delta R = 1–2.8$ Å, $k^2$ weighted.
neighbours of Rh, was used as the input for the molecular modelling study. It was assumed that the rhodium is in the trivalent oxidation state and that, for reasons of charge compensation, both the carboxyl and the amide group of anthranilic acid are ionised. The result is shown in Figure 7; the carbon nanofibre structure is depicted on the left-hand side of the figure. Anthranilic acid is bonded to the fibre through an amide bond with a carboxyl group on the surface of the CNF. Rh is coordinated to anthranilic acid through one oxygen of the carboxyl group (of AA) and the nitrogen atom. In addition to one chloride anion, the coordination sphere of the rhodium atom is completed by three water molecules (the last synthesis step was performed in water). Thus, the Rh atom is in an octahedral environment. Table 4 gives the distances of several atoms to the central Rh atom. For comparison, the EXAFS results have been included as well. The oxygen atoms of the three water molecules are situated at a distance of 1.9 Å. The oxygen atom of the carboxyl group of anthranilic acid is found at 2.0 Å, as is the AA nitrogen atom. Chloride resides at a distance of 2.3 Å. Three carbon atoms are found at distances of 2.8 Å (C of carboxyl group AA), 3.0 Å (carbon of the benzene ring) and 3.0 Å (C of carboxyl group CNF). These distances agree very well with the EXAFS results.
**Table 4. Distances between Rh and several atoms in the molecular model of RhAA/CNF(17). The EXAFS results have been included for comparison.**
| Atom | No. of atoms | Modelling distance [Å] | EXAFS $N$ | EXAFS $R$ [Å] |
|------|--------------|------------------------|-----------|----------------|
| O\textsubscript{H2O} | 3 | 1.9 (3 ×) | 5.0 (O/N) | 2.05 |
| O\textsubscript{carb} | 1 | 2.0 | | |
| N | 1 | 2.0 | | |
| Cl | 1 | 2.3 | 1.0 | 2.28 |
| C | 3 | 2.8, 3.0, 3.0 | 2.9 | 2.94 |
**Figure 7. Outcome of the molecular modelling study of RhAA/CNF(17).**
**Figure 8. H\textsubscript{2} flow and H\textsubscript{2} uptake curves of RhAA/CNF(17,BH).**
**Catalytic measurements:** The catalytic measurements resulted in a plot of the H\textsubscript{2} flow against time (see Figure 8 for RhAA/CNF(17,BH)). This curve was integrated to obtain an H\textsubscript{2} uptake curve, which, in turn, was used to calculate the normalised initial activity (i.e., the activity at \( t = 0 \)). The results are summarised in Table 5. Neither RhAA/CNF(17) nor RhAA/CNF(5) displays any hydrogenation activity. The catalyst reduced with NaBH\textsubscript{4}, on the other hand, is highly active. The activity of RhAA/CNF(17,BH) amounts to 1.1 mol H\textsubscript{2} g\textsubscript{cat}\textsuperscript{-1} h\textsuperscript{-1}. RhAA/CNF(17,DMF) shows an intermediate activity of 0.3 mol H\textsubscript{2} g\textsubscript{cat}\textsuperscript{-1} h\textsuperscript{-1}.
**Table 5. Initial activities for the different catalysts.**
| Catalyst | Initial activity [mol H\textsubscript{2} g\textsubscript{cat}\textsuperscript{-1} h\textsuperscript{-1}] |
|-------------------|-------------------------------------------------------------------------------------|
| RhAA/CNF(17) | 0.0 |
| RhAA/CNF(5) | 0.0 |
| RhAA/CNF(17,BH) | 1.1 |
| RhAA/CNF(17,DMF) | 0.3 |
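The data reduction described above (flow curve → uptake curve → initial slope per gram of catalyst) can be sketched as follows. All numbers are hypothetical, chosen only to reproduce the order of magnitude reported for RhAA/CNF(17,BH); this is not the measured data set.

```python
import numpy as np

# Hedged sketch of the data reduction: integrate a (hypothetical) measured
# H2 flow curve to an uptake curve, then take the uptake slope at t = 0,
# normalised per gram of catalyst.
t = np.linspace(0.0, 2.0, 2001)          # time (h)
flow = 0.055 * np.exp(-t / 0.5)          # H2 consumption rate (mol h^-1), assumed
m_cat = 0.05                             # catalyst mass (g), assumed

dt = t[1] - t[0]
# cumulative trapezoidal integration: uptake(t) = integral of the flow
uptake = np.concatenate(([0.0], np.cumsum(0.5 * (flow[1:] + flow[:-1]) * dt)))

# normalised initial activity, i.e. the uptake slope at t = 0
initial_activity = (uptake[1] - uptake[0]) / dt / m_cat   # mol H2 g_cat^-1 h^-1
```
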
**Discussion**
**Infrared spectroscopy:** The appearance of the aromatic C–H bending vibration of anthranilic acid at 746 cm\textsuperscript{-1} and the strong decrease in intensity of the C=O stretching band at 1724 cm\textsuperscript{-1} in the IR spectrum of CNF-AA strongly indicate that anthranilic acid has been bonded to the carboxyl groups of the oxidised carbon nanofibres through an amide linkage. Reaction of anthranilic acid through the carboxyl group is highly unlikely because it would result in an acyclic acid anhydride which is very unstable.\textsuperscript{[40]} Unfortunately, no amide C=O stretching vibration could be observed. This band would be expected to be located at \( \approx 1650 \) cm\textsuperscript{-1}.\textsuperscript{[24, 26, 27, 37]} In this region, the adsorbed water peak and bands originating from anthranilic acid occur as well and, therefore, no clear statement concerning the presence or absence of this vibration can be made.
The IR spectrum of CNF-SOCl\textsubscript{2} is identical to that of oxidised fibres. No evidence for acid chloride groups could be found. However, it is possible that the acid chloride groups are not stable and that they are converted back to carboxylic acid groups under the influence of water vapour in the air. Another explanation could be that anthranilic acid reacts directly with the carboxyl groups of the CNFs under the conditions used (use of pure molten AA). To investigate this, oxidised
carbon nanofibres were treated with pure molten anthranilic acid without prior treatment with SOCl$_2$. The IR spectrum of these CNFs is very similar to that of CNF-AA. In other words, the anthranilic acid 746 cm$^{-1}$ band is clearly visible and the 1724 cm$^{-1}$ C=O stretching vibration of the carboxyl group has almost disappeared. This result proves that anthranilic acid can react directly with the CNF carboxyl groups. This reaction was also investigated by Pittman et al.,[41] who treated HNO$_3$-oxidised polyacrylonitrile (PAN) carbon fibres with tetraethylenepentamine at 190 °C. An esterification reaction between the carboxyl group of anthranilic acid and a hydroxyl group on the surface of the oxidised CNFs is also possible; however, this would not lead to a strong decrease in intensity of the 1724 cm$^{-1}$ C=O stretching band. In conclusion, water-sensitive acid chloride groups may be formed on the surface of the CNFs, but their formation is not necessary to achieve the bonding of AA to carbon nanofibres.
**X-ray photoelectron spectroscopy (XPS):** The N$_{1s}$ XPS spectra show the appearance of nitrogen on the surface of the CNFs after reaction with anthranilic acid. This nitrogen atom is not removed by any of the further treatments that are performed with the CNF-AA fibres (complexation of Rh, catalytic reaction and/or reduction with NaBH$_4$), indicating that it must be tightly bound to the carbon nanofibre surface. XPS also shows that, after treatment with SOCl$_2$, the fibres should be dried under vacuum for a long period of time in order to remove all DMF from the surface of the fibres. The effect of traces of DMF on the surface of the CNFs is dealt with later in this section.
The XPS Cl$_{2p}$ results also provide valuable information on the subsequent immobilisation sequence. Chlorine appears on the surface of the fibres after reaction with SOCl$_2$ and disappears again after the bonding of anthranilic acid. This suggests that acid chloride groups have been formed that are subsequently used for the bonding of AA. If the acid chloride groups are unstable in air, the XPS Cl signal then probably originates from HCl that is formed during the decomposition reaction. After complexation of Rh, chlorine is present again, implying that this element is present near the rhodium.
As has previously been shown and discussed, the C$_{1s}$ peak did not change shape upon oxidation of the CNFs.[13] Furthermore, none of the other chemical modifications produced a change in the C$_{1s}$ peak shape. This observation can be explained by taking into account the XPS probing depth of $\approx$ 2 nm for carbon materials. This means that a large fraction of the carbon atoms detected by XPS are situated below the surface of the carbon nanofibres. These atoms are not modified and, hence, do not contribute to a change in the shape of the C$_{1s}$ peak.
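The attenuation argument can be quantified with a back-of-envelope estimate. This is our own sketch: the ~2 nm probing depth quoted above is taken as the attenuation length in an exponential attenuation model, and the thickness of the outermost carbon layer (0.34 nm, the graphite interlayer spacing) is an assumed value.

```python
import math

# Back-of-envelope estimate: with exponential attenuation exp(-z/lambda),
# the fraction of the C 1s signal that originates from the outermost
# carbon layer is small, so surface functionalisation barely changes the
# overall C 1s line shape. Both numbers below are assumptions/estimates.
lam = 2.0            # nm, probing depth quoted in the text, used as attenuation length
d_layer = 0.34       # nm, outermost carbon layer thickness (graphite spacing, assumed)
frac_surface = 1.0 - math.exp(-d_layer / lam)   # fraction of signal from the top layer, ~0.16
```
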
**X-ray absorption fine structure (XAFS) spectroscopy:** The outcome of the analysis of the EXAFS data of RhAA/CNF(17) demonstrates that the central rhodium atom has six nearest neighbours (typical for Rh$^{III}$) and that one of these neighbours is a chlorine atom. Furthermore, the EXAFS fit reveals an interaction with three carbon atoms at a longer distance. The EXAFS results imply that, for reasons of charge compensation of the Rh$^{III}$, the carboxyl group as well as the nitrogen atom of the former amine group of anthranilic acid are ionised. The three carbon atoms may originate from the benzene ring of anthranilic acid or from the carbon nanofibre support.
The EXAFS data also shed more light on the structure of the NaBH$_4$-reduced catalyst RhAA/CNF(17,BH). Because no reduction prior to the EXAFS measurement was carried out, the result of the analysis of the EXAFS data indicates the presence of small rhodium particles that are partly oxidised. The particles probably have a core of rhodium metal surrounded by rhodium oxide. As the Rh atoms also interact with the carbon support, it is most likely that the rhodium particles are semi-spherical. Because the rhodium particles are partially oxidised, it is not possible to directly estimate the particle size from the coordination number. However, the particles should be small, because an interaction with the support is still observed and the catalyst displays a high hydrogenation activity. We roughly estimate the particle size to lie in the range 1.5–2 nm.
The complex RhAA/CNF(5) contains some Rh–Rh interactions. As this complex probably contains a mixture of different structures, a reliable interpretation of the EXAFS data cannot be given. The RhAA/CNF(17,DMF) catalyst consists of Rh particles that have an oxidised outer surface. These particles are fairly large: no interaction with the CNF support can be distinguished.
**General discussion:** All characterisation techniques agree very well with each other. IR spectroscopy showed the presence of an amide bond between the carboxyl groups on the surface of the carbon nanofibres and anthranilic acid, whereas the XPS N$_{1s}$ region showed the presence of tightly bound nitrogen on the CNF surface after attachment of AA. Furthermore, XPS showed that the rhodium in the as-synthesised RhAA/CNF(17) complex is in the trivalent state and that reduction with NaBH$_4$ leads to the formation of zerovalent Rh. The XANES results are in full agreement with these observations.
Although the EXAFS data provide very valuable information about the nature and number of neighbours of Rh and their separation, a structure is not easily visualised. Therefore, the molecular modelling program was used. The constructed model of RhAA/CNF(17) agrees very well with the EXAFS results. The six nearest neighbours have distances that are in agreement with the EXAFS data. More importantly, the three nearest carbon atoms in the model have an average distance that matches the EXAFS distance. As one of these carbon atoms belongs to the former carboxyl group of the carbon nanofibre, this is another strong indication for the covalent bonding of the complex to the fibres. The model also shows that, although the complex is closely bonded to the CNF surface and steric hindrance is expected, the Rh in the Rh/anthranilic acid complex is able to obtain a regular octahedral environment.
The catalytic data show that when the Rh in the Rh/anthranilic acid complex is in the trivalent oxidation state, no catalytic activity is observed. Only after formation of rhodium metal particles is the catalyst active in the hydrogenation of
cyclohexene. It is known that Rh\(^{I}\) is able to perform the oxidative addition/reductive elimination cycle that is required for catalysis of hydrogenation reactions by metal complexes. Therefore, we attempted to reduce the Rh\(^{III}\) in RhAA/CNF(17) to the univalent oxidation state by the action of NaBH\(_4\). This reducing agent has already been used for this purpose with the polystyrene-immobilised Rh\(^{III}\)–AA complex.\(^{[8]}\) The XPS and XANES data have shown that a large part of the rhodium in RhAA/CNF(17,BH) has become Rh\(^0\) after reduction. Clearly, the reduction of the Rh\(^{III}\) to Rh\(^{I}\) cannot be achieved with NaBH\(_4\). Its reducing properties are probably too strong. Therefore, the activation of the immobilised Rh/anthranilic acid complex should be executed differently. One option could be the use of a milder reducing agent than NaBH\(_4\). Another alternative might be to introduce the rhodium already as Rh\(^{I}\) and not as Rh\(^{III}\). A risky reduction step would then be circumvented.
As was already demonstrated above with XPS, treatment of the oxidised fibres with SOCl\(_2\)/DMF results in the presence of DMF on the fibres if the drying procedure under vacuum is carried out for only a few hours. The final complex RhAA/CNF(5) contains some Rh–Rh interactions. Likewise, the attempt to use a solution of anthranilic acid in DMF for the attachment of this ligand to the fibres ultimately leads to the formation of large Rh particles (RhAA/CNF(17,DMF)). As Rh\(^0\) is formed in both cases, we postulate that zerovalent rhodium is formed as a result of the reducing properties of DMF. When DMF is present on the surface of the carbon nanofibres, some or all of the rhodium chloride is reduced to Rh\(^0\). Only when DMF is totally removed, which presumably was achieved with the RhAA/CNF(17) complex after evacuation for 17 h, can the rhodium fully coordinate to anthranilic acid.
**Conclusion**
The immobilisation of Rh/anthranilic acid onto fishbone carbon nanofibres was executed by means of 1) surface oxidation of the fibres, 2) conversion of the carboxyl groups into acid chloride groups, 3) attachment of anthranilic acid and 4) complexation to rhodium(III). We have demonstrated that anthranilic acid bonds to the CNFs by an amide linkage with carboxyl groups that are present after surface oxidation of the fibres. The immobilised anthranilic acid coordinates to rhodium through the nitrogen atom and the carboxyl group. The coordination sphere of the trivalent Rh atom is further occupied by three water molecules and a chloride ion. To the best of our knowledge, this is the first example of a metal–ligand complex immobilised by covalent bonding on carbon nanofibres or carbon nanotubes.
The as-synthesised complex is not active in the hydrogenation of cyclohexene. Therefore, we reduced the rhodium with sodium borohydride in order to obtain monovalent rhodium. Rh\(^{I}\) can participate in the oxidative addition/reductive elimination cycle that is required for catalysis. However, the reduction procedure with NaBH\(_4\) results in the formation of small rhodium particles with an estimated diameter of 1.5–2 nm on the CNFs. The hydrogenation activity of these particles is very high. We conclude that another activation procedure for the immobilised Rh/anthranilic acid system must be designed, for example, reduction with a milder reducing agent or complexation of the rhodium in the Rh\(^{I}\) state. Solvents with reducing properties should be used carefully during the immobilisation sequence. We have shown that traces of DMF on the surface of the carbon nanofibres already result in the formation of some zerovalent rhodium. The use of anthranilic acid in DMF also yields rhodium metal particles. The formation of acid chloride groups by the action of SOCl\(_2\) is not necessary to bring about the bonding of anthranilic acid to the CNFs. Under the conditions used, anthranilic acid can react directly with the carboxyl groups of the fibres.
**Experimental Section**
**Growth of fishbone carbon nanofibres:** Carbon nanofibres of the fishbone type were produced by catalytic decomposition of CH\(_4\) on a Ni/Al\(_2\)O\(_3\) catalyst.\(^{[42–48]}\) The Ni/Al\(_2\)O\(_3\) catalyst with 30 wt% Ni metal loading was synthesised by the deposition–precipitation technique:\(^{[49]}\) alumina (Alon-C, Degussa) was suspended in an acidified aqueous solution of nickel nitrate (Acros, 99%) and diluted ammonia was injected over a period of 2 h at room temperature under vigorous stirring until the pH had reached a value of 8.5. The mixture was stirred overnight, and the resulting suspension was filtered, washed and dried at 120 °C. Finally, the catalyst was calcined at 600 °C in stagnant air for 3 h.
For the production of fishbone CNFs, the 30 wt% Ni/Al\(_2\)O\(_3\) catalyst (0.5 g) was reduced at 600 °C in 14% H\(_2\)/N\(_2\) (flow rate 350 mL·min\(^{-1}\)) in a vertical tubular reactor (diameter 3 cm) for 2 h. After decreasing the temperature to 570 °C, methane (50% in N\(_2\), flow rate 450 mL·min\(^{-1}\)) was passed through the catalyst bed for 6.5 h. The yield of fibres amounted to approximately 12 g.
**Immobilisation of Rh/anthranilic acid onto fishbone carbon nanofibres:** Scheme 1 illustrates the experimental procedures used for the immobilisation of Rh/anthranilic acid onto fishbone CNFs: 1) The carbon nanofibres were first oxidised in a mineral acid mixture in order to create carboxyl groups on the surface and to remove the original Ni/Al\(_2\)O\(_3\) growth catalyst. 2) These carboxyl groups were converted into acid chloride groups by the action of thionyl chloride and DMF. 3) Anthranilic acid was bonded to the fibres by reaction with these acid chlorides. 4) Subsequently, the immobilised anthranilic acid was complexed to rhodium by refluxing the fibres in an aqueous solution of rhodium(III) chloride. Finally, the Rh/anthranilic acid complex was reduced with sodium borohydride (procedure not shown in Scheme 1). Steps 2 and 3 (generation of acid chloride groups and attachment of anthranilic acid) were carried out under a nitrogen atmosphere with standard Schlenk techniques.
The fishbone carbon nanofibres were oxidised in a mixture of concentrated nitric and sulfuric acid. The CNFs (5 g) were boiled under reflux in HNO\(_3\)/H\(_2\)SO\(_4\) 1:1 for 30 min (80 mL; HNO\(_3\): Lamers & Pleuger, 65%, pure;
H$_2$SO$_4$: Merck, 95–97%, p.a.). Upon cooling and dilution with demineralised water, the suspension was filtered over a Teflon membrane filter (pore diameter of 0.2 µm), washed with demineralised water until the washings showed no significant acidity and dried at 120 °C for 16 h.
Oxidised fibres (3 g) were treated with SOCl$_2$/DMF (50:1, 50 mL) by boiling under reflux for 24 h in a N$_2$ atmosphere (SOCl$_2$: Acros, > 99.5%; DMF: Merck, p.a.). After cooling, the reaction mixture was decanted from the fibres, which were dried under vacuum at 40 °C for 5 h. Another batch of fibres was dried under vacuum for 17 h at room temperature to make sure that all DMF was removed from the SOCl$_2$-treated fibres.
Anthranilic acid (20 g; Acros, > 98%) was dried by vacuum suction at 40 °C for 2 h and transferred under N$_2$ to the SOCl$_2$-treated fibres. This mixture was kept above the melting point of anthranilic acid (144–148 °C) for 96 h. Upon cooling, the resulting purple suspension was filtered over a Teflon membrane filter (pore diameter 0.2 µm), thoroughly washed with ethanol, and dried under vacuum at room temperature for 17 h. Alternatively, the SOCl$_2$-treated fibres were treated with a solution of anthranilic acid in DMF: the fibres were boiled under reflux for 96 h in a 1 M solution of dried anthranilic acid in molecular sieve-dried DMF (20 mL). These fibres were washed with DMF and treated identically in subsequent experiments.
Anthranilic acid-reacted fibres (2 g) were dispersed in demineralised water (30 mL). RhCl$_3$·2H$_2$O (48 mg, 0.2 mmol) was added, and the suspension was boiled under reflux for 72 h. Upon cooling, the Rh-loaded fibres (max. 1 wt %) were separated from the colourless water fraction by filtration. The fibres were washed with demineralised water and dried under vacuum at room temperature for 17 h.
Reduction of the immobilised Rh/AA complex was carried out by dispersing Rh/AA/CNFs (1 g) in ethanol (10 mL; Merck, p.a.) and by adding NaBH$_4$ (55 mg; Alfa, 99%) and ethanol (10 mL). The evolution of gas indicated an immediate reaction. After stirring for 30 minutes at room temperature, the reduced catalyst was filtered from the colourless solution, washed with ethanol and dried under vacuum at room temperature for 17 h.
**Characterisation:** Characterisation of the samples was carried out by infrared spectroscopy (IR), X-ray photoelectron spectroscopy (XPS), X-ray absorption fine structure (XAFS) spectroscopy, and by molecular modelling.
**Infrared spectroscopy:** Transmission IR spectra were recorded on a Perkin–Elmer 2000 spectrometer equipped with an air dryer to remove water vapour and carbon dioxide. A total of 100 scans were co-added at a resolution of 8 cm$^{-1}$ with boxcar apodisation. Samples were prepared by thoroughly mixing a small amount of ground nanofibres with pre-dried KBr. Tablets were pressed at 4 tons/cm$^2$ under vacuum for 2 min. The concentration of the nanofibres ranged from 0.1 to 1% (m/m). All transmission spectra were baseline corrected.
**X-ray photoelectron spectroscopy (XPS):** The XPS data were obtained with a Vacuum Generators XPS system and a CLAM-2 hemispherical analyser for electron detection. Non-monochromatic Al$_{K\alpha}$ X-ray radiation was obtained by means of an anode current of 20 mA at 10 keV. The pass energy of the analyser was set at 20 eV.
Because of the overlap of the Rh 3d signal with the broad C 1s peak, the C 1s signal of CNF-AA was subtracted from the C 1s/Rh 3d region of the rhodium-loaded catalysts, thus isolating the Rh signal. It was ascertained that the error in the intensity of the subtracted spectrum at the position of the maximum of the C 1s peak was not more than the intensity of the Rh signal and that the baseline exhibited zero intensity after subtraction.
**X-ray absorption fine structure (XAFS) spectroscopy**
**XAFS data collection:** XAFS spectra at the Rh K edge (23220 eV) were obtained at the HASYLAB (Hamburg, Germany) synchrotron beamline X11.1, which was equipped with a Si(311) double-crystal monochromator. The monochromator was detuned to 50% of the maximum intensity to minimise the presence of higher harmonics. All measurements were performed in transmission mode with ionisation chambers filled with an Ar/N$_2$ mixture to give an absorbance of 20 and 80% in the first and second ionisation chamber, respectively.
The CNF samples (300 mg) were pressed into self-supporting wafers and mounted into an in-situ EXAFS cell equipped with Be windows.[47] Because of the low concentration of Rh (1 wt %) and the low absorption by carbon, the calculated edge step was about 0.2. The Rh$_2$O$_3$ and RhCl$_3$ reference compounds (Aldrich, 99.8% and 98%, respectively) were diluted with boron nitride (Alfa, 99.8%) and pressed into self-supporting wafers (calculated edge step 1). The rhodium reference foil (Aldrich, 99.9%, 25 µm thickness) was mounted on the sample holder with Kapton tape. The EXAFS cell was flushed with purified and dried He for 15 minutes and cooled down to liquid nitrogen temperature prior to measurement.
**XAFS data processing:** Extraction of the EXAFS data from the measured absorption spectra was performed with the XDAP code developed by Vaarkamp et al.[48] Two or three scans were averaged and the pre-edge was subtracted by means of a modified Victoreen curve.[49] The background was subtracted by means of cubic spline routines with a continuously adjustable smooth parameter.[50, 50] Normalisation was performed by dividing the data by the height of the absorption edge at 50 eV, leading to normalised EXAFS data.
**EXAFS phase shifts and backscattering amplitudes:** Data for phase shifts and backscattering amplitudes were obtained from the reference compounds: Rh foil for Rh–Rh, RhCl$_3$ for Rh–Cl, and Rh$_2$O$_3$ for Rh–O, Rh–C and Rh–N contributions.[50] Table 6 gives the crystallographic data and the forward and inverse Fourier transform ranges used to create the EXAFS references. Both the reference spectra and the samples were measured at liquid nitrogen temperature. Consequently, no temperature effect has to be included in the difference in Debye–Waller factor ($\Delta \sigma^2$) between sample and reference as obtained in the EXAFS data analysis.
| Atom pair | Ref. compd | Ref. | $N^{ref}$ | $R^{ref}$ [Å] | $k^n$ | Forward FT $\Delta K$ [Å$^{-1}$] | Inverse FT $\Delta R$ [Å] |
|-----------|------------|------|-----------|--------------|-------|----------------------------------|--------------------------|
| Rh – Rh | Rh foil | [51] | 12 | 2.69 | 3 | 2.7–22.7 | 1.4–3.1 |
| Rh – O | Rh$_2$O$_3$[a] | [52] | 6 | 2.05 | 1 | 3.7–16.3 | 0.0–2.3 |
| Rh – Cl | RhCl$_3$ | [53] | 6 | 2.30 | 1 | 3.1–18.0 | 0.0–2.9 |
[a] It was established with XRD that the Rh$_2$O$_3$ was in the hexagonal modification.
**R space fitting, difference file technique and weight factor $k^n$:** The EXAFS data-analysis program XDAP allows one to perform multiple-shell fitting in $R$ space by minimising the residuals between both the magnitude and the imaginary part of the Fourier transforms of the data and the fit. $R$ space fitting has important advantages compared to the generally applied fitting in $k$ space and is extensively discussed in a recent paper by Koningsberger et al.[50]
The difference file technique was applied together with phase-corrected Fourier transforms to resolve the different contributions in the EXAFS data.[50] The difference file technique allows one to optimise each individual contribution with respect to the other contributions present in the EXAFS spectrum. If the experimental spectrum is composed of different contributions then Equation (1) is valid:
$$\text{Exp.Data} = \sum_{i=1}^{N} (\text{Fit}_i), \quad (1)$$
whereby $(\text{Fit}_i)$ represents the fitted contribution of coordination shell $i$. For each individual contribution, Equation (2) should then be valid:

$$(\text{Fit}_i) = \text{Exp.Data} - \sum_{\substack{j=1 \\ j \neq i}}^{N} (\text{Fit}_j), \quad (2)$$
The right-hand side of this equation is further denoted as the difference file of shell $i$. A good fit is only obtained if the total fit and each individual contributing coordination shell correctly describe the experimental EXAFS and the difference file, respectively. In this way not only the total EXAFS fit, but also the individual fits of all separate contributions can be determined reliably.
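As an illustration of Equations (1) and (2), the following Python sketch builds synthetic "experimental" data from two toy shell contributions and checks that the difference file of one shell reproduces that shell. This is a schematic stand-in, not the XDAP implementation: the single-shell function omits real backscattering amplitudes, phase shifts and mean-free-path terms.

```python
import math

# Toy single-shell EXAFS contribution; real amplitudes and phase shifts
# are omitted, so this is schematic only.
def shell(k_grid, N, R, sigma2):
    return [N * math.exp(-2 * sigma2 * k**2) * math.sin(2 * k * R) / (k * R**2)
            for k in k_grid]

k_grid = [2.0 + 0.05 * i for i in range(200)]        # k in 1/Angstrom

fits = [shell(k_grid, 6, 2.05, 0.003),               # e.g. a Rh-O shell
        shell(k_grid, 2, 2.69, 0.002)]               # e.g. a Rh-Rh shell
exp_data = [sum(vals) for vals in zip(*fits)]        # Eq. (1): data = sum of fits

# Eq. (2): the difference file of shell 0 is the data minus all other
# fitted shells, and should coincide with fitted shell 0 itself.
diff0 = [d - f1 for d, f1 in zip(exp_data, fits[1])]
max_err = max(abs(a - b) for a, b in zip(diff0, fits[0]))
```

In a real analysis, each difference file would be compared with the corresponding fitted shell (and with the noise level) rather than being identical by construction.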
Both high Z (e.g. metal–metal) and low Z (e.g. metal–oxygen) contributions are present in the EXAFS data collected on metal complexes dispersed on supports of a high surface area. The low Z contributions may arise from the ligands and/or from the support. A $k^n$ weighting emphasises
the high $Z$ contributions to the spectrum, since high $Z$ elements have more scattering power at high values of $k$ than low $Z$ elements. However, the use of a $k^3$-weighted EXAFS spectrum or Fourier transform makes the analysis much less sensitive to the presence of low $Z$ contributions in the EXAFS data. In this study, the EXAFS fits have been checked by applying $k^1$, $k^2$ and $k^3$ weightings in order to be certain that the results are the same for all weightings.
**Number of independent data points, variance and statistical significance**: The number of independent data points ($N_{\text{indp}}$) was determined as outlined in the “Reports on Standards and Criteria in XAFS Spectroscopy” [Eq. (3)]:
$$N_{\text{indp}} = \frac{2\Delta k \Delta R}{\pi} + 2$$
(3)
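Plugging in the Rh–Rh Fourier-transform ranges from Table 6 gives a quick worked example of Equation (3) (illustration only):

```python
import math

# Worked example of Eq. (3) for the Rh-Rh reference ranges of Table 6:
# delta k = 22.7 - 2.7 = 20.0 1/A and delta R = 3.1 - 1.4 = 1.7 A.
def n_indp(delta_k, delta_r):
    return 2 * delta_k * delta_r / math.pi + 2

n = n_indp(22.7 - 2.7, 3.1 - 1.4)   # roughly 23.6 independent data points
```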
The variances of the magnitude and imaginary part of the Fourier transforms of fit and data were calculated according to Equation (4):
$$\text{Variance} = \frac{\int \left[k^n \left(\text{FT}_{\text{model}}(R) - \text{FT}_{\text{exp}}(R)\right)\right]^2 \, dR}{\int \left[k^n\, \text{FT}_{\text{exp}}(R)\right]^2 \, dR} \times 100$$
(4)
In this study the statistical significance of a contribution has been checked by a comparison of the amplitude of (Fit) with the noise level present in the difference file (the noise in the difference file is essentially the same as the noise in the experimental data).
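A rectangle-rule discretisation of the variance criterion of Equation (4) can be sketched as follows; the sampled transforms are invented and the $k$-weighting is assumed to have been applied to them beforehand:

```python
# Rectangle-rule discretisation of Eq. (4); ft_model and ft_exp are
# sampled (already k-weighted) Fourier transforms on an R grid with
# spacing dR.
def variance(ft_model, ft_exp, dR):
    num = sum((m - e) ** 2 for m, e in zip(ft_model, ft_exp)) * dR
    den = sum(e ** 2 for e in ft_exp) * dR
    return num / den * 100.0

ft_exp = [1.0, 2.0, 3.0, 2.0, 1.0]        # invented sampled transform
ft_model = [1.1 * v for v in ft_exp]      # uniform 10% amplitude error
v = variance(ft_model, ft_exp, dR=0.02)   # 10% amplitude error -> 1% variance
```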
**Molecular modelling**: Molecular modelling was executed with the force-field based program Cerius$^2$. The universal force-field routine was used. A (part of a) carbon nanofibre was simulated by taking several unit cells of graphite.
**Catalytic test experiments**: The catalytic activity of the immobilised Rh complexes in the hydrogenation of cyclohexene was tested in a semi-batch slurry reactor, operated at a constant pressure of 1200 mbar H$_2$. The thermostated, double-walled reaction vessel was equipped with vertical baffles and a gas-tight stirrer with a hollow shaft and blades for gas recirculation. The stirrer was operated at 2000 rpm. During the reaction, the hydrogen consumption was automatically monitored by a mass flow meter. It was assured that, under the conditions used, the rate of dissolution of H$_2$ in the solvent was higher than the maximum measurable rate of H$_2$ uptake.
In a typical experiment, the reaction vessel was filled with catalyst (100 mg) and ethanol (100 mL; Merck p.a.). The reactor was then thermostatted at 25°C, evacuated, filled with H$_2$, and pressurised. The reaction was started by injection of 1 mL cyclohexene (Acros 99%) with a syringe.
**Acknowledgements**
We wish to acknowledge A. Mens for recording the XPS spectra and Dr. O. Gijzeman for discussions concerning the XPS data. We would like to thank HASYLAB (Hamburg, Germany) for the opportunity to perform the EXAFS measurements at beamline station XI.1. We are grateful to the EXAFS measurement team for collecting the EXAFS data. This work was supported by the Netherlands’ Organisation for Scientific Research (NWO).
---
[1] C. U. Pittman, Jr. in *Comprehensive Organometallic Chemistry, Vol. 8* (Eds.: G. Wilkinson, F. G. A. Stone, E. W. Abel), Pergamon, Oxford, 1982, p. 553.
[2] F. R. Hartley, *Supported Metal Complexes*, Reidel, Dordrecht, 1985.
[3] Yu. I. Yermakov, L. N. Arzamaskova, *Stud. Surf. Sci. Catal.* 1986, 27, 459.
[4] A. Choplin, F. Quignard, *Coord. Chem. Rev.* 1998, 180, 1677.
[5] N. L. Holy, *Tetrahedron Lett.* 1977, 42, 3703.
[6] N. L. Holy, *Fundam. Res. Homogeneous Catal.* 1979, 3, 691.
[7] N. L. Holy, *CHEMTech* 1980, 366.
[8] O. N. Efimov, M. L. Khidekel’, V. A. Avilov, P. S. Chekrii, O. N. Eremenko, A. G. Ovcharenko, *Russ. J. Gen. Chem. USSR* 1968, 38, 2581.
[9] V. A. Avilov, Yu. B. Borod’ko, V. B. Panov, M. L. Khidekel’, P. S. Chekrii, *Kinet. Catal.* 1968, 9, 582.
[10] O. N. Efimov, O. N. Eremenko, A. G. Ovcharenko, M. L. Khidekel’, P. S. Chekrii, *Bull. Acad. Sci. USSR, Div. Chem. Sci.* 1969, 778.
[11] V. A. Avilov, M. L. Khidekel’, O. N. Eremenko, O. N. Efimov, A. G. Ovcharenko, P. S. Chekry, *US patent* 1973, 3755194.
[12] E. N. Frankel, J. P. Friedrich, T. R. Bessler, W. F. Kwolke, N. L. Holy, *J. Am. Oil Chem. Soc.* 1980, 349.
[13] T. G. Ros, A. J. van Dillen, J. W. Geus, D. C. Koningsberger, *Chem. Eur. J.* 2002, 8, 1151.
[14] K. P. de Jong, J. W. Geus, *Catal. Rev.-Sci. Eng.* 2000, 42, 481.
[15] A. A. Keterling, A. S. Lisitsyn, V. A. Likhlobov, A. A. Gall’, A.S Trachum, *Kinet. Catal.* 1990, 31, 1273.
[16] Yu. Yu. Volodin, A. T. Teleshev, V. V. Morozova, A. V. Tolkachev, Yu. S. Mardashhev, E. E. Nifantyev, *Russ. Chem. Bull.* 1999, 48, 899.
[17] Y. Zhang, H.-B. Zhang, G.-D. Lin, P. Chen, Y.-Z. Yuan, K. R. Tsai, *Appl. Catal. A* 1999, 187, 213.
[18] H. B. Kagan, T. Yamagishi, J. C. Motte, R. Setton, *Isr. J. Chem.* 1978, 17, 274.
[19] M. V. McCabe, M. Orchin, *Fuel* 1976, 55, 266.
[20] B. F. Watkins, J. R. Behling, E. Kariv, L. L. Miller, *J. Am. Chem. Soc.* 1975, 97, 3549.
[21] M. Fujihira, A. Tamura, T. Osa, *Chem. Lett.* 1977, 361.
[22] L. Horner, W. Brich, *Liebigs Ann. Chem.* 1977, 1354.
[23] J. C. Lennox, R. W. Murray, *J. Am. Chem. Soc.* 1978, 100, 3710.
[24] T. Aotoguchi, A. Aramata, A. Kazusaka, M. Enyo, *J. Electroanal. Chem.* 1991, 318, 309.
[25] J. Liu, A. G. Riniker, H. Dai, J. H. Hafner, R. K. Bradley, P. J. Boul, A. Lu, T. Iverson, K. Shelnov, C. B. Huffman, F. Rodriguez-Macias, Y.-S. Shon, T. R. Lee, D. T. Colbert, R. E. Smalley, *Science* 1998, 280, 1253.
[26] J. Chen, M. A. Hamon, H. Hu, Y. Chen, A. M. Rao, P. C. Eklund, R. C. Haddon, *Science* 1998, 282, 95.
[27] M. A. Hamon, J. Chen, H. Hu, Y. Chen, M. E. Itkis, A. M. Rao, P. C. Eklund, R. C. Haddon, *Adv. Mater.* 1999, 11, 834.
[28] J. S. Pizey in *Synthetic Reagents, Vol. 1* (Ed.: J. S. Pizey), Ellis Horwood, Chichester, 1974, p. 321.
[29] T. G. Ros, A. J. van Dillen, J. W. Geus, D. C. Koningsberger, *ChemPhysChem* 2002, 3, 209.
[30] M. S. P. Shaffer, X. Fan, A. H. Windle, *Carbon* 1998, 36, 1603.
[31] ‘Colloids and Surfaces in Reprographic Technology’: W. M. Prest, R. A. Mosher, ACS Symp. Ser. 1982, 200, 225.
[32] K. Nakamoto, *Infrared and Raman Spectra of Inorganic and Coordination Compounds*, 4th ed., Wiley, New York, 1986, p. 124.
[33] U. Zielke, K. J. Hüttinger, W. P. Hoffman, *Carbon* 1996, 34, 983.
[34] P. Painter, M. Starsinic, M. Coleman in *Fourier Transform Infrared Spectroscopy, Applications to Chemical Systems*, Vol. 4 (Eds.: J. R. Ferraro, L. J. Basile), Academic Press, Orlando, 1985, p. 169.
[35] D. B. Mawhinney, V. Naumenko, A. Kuznetsova, J. T. Yates, *J. Am. Chem. Soc.* 2000, 122, 2383.
[36] H. P. Boehm, *Carbon* 1994, 32, 759.
[37] B. S. Furniss, A. J. Hannaford, P. W. G. Smith, A. R. Tatchell, *Vogel’s Textbook of Practical Organic Chemistry*, 5th ed., Longman, London, 1989, p. 254.
[38] J. F. Moulder, W. F. Stickle, P. E. Sobol, K. D. Bomben, *Handbook of X-ray Photoelectron Spectroscopy*, Perkin–Elmer Corporation (USA), 1992, p. 117.
[39] D. C. Koningsberger, B. L. Mojet, G. E. van Dorssen, D. E. Ramaker, *Top. Catal.* 2000, 10, 143.
[40] M. A. Fox, J. K. Whitesell, *Organic Chemistry*, 2nd ed., Jones and Bartlett, Boston, 1997, p. 629.
[41] C. U. Pittman, Jr., G.-R. He, B. Wu, S. D. Gardner, *Carbon* 1997, 35, 317.
[42] M. S. Hoogenraad, PhD Thesis, Utrecht University (NL), 1995.
[43] M. S. Hoogenraad, M. F. Onwezen, A. J. van Dillen, J. W. Geus, *Stud. Surf. Sci. Catal.* 1995, 101, 1331.
[44] J. W. Geus, M. S. Hoogenraad, A. J. van Dillen in *Synthesis and Properties of Advanced Catalytic Materials* (Eds.: E. Iglesia, P. W. Lednor, D. A. Nagaki, L. T. Thompson), Materials Research Society Pittsburgh, Pittsburgh, 1995, p. 87.
[45] W. Teunissen, PhD Thesis, Utrecht University (NL), 2000.
[46] J. W. Geus, *Stud. Surf. Sci. Catal.* 1983, 16, 1.
[47] M. Vaarkamp, B. L. Mojet, F. S. Modica, J. T. Miller, D. C. Koningsberger, *J. Phys. Chem.* **1995**, *99*, 16067.
[48] Website: http://www.xsi.nl.
[49] M. Vaarkamp, I. Dring, R. J. Oldman, E. A. Stern, D. C. Koningsberger, *Phys. Rev. B* **1994**, *50*, 7872.
[50] J. W. Cook, Jr., D. E. Sayers, *J. Appl. Phys.* **1981**, *52*, 5024.
[51] R. W. G. Wyckhoff, *Crystal Structures, Vol. I.*, 2nd ed., Wiley, New York, **1963**, p. 10.
[52] J. M. D. Coey, *Acta Crystallogr. B* **1970**, *26*, 1876.
[53] H. Bärnighausen, B. K. Handa, *J. Less-Common Metals* **1964**, *6*, 226.
[54] D. C. Koningsberger, *Jpn. J. Appl. Phys.* **1993**, *32*, 877.
Received: October 22, 2001 [F3633]
Psychoacoustic analysis of HVAC noise with equal loudness
Silke HOHLS\textsuperscript{1}; Thomas BIERMEIER\textsuperscript{2}; Ralf BLASCHKE\textsuperscript{3}; Stefan BECKER\textsuperscript{4}
\textsuperscript{1,4} University of Erlangen-Nuremberg, Germany
\textsuperscript{2,3} Audi AG, Germany
ABSTRACT
In order to guarantee maximal comfort inside vehicles, noise pollution has to be minimized. When considering developments especially in electro mobility, one major sound source - the combustion engine - is eliminated. Hence, the ancillary units as sound sources become increasingly unmasked and thus, prevalent. The sound field developed by the heating, ventilation and air conditioning system (HVAC) then essentially affects the perceptible sound field inside the car cabin. For identifying the relevant psychoacoustic parameters for assessing the sound quality of HVAC noise, a listening test, using the preference paired-comparison technique, was performed on seven sound samples of different vehicles in the defrost mode. The sounds were equalized in their loudness on an average level. Thus, the aim of this study was to analyze the correlation between the listeners’ preference and additional parameters beside the dominant parameter loudness. It was found that the sharpness, the articulation index and the roughness determine a preference decision when the loudness is eliminated from the sound samples.
Keywords: HVAC, Psychoacoustics

I-INCE Classification of Subjects Number(s): 63.7
1. INTRODUCTION
The disturbance caused by noise pollution in daily life is steadily increasing. While driving, a person is exposed to numerous sound sources within the vehicle. The main sound sources in a car are the powertrain, including the combustion engine and the transmission, the air flow around the vehicle and the noise induced by the tires, as well as ancillary units such as the heating, ventilation and air conditioning (HVAC) system. Taking into account the continuous acoustical optimization of these sound sources and new technologies, such as the start-stop system, the noise induced by the HVAC system becomes increasingly prevalent. As shown by Biermeier (1), the HVAC system in a vehicle with a combustion engine dominates the interior car noise at high frequencies, which creates the need to investigate and optimize the sound quality of HVAC noise.
The perception of sounds, and hence of sound quality, is subjective. A sound is considered noise if it disrupts personal quiet and well-being. This in turn depends on personal and cultural characteristics, as well as on the situation and mood in which the respective sound is perceived. Thus, the acoustical optimization of HVAC noise cannot be accomplished by using the overall sound pressure level as the only evaluation criterion. The aim is to develop an HVAC sound that is perceived as pleasant by the occupants and hence to reduce noise pollution. In order to reach this goal, suitable evaluation criteria have to be identified.
In former investigations, the acoustic parameter articulation index as well as the psychoacoustic parameters loudness, sharpness and roughness were derived as relevant evaluation criteria for assessing HVAC noise. This was shown by analyzing preference values and measures of acoustic as well as psychoacoustic parameters in (2). In that study, the loudness was identified as the dominant evaluation criterion, which means that the perception of other characteristics within the sounds was possibly masked. Moreover, other psychoacoustic parameters might be influenced by the loudness. Figure 1 shows examples of possible interdependencies of loudness and roughness. Figure 1a shows the identified correlation between roughness and the measured sound quality in terms of listeners’ preference. In addition, figure 1b shows the interdependencies between loudness and roughness for the class of HVAC noise. It becomes obvious that the roughness decreases with increasing loudness, which is caused by the mechanisms of sound generation in the HVAC system. Thus, the cause-effect relationship between roughness and perceived sound quality needs further investigation. In this study, different HVAC sounds with equal loudness were investigated using listening tests.
2. EXPERIMENTAL SETUP
In order to investigate subjective preferences concerning HVAC noise, a listening test was performed with seven sound samples (A-G), which were recorded in seven different vehicles of the same class. The chosen operating mode was a defrosting mode with a mass flow of approximately 7 kg/min. After analyzing the individual loudness $N$ of each test sound, an average loudness level of $N = 17.5$ sone was derived. All sound samples were equalized to this loudness by iteratively scaling the time signal.
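The iterative scaling towards a target loudness might be sketched as follows. Here `toy_loudness` is an invented stand-in (loudness proportional to a power of the RMS amplitude); a real application would use a Zwicker/ISO 532 loudness model, and the damped gain update is an assumption, not the authors' procedure.

```python
# Invented loudness stand-in: loudness grows with rms amplitude ** 0.6.
def toy_loudness(sig):
    rms = (sum(s * s for s in sig) / len(sig)) ** 0.5
    return 25.0 * rms ** 0.6

# Iteratively scale the time signal until the model loudness matches
# the target value (here 17.5 sone, as in the listening test).
def equalise_loudness(signal, target_sone, loudness_model, tol=0.01):
    gain = 1.0
    for _ in range(50):
        current = loudness_model([gain * s for s in signal])
        if abs(current - target_sone) < tol:
            break
        gain *= (target_sone / current) ** 0.5   # damped multiplicative update
    return gain

sig = [0.3, -0.2, 0.25, -0.35, 0.1] * 100        # invented signal
g = equalise_loudness(sig, 17.5, toy_loudness)   # gain towards 17.5 sone
```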
Due to the equal loudness of the test sounds, the hearing impression of the sounds becomes more similar. In order to evaluate the differences between the sounds, the test methodology has to be as easy as possible for the test persons. Hence, the preference paired-comparison technique was used.
The listening test was performed with 34 test persons (average age 28.5 years; standard deviation 4.9 years) under laboratory conditions. The test sounds were presented via calibrated headphones.
3. RESULTS
The ordinal-scaled data that result from preference judgements firstly were transformed into interval-scaled data using the Law of Comparative Judgement (LCJ) by Thurstone (3) and into a ratio scale using the Bradley-Terry-Luce Model (BTL) (4, 5). In the scale according to LCJ, the differences between the values can be interpreted while the BTL scale has an absolute zero and therefore, the ratio between scale values are interpretable. For both methods, high values are in line with high preference and vice versa.
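A minimal sketch of deriving BTL strengths from a paired-comparison win matrix is given below, using Zermelo's classical iteration for the Bradley-Terry maximum-likelihood estimate. The win counts are invented for illustration and are not the study's data.

```python
# W[i][j] counts how often sound i was preferred over sound j in the
# paired comparison; counts invented for illustration.
W = [[0, 30, 20],
     [4, 0, 12],
     [14, 22, 0]]
n = len(W)

# Zermelo's iteration for the Bradley-Terry maximum-likelihood strengths.
p = [1.0] * n
for _ in range(200):
    for i in range(n):
        wins = sum(W[i])
        denom = sum((W[i][j] + W[j][i]) / (p[i] + p[j])
                    for j in range(n) if j != i)
        p[i] = wins / denom
    total = sum(p)
    p = [v / total for v in p]   # BTL strengths form a ratio scale
```

Because the strengths are normalised, ratios between the entries of `p` are directly interpretable, matching the ratio-scale property of the BTL model described above.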
In a first step, an analysis of the applicability of the sound pressure level for quality judgements was carried out. The results are summarized in table 1. It shows the values of loudness, the sound pressure level - unweighted (SPL) and A-weighted (SPL(A)) - as well as the preference values according to LCJ and BTL for all sound samples A-G. The corresponding rank order of each measure is displayed right beside it. Rank 1 marks the best and rank 7 the worst test sound according to the corresponding measure.
As mentioned above, all test sounds were equalized to an average loudness of 17.5 sone. Although all sounds belong to the same sound class of HVAC noise and thus have similar frequency spectra, a difference of 5.5 dB exists between the sounds with the highest and lowest unweighted SPL. The difference between the maximal and minimal A-weighted SPL is only 1.3 dB(A); hence, the A-weighted SPL is more suitable for evaluating the sound quality of HVAC noise. Nevertheless, when comparing the rank orders of the A-weighted SPL and the preference values derived from the judgements in the paired comparison (LCJ and BTL), a discrepancy is noticeable. This indicates that neither SPL measure is a reliable parameter for evaluating the sound quality of HVAC noise.
The rank orders derived from the preference values of the LCJ and the BTL model are identical. As shown in figure 2a, a further correlation analysis between both scales confirms a high correlation. Hence, both models deliver the same results and are equally applicable for analyzing correlations between psychoacoustic parameters and subjective preference. For better comparability with former results, the preference values from the LCJ, as shown in figure 2b, were used. It is remarkable that six of the seven test sounds have preference values in the lower half of the scale, while test sound F has a much higher preference.

Table 1 – Summary of different measures for test sounds A-G and corresponding rank orders.

| | loudness /sone | SPL /dB | rank order | SPL /dB(A) | rank order | preference LCJ | rank order | preference BTL | rank order |
|---|----------------|---------|------------|------------|------------|----------------|------------|----------------|------------|
| A | 17.5 | 72.8 | 4 | 63.7 | 6 | 4.60 | 3 | 1.19 | 3 |
| B | 17.5 | 72.7 | 3 | 63.2 | 3 | 0.00 | 7 | 0.58 | 7 |
| C | 17.5 | 70.3 | 1 | 63.4 | 4 | 2.74 | 5 | 0.88 | 5 |
| D | 17.5 | 72.2 | 2 | 62.4 | 1 | 2.97 | 4 | 0.91 | 4 |
| E | 17.5 | 73.3 | 5 | 63.7 | 6 | 5.69 | 2 | 1.40 | 2 |
| F | 17.5 | 75.8 | 7 | 63.5 | 5 | 10.00 | 1 | 2.77 | 1 |
| G | 17.5 | 74.8 | 6 | 62.4 | 1 | 0.49 | 6 | 0.62 | 6 |
| minimum | 17.5 | 70.3 | - | 62.4 | - | - | - | - | - |
| maximum | 17.5 | 75.8 | - | 63.7 | - | - | - | - | - |
| max - min | - | 5.5 | - | 1.3 | - | - | - | - | - |
Figure 2 – Preference scales: correlation of preference values according to BTL and LCJ (a) and visualization of the LCJ scale (b).
In order to physically explain the subjective preference values, a correlation analysis between the preference values according to LCJ and different acoustic and psychoacoustic parameters was performed. Table 2 shows the correlation coefficients $r$ of the particular parameters and the corresponding error probabilities $\alpha$. A correlation was considered relevant for error probabilities of less than $\alpha = 5\%$.
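The correlation coefficients of Table 2 are plain Pearson coefficients; a pure-Python sketch with invented illustrative value pairs (not the paper's data) is shown below. The error probability $\alpha$ would follow from a t-test with $n-2$ degrees of freedom, which is omitted here.

```python
import math

# Pearson correlation coefficient between a parameter and preference values.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

sharpness = [1.9, 1.7, 1.8, 1.75, 1.65, 1.5, 1.85]   # invented values /acum
preference = [0.0, 4.6, 2.7, 3.0, 5.7, 10.0, 0.5]    # invented LCJ values
r = pearson_r(sharpness, preference)                  # strongly negative
```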
In accordance with the comparison of rank orders above, the unweighted and A-weighted SPL do not show a linear correlation with preference and therefore have low correlation coefficients $r$. Moreover, the fluctuation strength and tonality are not correlated with the listeners’ preferences. Significant correlations with preference exist for both calculation methods of sharpness, for the articulation index, and for roughness.
The sharpness was analyzed according to DIN 45692 (6) as well as Aures (7). The basic equation for calculating the sharpness $S$ is given in equation 1, where $N'$ is the specific loudness and $g$ is a weighting function. Both measures depend on the critical band rate $z$, which reflects the 24 critical bands of hearing.

$$S = 0.11 \cdot \frac{\int_{0}^{24\,\text{Bark}} N'(z) \cdot g(z) \cdot \frac{z}{\text{Bark}} \, dz}{\int_{0}^{24\,\text{Bark}} N'(z) \, dz} \quad (1)$$
The difference between both methods is the weighting function $g$. While the weighting function of DIN (6) is independent of the test sound’s loudness, the weighting function of Aures (7) takes the loudness into account. Both weighting functions are given in equation 2 for DIN and equation 3 for Aures.

Table 2 – Correlation analysis between preference according to LCJ and physical parameters.

| | \( r \) | \( \alpha \) |
|------------------------|-----------|--------------|
| \( SPL \) | 0.45 | 0.3095 |
| \( SPL(A) \) | 0.55 | 0.2049 |
| roughness | 0.78 | **0.0403** |
| fluctuation strength | -0.03 | 0.9523 |
| tonality | -0.10 | 0.8284 |
| sharpness (DIN) | -0.93 | **0.0024** |
| sharpness (Aures) | -0.93 | **0.0024** |
| articulation index | 0.97 | **0.0018** |
\[
g_{DIN}(z) = \begin{cases}
1 & \text{if } z \leq 15.8 \text{ Bark} \\
0.15 \cdot e^{0.42(z/\text{Bark} - 15.8)} + 0.85 & \text{if } z > 15.8 \text{ Bark}
\end{cases}
\]
(2)
\[
g_{Aures}(z) = 0.078 \cdot \frac{e^{0.171 \cdot z/\text{Bark}}}{z/\text{Bark}} \cdot \frac{N/\text{sone}}{\ln(0.05 \cdot N/\text{sone} + 1)}
\]
(3)
Due to the equalization of all test sounds to the same loudness level, the loudness term in the weighting function by Aures does not differentiate between the sounds. Therefore, the correlation coefficients in table 2 have the same values, and only the DIN sharpness was considered further.
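A discretised sketch of Equation (1) with the DIN weighting of Equation (2) is given below. The flat specific-loudness distribution $N'(z) = 1$ over all 24 Bark is an invented test input, not a measured HVAC spectrum.

```python
import math

# DIN 45692 weighting function of Eq. (2).
def g_din(z):
    if z <= 15.8:
        return 1.0
    return 0.15 * math.exp(0.42 * (z - 15.8)) + 0.85

# Midpoint-rule discretisation of Eq. (1) over the 24 Bark range;
# n_prime is the specific loudness as a function of critical band rate z.
def sharpness_din(n_prime, dz=0.01):
    zs = [dz * (i + 0.5) for i in range(int(24 / dz))]
    num = sum(n_prime(z) * g_din(z) * z for z in zs) * dz
    den = sum(n_prime(z) for z in zs) * dz
    return 0.11 * num / den

s_flat = sharpness_din(lambda z: 1.0)
# well above the unweighted value 0.11 * 288 / 24 = 1.32, because g(z)
# emphasises the bands above 15.8 Bark
```

The example makes the role of $g$ concrete: spectra with more specific loudness in the high critical bands receive a markedly higher sharpness.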
The articulation index is a measure of speech intelligibility. In this study, the closed articulation index (AI) according to equation 4 was used. It can take values between 0 and 1, where 1 means perfect and 0 no speech intelligibility. It is determined by summing up the weighted ($g_i$) level differences ($\Delta L_i$) between the useful sound and the background noise for all third-octave bands between 200 and 5000 Hz. In this case, the background noise is induced by the HVAC system. The useful sound consists of typical speech levels, which are standardized in ANSI S3.5-1969 (8). Figure 3 shows the correlations of the DIN sharpness and the AI with the preference.
\[
AI = \sum_{i=1}^{n} (g_i \cdot \Delta L_i)
\]
(4)
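Equation (4) can be sketched as follows. The band weights and level values below are invented placeholders rather than the ANSI S3.5-1969 tables, and the clipping of the level difference to a 0-30 dB range follows the common articulation-index convention (an assumption here).

```python
# Closed articulation index of Eq. (4): weighted level differences between
# the speech spectrum and the background noise over the 15 third-octave
# bands from 200 to 5000 Hz. All numbers are placeholders.
def articulation_index(speech_levels, noise_levels, weights):
    ai = 0.0
    for ls, ln, g in zip(speech_levels, noise_levels, weights):
        delta = max(0.0, min(ls - ln, 30.0)) / 30.0   # clipped, normalised SNR
        ai += g * delta
    return ai

weights = [1.0 / 15] * 15                 # equal band weights (placeholder)
speech = [60.0] * 15                      # placeholder speech levels /dB
noise = [55.0] * 5 + [40.0] * 10          # HVAC noise, quieter at high bands
ai = articulation_index(speech, noise, weights)   # value between 0 and 1
```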
Figure 3 – Preference as a function of DIN sharpness (a) and closed articulation index (b). The encircled data point in (b) was not considered for the regression line.
The correlation between preference and sharpness (figure 3a) has a negative slope, which means that the preference decreases with increasing sharpness. The coefficient of determination is $R^2 = 0.87$; thus, 87% of the variance within the preference values is explained by this linear regression model. Among the data points of the articulation index, test sound G is an outlier. This data point was therefore encircled and not considered when determining the slope of the regression line and the coefficient of determination. It becomes obvious that the preference increases with higher values of the articulation index. The coefficient of determination is $R^2 = 0.93$.
The third identified parameter for assessing HVAC noise of equal loudness is the roughness. The roughness describes the hearing sensation that is induced by modulations in frequency or amplitude of a sound. The sensation of roughness is generated for modulation frequencies between 20 and 250 Hz. Lower modulation frequencies are perceived as fluctuation strength, and modulation frequencies above 250 Hz no longer produce a roughness sensation. In this analysis, the roughness was calculated as introduced by Fastl (9). The relationship between roughness and preference is displayed in figure 4.

It does not show a strictly linear correlation but a trend with a positive slope. Hence, the preference increases with increasing values of roughness. Moreover, it is remarkable that test sounds A-E and G form a cluster at a similar level of roughness, whereas the roughness of test sound F is distinctly higher. The preference of test sound F stands out in a similar way. This could mean that a high roughness generates a high perceived sound quality. Due to the relatively small absolute differences in roughness, this finding needs further investigation.
**Spectral analysis**
In a next step, the characteristics of the frequency spectra were compared in order to identify the crucial frequency ranges that influence the identified parameters. For clarity, only four spectra of the seven investigated sound samples are displayed in figure 5. The test sounds were chosen by forming preference clusters on the LCJ scale. Test sounds B and G form one group with the lowest preferences near 0. The second group is formed by test sounds C and D with preferences of 2 - 3. The third group includes test sounds A and E, with preference values around 5, and the fourth plotted spectrum is that of test sound F with the highest preference of 10. The test sound with the higher preference within each group is plotted in figure 5.
Basically, all test sounds show a qualitatively similar spectrum, which is to be expected for sounds of the same sound class. In the medium frequency range between roughly 100 and 3000 Hz, the shapes of all spectra are relatively similar, a consequence of the equalization to comparable SPL. Moreover, the spectra cross each other in this frequency range, so no obvious influence on preference can be derived there. Hence, the high and the very low frequencies appear to determine the perceived quality in terms of preference values. The sound pressure levels of sound F above 3000 Hz are approximately 5 dB(A) lower than those of sounds D and G and about 2-3 dB(A) lower than those of sound E. Furthermore, it is remarkable that the sound pressure levels in this range show the same rank order (from low to high) as the preference values. The converse holds for the low frequencies between 40 and 100 Hz: in this range, sound F shows considerably higher levels, by about 5 dB(A), except for the peak of sound D at 60 Hz, which marks the rotation frequency of the HVAC system's fan. Again, the order of the spectra matches the order of the preference values, but in this frequency range high levels generate high preference.
The characteristics of the frequency spectra related to the preference values correspond to the definition of sharpness and the identified correlation above. Due to high levels in low frequencies as well as low levels in high frequencies, sound F has a low sharpness and thus, a high preference.
4. CONCLUSIONS
In this study, a listening test was performed on sound samples of HVAC noise with equalized loudness. Due to the high similarity of the test sounds, the preference paired-comparison technique was used.
Different rank orders of various evaluation criteria were compared. It was shown that neither the unweighted nor the A-weighted sound pressure level is suitable for assessing sound quality of HVAC noise. Both preference scales - according to Law of Comparative Judgement (LCJ) and Bradley-Terry-Luce Model (BTL) - show highly correlated results and the same rank order.
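The BTL preference values referred to above are estimated from paired-comparison counts; a common way to compute the maximum-likelihood strengths is a simple fixed-point (Zermelo-type) iteration. The sketch below uses hypothetical win counts for three sounds, not the study's data:

```python
def btl_strengths(wins, n_iter=500):
    """Estimate Bradley-Terry-Luce strengths p_i from a pairwise win matrix.
    wins[i][j] = number of times sound i was preferred over sound j."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iter):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])                       # total wins of sound i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom)
        s = sum(new_p)
        p = [v / s for v in new_p]                   # normalise to sum to 1
    return p

# Hypothetical paired-comparison counts for three sounds (illustration only):
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
p = btl_strengths(wins)
# The estimated strengths preserve the dominance order of the win counts.
```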
The subsequent correlation analysis between the preference values and psychoacoustic parameters showed that the parameters sharpness, roughness and the acoustic parameter articulation index determine the sound quality for HVAC noise of equal loudness. Sound F, which stands out due to its very high preference within the test group, also shows much higher values in all identified parameters.
The concluding spectral analysis supports the importance of sharpness for determining the sound quality of HVAC noise. High frequencies above 3 kHz and low frequencies below 100-200 Hz seem to have a major impact on the sound quality of HVAC noise. Nevertheless, loudness, which was equalized out in this study, still constitutes the dominant parameter and needs to be considered in the acoustic optimization of HVAC noise.
ACKNOWLEDGEMENTS
Extensive support by AUDI AG in the form of the INI.FAU cooperation is gratefully acknowledged.
REFERENCES
1. Biermeier T, Becker S, Risch P. Acoustic Investigations of HVAC Systems in Vehicle. SAE Technical Paper; 2012.
2. Hohls S, Biermeier T, Blaschke R, Becker S. Psychoacoustic evaluation of HVAC noise. Proceedings of Forum Acusticum, Krakow; 2014.
3. Thurstone LL. A Law of Comparative Judgment. Psychological Review. 1927;34:273–286.
4. Bradley RA, Terry ME. Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons. Biometrika. 1952;39(3/4):324–345.
5. Luce RD. Individual Choice Behavior: A Theoretical Analysis. New York: Wiley; 1959.
6. DIN 45692 - Messtechnische Simulation der Hörempfindung Schärfe [DIN Standard]; 2009.
7. Aures W. Berechnungsverfahren für den sensorischen Wohlklang beliebiger Schallsignale. Acta Acustica united with Acustica. 1985;59(2).
8. ANSI S3.5, Methods for the Calculation of the Articulation Index [American National Standard]; 1969.
9. Zwicker E, Fastl H. Psychoacoustics – Facts and Models. 3rd ed. Berlin: Springer; 2007.
The costs and benefits of carbon tariffs have been extensively discussed in terms of competitiveness and carbon leakage. This Commentary argues that global welfare should be the focus. EU tariffs against developing country exports would increase global welfare and the proceeds from the tariffs could help poorer exporting countries reduce the carbon intensity of their economies.
The costs and benefits of carbon border measures have been extensively discussed in the rapidly growing body of literature on the economics of climate change mitigation policies, but most studies concentrate on competitiveness (of energy-intensive industries) and carbon leakage. Only a few studies examine the international trade impacts of a so-called ‘carbon border tax’ and none seems to look at the welfare implications from a global point of view. See Veenendaal & Manders (2008), McKibben & Wilcoxen (2008), Majocchi & Missaglia (2002), and the Vox columns by John Whalley (2008, 2009).
The global perspective, however, is the correct one. Climate change policy, even when implemented at the national level, is motivated by a concern for global (as opposed to national) welfare. It is thus important to adopt the same point of view when discussing so-called ‘border measures’. An important, but often overlooked, issue is the distinction between plain import tariffs (on the carbon content of goods imported) and the combination of import tariffs plus export rebates. I focus here on the case where there is no export rebate.
**A simple illustration of the welfare gain from the introduction of a carbon tariff**
This illustration relies on the most standard case, using a partial equilibrium approach, with linear demand and supply curves to show graphically the impact of a carbon tax (i.e. a tariff on the carbon content of imports) on global welfare.
There is only one good (of which the home country is a net importer). As usual the world is divided into two actors, an importing country (or group of importing countries) and the rest of the world (RoW). But the two have identical supply and demand curves and the same carbon intensity of production!
Figure 1 shows the global demand and supply curves of the good in question. Production of the good leads to emissions of CO$_2$ (at a given per unit rate). Private producers do not take into account the cost of emissions; hence the private supply curve is below the social supply curve, which takes into account the external impact of emissions. However, unless there is further government intervention, the international...
price of the good in question is determined by the intersection of private supply and demand, point O on the graph.
Consider the simplest case, in which the home country introduces a cap-and-trade system such as the Emissions Trading Scheme (ETS) in Europe. In this case, the global supply curve is kinked at the quantity at which the ETS limits the amount that can be produced in the home country (given the limit on emissions and the per unit emissions factor). After the introduction of the ETS, the world price of the good in question is then determined at point A, by the intersection of the ('kinked', private) supply curve and the global demand curve. It is apparent that the price is higher than before.
**Figure 1. Equilibrium without tariff**
What happens when the home country also introduces an import tariff? That reduces domestic demand and hence global demand, since it does not affect demand in the rest of the world. Assume the import tariff is specific, not ad valorem, because it is supposed to correct the externality that arises in production. Figure 2 shows the resulting equilibrium. With the introduction of the tariff, the demand curve shifts down, and the new equilibrium price is lower (equilibrium shifts from point A to point E).
The fall in the international price implies that foreign producers will produce less. Given that domestic production is limited by the EU ETS, global production must fall as well. The sum of domestic plus foreign consumption must thus fall as well. But this is achieved by a rise in foreign consumption (since the price falls abroad) and a fall in domestic consumption of an even larger magnitude. This shift in consumption is due to the fact that domestic and foreign consumers face a different price if there is an import tariff.
What are the welfare implications of the tariff? The standard welfare loss caused by a tariff is the usual triangle (consumer plus producer loss) enclosed by points ADE. As is well known, this welfare loss is of second order for any ‘small’ tariff.
In this case, however, there is also a gain due to the global externality in production. It is enclosed by the points ABCE (a parallelogram). The net welfare gain from imposing a tariff is given by the trapezoid enclosed by the points ABCD. It follows that a small carbon tariff must always improve global welfare.
The intuition behind this result is clear. As long as the tariff is small, the reallocation of consumption from consumers at home to consumers abroad causes only a loss of second-order importance. But the gain to global welfare from lower foreign production is of first-order importance. This argument is completely independent of the size of carbon leakage. Thus, those who oppose carbon taxes on grounds of lost sectoral competitiveness (e.g. Gurria, 2009) miss the key issue.
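The first-order/second-order argument can be made concrete with stylised numbers (all assumed here, calibrated to nothing): the externality gain grows linearly in the tariff while the deadweight loss grows quadratically, so a small tariff always yields a net gain, while a large enough tariff eventually overshoots:

```python
# Stylised illustration; e, k and dqf_dt are assumptions, not estimates.
e      = 10.0    # external cost per unit of foreign output
k      = 0.5     # slope-dependent constant of the deadweight-loss triangle
dqf_dt = -0.8    # assumed fall in foreign output per unit of tariff

def net_welfare_gain(t):
    loss = 0.5 * k * t ** 2        # consumption misallocation: second order, O(t^2)
    gain = e * (-dqf_dt) * t       # avoided external damage: first order, O(t)
    return gain - loss

small = net_welfare_gain(0.5)      # linear gain dominates: net gain positive
large = net_welfare_gain(40.0)     # quadratic loss dominates: net gain negative
```

With these stylised curves the net gain is maximised at an interior tariff, consistent with the existence of a welfare-maximising tariff discussed below under 'How high should the tariff be?'.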
**Policy implications**
The practical policy implications of this analysis are clear: the world would benefit from the imposition of a (small) carbon import tariff by the EU (the only significant region in the world with a cap-and-trade system in operation).\(^1\) The justification for the tariff would, however, be completely different from the one usually advanced by politicians (and industry). It would not be to ‘level the playing field’ for EU industry, but to protect the global environment. This is a crucial difference, since it implies that the tariff would be compatible with WTO rules, whose Article XX allows for exemptions if the aim is to protect a global natural resource.
**How high should the tariff be?**
In a fully specified model (Gros, 2009a), I show that indeed there is a tariff that maximises global welfare (under the assumption that the home country has a cap-and-trade system but the rest of the world does not). The optimal tariff is approximately equal to the externality in production abroad.
\(^1\) Institutionally it would be straightforward. The EU has exclusive competence for all matters concerning the customs union. Any decision to impose a carbon border tax would have to start with an initiative by the Commission, which would then need to be approved by the Council and the European Parliament. Approval in the Council requires only a qualified majority.
Some rough, preliminary calculations suggest that a carbon tariff by the EU (or the US) could be much higher than the average most favoured nation tariff rates of the EU (on average about 3-4%). Recent calculations by Weber et al. (2008) suggest that the total CO$_2$ embodied in Chinese 2005 exports was around 1,670 million tonnes of CO$_2$, or over 30% of all Chinese emissions (in jargon this is the so-called ‘embodied emissions in exports’ measure).
This percentage corresponds roughly to the share of exports in the Chinese economy (around 35%). Given total Chinese exports in 2005 of around $760 billion, this implies an average carbon intensity of a little more than two tonnes of CO$_2$ per $1,000 of exports. Table 1 in Gros (2009b) provides further evidence on differences in carbon intensities, showing a similar order of magnitude for other countries, such as India and Russia.
The final piece of information needed is the domestic carbon price. It is tempting to use the price under the present Emissions Trading Scheme (around $20-$25 per tonne of CO$_2$). However, this price refers only to the present Kyoto regime and cannot serve as a guide to what would result under the post-Kyoto regime. In the post-2012 period, the constraint is likely to be much tighter. The Kyoto commitment (an 8% reduction relative to 1990 emission levels) did not represent a tight constraint for the EU15, since the industrial collapse of the former East Germany reduced emissions by 3-4% and the switch from coal to gas in the UK (a result of electricity and gas market liberalisation) also led to significant reductions.
**EU ‘green tariffs’ of 9% on average against Chinese exports**
The Commission has estimated that a carbon price of around €40-50 per tonne would be required to reach the EU’s 2020 commitments. This would imply, at current exchange rates, about $50-70 per tonne. This might be too high for the US, where $30-$40 per tonne has been estimated to be the politically feasible limit. At $40 (€30) per tonne, a border carbon tax on Chinese exports (to the EU) would be a bit more than two times $40, i.e. roughly $88 per $1,000 of exports, or approximately 9% on average. Rates would be much higher for energy-intensive products and lower for most others.
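The arithmetic behind these figures is easily reproduced from the numbers quoted in the text (the $40 per tonne carbon price being the assumed politically feasible US level):

```python
# Back-of-envelope reproduction of the figures quoted in the text.
embodied_co2_t = 1670e6        # tonnes of CO2 embodied in 2005 Chinese exports (Weber et al.)
exports_usd    = 760e9         # total 2005 Chinese exports in dollars
carbon_price   = 40.0          # assumed carbon price, $ per tonne

intensity  = embodied_co2_t / exports_usd * 1000   # tonnes CO2 per $1,000 exported
ad_valorem = intensity * carbon_price / 1000       # implied average tariff rate
# intensity is a little over 2 t per $1,000; ad_valorem is just under 9%
```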
As China upgrades the sophistication of its exports, the average rate might come down, but under current conditions the average carbon tax could thus be very significant, much higher than the most-favoured-nation tariffs currently applied by the EU, and certainly an order of magnitude larger than the modest tariff reductions that were contemplated under the Doha round.
**Summary**
The economics of a carbon import tariff is clear. The politics is rather messy. A massive increase in EU tariffs against developing-country exports would certainly make them feel disadvantaged. While global welfare would increase, they might lose. However, there is an easy way out of the political problems. The EU could simply promise to use the proceeds from the tariff to help poorer exporting countries reduce the carbon intensity of their economies.
The Chinese government has recently announced that it is taking the unilateral commitment “that by 2020 China’s carbon dioxide emissions per unit of GDP will be dropped by between 40-45% compared with 2005.” At first sight, this appears to constitute a major commitment. It is not clear however, whether this implies a major departure from the baseline. The emissions intensity of the Chinese economy should fall in any case, as services become relatively more important. It is thus difficult to say whether this target implies a meaningful price for carbon. Moreover, the Chinese plan foresees no carbon pricing in the manufacturing sector. The target is mainly to be reached by massive investments in alternative power generation. This implies that the economic argument for a carbon tariff (that production and hence pollution will move abroad) remains fully valid.
Other major emitters (India, Russia) have not come forward with any target at all. Border measures to support the fight against global climate change should thus remain on the agenda.
References
Gros, Daniel (2009a), *Global Welfare Implications of Carbon Border Taxes*, CEPS Working Document No. 315, Centre for European Policy Studies, Brussels, July.
Gros, Daniel (2009b), *Why a cap-and-trade system can be bad for your health*, VoxEU.org, 5 December.
Gurria, Angel (2009), “*Carbon has no place in global trade rules*”, *Financial Times*, 4 November.
Majocchi, A. and M. Missaglia (2002), “Environmental taxes and border tax adjustment”, Societa Italiana Economia pubblica (SIEP), Working Paper No. 127/2002.
McKibben, W.J. and P. Wilcoxen (2008), “*The Economic and Environmental Effects of Border Tax Adjustments for Climate Policy*”, Brookings Global Economy and Development Conference, Brookings Institution, Washington, DC.
Veenendaal, P. and T. Manders (2008), *Border tax adjustment and the EU-ETS, a quantitative assessment*, CPB Document No. 171, Central Planning Bureau, The Hague.
Weber, Christopher L., Glen Peters, Dabo Guan and Klaus Hubacek (2008), “*The contribution of Chinese exports to climate change*”, *Energy Policy*, Vol. 36, No. 9, pp. 3572-3577.
Whalley, John (2008), *Carbon, trade policy, and carbon free trade areas*, VoxEU.org, 25 November.
Whalley, John (2009), *International trade and the feasibility of global climate change agreements*, VoxEU.org, 9 April.
DOCUMENT MADE AVAILABLE UNDER THE PATENT COOPERATION TREATY (PCT)
International application number: PCT/US2019/063215
International filing date: 26 November 2019 (26.11.2019)
Document type: Certified copy of priority document
Document details:
Country/Office: US
Number: 62/920,613
Filing date: 09 May 2019 (09.05.2019)
Date of receipt at the International Bureau: 13 December 2019 (13.12.2019)
Remark: Priority document submitted or transmitted to the International Bureau in compliance with Rule 17.1(a),(b) or (b-bis)
TO ALL TO WHOM THESE PRESENTS SHALL COME:
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
December 12, 2019
THIS IS TO CERTIFY THAT ANNEXED HERETO IS A TRUE COPY FROM THE RECORDS OF THE UNITED STATES PATENT AND TRADEMARK OFFICE OF THOSE PAPERS OF THE BELOW IDENTIFIED PATENT APPLICATION THAT MET THE REQUIREMENTS TO BE GRANTED A FILING DATE.
APPLICATION NUMBER: 62/920,613
FILING DATE: May 09, 2019
RELATED PCT APPLICATION NUMBER: PCT/US19/63215
THE COUNTRY CODE AND NUMBER OF YOUR PRIORITY APPLICATION, TO BE USED FOR FILING ABROAD UNDER THE PARIS CONVENTION, IS US62/920,613
Certified by
[Signature]
Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office
## INVENTOR(S)
| Given Name (first and middle [if any]) | Family Name or Surname | Residence (City and either State or Foreign Country) |
|--------------------------------------|------------------------|-----------------------------------------------------|
| Tonya | Tremitiere | Bradenton, Florida |
| John | Cooper | Henderson, North Carolina |
| Ashley | Nace | Sarasota, Florida |
| Mark | Cummings | Lakewood Ranch, Florida |
| Mandy | Smith | North Port, Florida |
Additional inventors are being named on the 1 separately numbered sheets attached hereto.
### TITLE OF THE INVENTION (500 characters max):
Dye Sublimation Ink Composition and Process for use with Pad Printers
### CORRESPONDENCE ADDRESS
- **The address corresponding to Customer Number:**

- **Firm or Individual Name:** Esprix Technologies
- **Address:** 6780 Matoaka Road
- **City:** Sarasota
- **State:** FL
- **Zip:** 34243
- **Country:** USA
- **Telephone:** +1 315 6320
- **Email:** firstname.lastname@example.org
### ENCLOSED APPLICATION PARTS (check all that apply)
- [ ] Application Data Sheet. See 37 CFR 1.76.
- [ ] CD(s), Number of CDs _______________________
- [ ] Drawing(s) Number of Sheets _________________
- [ ] Other (specify) ____________________________
- [ ] Specification (e.g., description of the invention) Number of Pages ________________
### Fees Due:
Filing Fee of $280 ($140 for small entity) ($70 for micro entity). If the specification and drawings exceed 100 sheets of paper, an application size fee is also due, which is $400 ($200 for small entity) ($100 for micro entity) for each additional 50 sheets or fraction thereof. See 35 U.S.C. 41(a)(1)(G) and 37 CFR 1.16(f).
### METHOD OF PAYMENT OF THE FILING FEE AND APPLICATION SIZE FEE FOR THIS PROVISIONAL APPLICATION FOR PATENT
- [✓] Applicant asserts small entity status. See 37 CFR 1.27.
- [✓] Applicant certifies micro entity status. See 37 CFR 1.29. Applicant must attach form PTO/SB/15A or B or equivalent.
- [✓] A check or money order made payable to the Director of the United States Patent and Trademark Office is enclosed to cover the filing fee and application size fee (if applicable).
- [ ] Payment by credit card. Form PTO-2038 is attached.
- [ ] The Director is hereby authorized to charge the filing fee and application size fee (if applicable) or credit any overpayment to Deposit Account Number: _______________________
### TOTAL FEE AMOUNT ($) $140
---
**USE ONLY FOR FILING A PROVISIONAL APPLICATION FOR PATENT**
This collection of information is required by 37 CFR 1.51. The information is required to obtain or retain a benefit by the public which is to file (and by the USPTO to process) an application. Confidentiality is governed by 35 U.S.C. 122 and 37 CFR 1.11 and 1.14. This collection is estimated to take 10 hours to complete, including gathering, preparing, and submitting the completed application form to the USPTO. Time will vary depending upon the individual case. Any comments on the amount of time you require to complete this form and/or suggestions for reducing this burden, should be sent to the Chief Information Officer, U.S. Patent and Trademark Office, U.S. Department of Commerce, P.O. Box 1450, Alexandria, VA 22313-1450. DO NOT SEND FEES OR COMPLETED FORMS TO THIS ADDRESS. SEND TO: Commissioner for Patents, P.O. Box 1450, Alexandria, VA 22313-1450.
If you need assistance in completing the form, call 1-800-PTO-9199 and select option 2.
The invention was made by an agency of the United States Government or under a contract with an agency of the United States Government. (NOTE: Providing this information on a provisional cover sheet, such as this Provisional Application for Patent Cover Sheet (Form PTO/SB/16), does not satisfy the requirement of 35 U.S.C. 202(c)(6), which requires that the specification contain a statement specifying that the invention was made with Government support and that the Government has certain rights in the invention.)
☐ No.
☐ Yes, the invention was made by an agency of the U.S. Government. The U.S. Government agency name is:
☐ Yes, the invention was made under a contract with an agency of the U.S. Government.
The contract number is: ____________________________
The U.S. Government agency name is: ____________________________
In accordance with 35 U.S.C. 202(c)(6) and 37 CFR 401.14(f)(4), the specifications of any United States patent applications and any patent issuing thereon covering the invention, including the enclosed provisional application, must state the following:
"This invention was made with government support under [IDENTIFY THE CONTRACT] awarded by [IDENTIFY THE FEDERAL AGENCY]. The government has certain rights in the invention."
WARNING:
Petitioner/applicant is cautioned to avoid submitting personal information in documents filed in a patent application that may contribute to identity theft. Personal information such as social security numbers, bank account numbers, or credit card numbers (other than a check or credit card authorization form PTO-2038 submitted for payment purposes) is never required by the USPTO to support a petition or an application. If this type of personal information is included in documents submitted to the USPTO, petitioners/applicants should consider redacting such personal information from the documents before submitting them to the USPTO. Petitioner/applicant is advised that the record of a patent application is available to the public after publication of the application (unless a non-publication request in compliance with 37 CFR 1.213(a) is made in the application) or issuance of a patent. Furthermore, the record from an abandoned application may also be available to the public if the application is referenced in a published application or an issued patent (see 37 CFR 1.14). Checks and credit card authorization forms PTO-2038 submitted for payment purposes are not retained in the application file and therefore are not publicly available.
SIGNATURE ____________________________ DATE 5/6/19
TYPED OR PRINTED NAME Tonya Tremitiere
TELEPHONE 941-315-6320
REGISTRATION NO. _______________
(if appropriate)
DOCKET NUMBER ____________________________
Privacy Act Statement
The Privacy Act of 1974 (P.L. 93-579) requires that you be given certain information in connection with your submission of the attached form related to a patent application or patent. Accordingly, pursuant to the requirements of the Act, please be advised that: (1) the general authority for the collection of this information is 35 U.S.C. 2(b)(2); (2) furnishing of the information solicited is voluntary; and (3) the principal purpose for which the information is used by the U.S. Patent and Trademark Office is to process and/or examine your submission related to a patent application or patent. If you do not furnish the requested information, the U.S. Patent and Trademark Office may not be able to process and/or examine your submission, which may result in termination of proceedings or abandonment of the application or expiration of the patent.
The information provided by you in this form will be subject to the following routine uses:
1. The information on this form will be treated confidentially to the extent allowed under the Freedom of Information Act (5 U.S.C. 552) and the Privacy Act (5 U.S.C 552a). Records from this system of records may be disclosed to the Department of Justice to determine whether disclosure of these records is required by the Freedom of Information Act.
2. A record from this system of records may be disclosed, as a routine use, in the course of presenting evidence to a court, magistrate, or administrative tribunal, including disclosures to opposing counsel in the course of settlement negotiations.
3. A record in this system of records may be disclosed, as a routine use, to a Member of Congress submitting a request involving an individual, to whom the record pertains, when the individual has requested assistance from the Member with respect to the subject matter of the record.
4. A record in this system of records may be disclosed, as a routine use, to a contractor of the Agency having need for the information in order to perform a contract. Recipients of information shall be required to comply with the requirements of the Privacy Act of 1974, as amended, pursuant to 5 U.S.C. 552a(m).
5. A record related to an International Application filed under the Patent Cooperation Treaty in this system of records may be disclosed, as a routine use, to the International Bureau of the World Intellectual Property Organization, pursuant to the Patent Cooperation Treaty.
6. A record in this system of records may be disclosed, as a routine use, to another federal agency for purposes of National Security review (35 U.S.C. 181) and for review pursuant to the Atomic Energy Act (42 U.S.C. 218(c)).
7. A record from this system of records may be disclosed, as a routine use, to the Administrator, General Services, or his/her designee, during an inspection of records conducted by GSA as part of that agency's responsibility to recommend improvements in records management practices and programs, under authority of 44 U.S.C. 2904 and 2906. Such disclosure shall be made in accordance with the GSA regulations governing inspection of records for this purpose, and any other relevant (i.e., GSA or Commerce) directive. Such disclosure shall not be used to make determinations about individuals.
8. A record from this system of records may be disclosed, as a routine use, to the public after either publication of the application pursuant to 35 U.S.C. 122(b) or issuance of a patent pursuant to 35 U.S.C. 151. Further, a record may be disclosed, subject to the limitations of 37 CFR 1.14, as a routine use, to the public if the record was filed in an application which became abandoned or in which the proceedings were terminated and which application is referenced by either a published application, an application open to public inspection or an issued patent.
9. A record from this system of records may be disclosed, as a routine use, to a Federal, State, or local law enforcement agency, if the USPTO becomes aware of a violation or potential violation of law or regulation.
Dye Sublimation Ink Composition and Process for use with Pad Printers
What is claimed is a non-drying, environmentally friendly disperse dye sublimation ink composition to be used in pad printers, where the sublimation ink is imaged onto an intermediate substrate and the dye transferred to a dye-receptive surface using a suitable combination of heat, pressure and time. Such an image would appear as a reverse image. Alternatively, the ink could be deposited onto a secondary surface such that the image is reversed, and that reversed image transferred to a final surface such that the image would appear right-side up. The composition comprises from about 50 to 90% water, from about 5 to 45% water-miscible solvent, from about 2 to 20% disperse dye particles of approximately 200 nm or smaller, from 0.1 to 5% pigment dispersant, and optionally additional components such as biocides, surfactants, viscosity adjustors, and UV light stabilizers.
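The claimed composition ranges can be sanity-checked mechanically: each named component must lie within its range and the components together must not exceed 100%, the balance being the optional additives. The formulation below is hypothetical, chosen only to illustrate the constraint, not taken from the application:

```python
# Claimed composition ranges, in weight percent (from the claim above).
RANGES = {
    "water":      (50.0, 90.0),
    "solvent":    (5.0, 45.0),
    "dye":        (2.0, 20.0),
    "dispersant": (0.1, 5.0),
}

def in_claimed_ranges(f):
    """True if every component lies in its claimed range and the parts
    do not exceed 100% (the balance being optional additives)."""
    if sum(f.values()) > 100.0:
        return False
    return all(lo <= f[k] <= hi for k, (lo, hi) in RANGES.items())

# Hypothetical example formulation (illustration only):
sample = {"water": 70.0, "solvent": 20.0, "dye": 8.0, "dispersant": 1.5}
ok = in_claimed_ranges(sample)
```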
Background:
Disperse dye inks have been known for at least 75 years. For example, U.S. 4,062,644 issued to Graphic Magicians discloses a transfer ink to be used with a felt tip pen. U.S. 4,082,467 to Kaplan also discloses a felt-tip marker pen and dye dispersion of vaporizable disperse dyes. In this example the dye is first dispersed with linseed oil and includes 40-50% of a soluble resin such as hydrogenated resin. This dispersion is then diluted with water, additional soluble resin and 80-90% of a polyalcohol. U.S. 4,211,528 also discloses use of sublimable disperse dyes in felt tip marker pens. In this example the inventor formulates a solution rather than a dispersion of the disperse dye particles. This was accomplished by use of chlorinated solvents which are environmentally unfriendly and would not be accepted in today’s commercial market.
Disperse dye-based sublimation ink compositions are known in the literature, although not for stamp pad use. U.S. 4,725,849 to Koike describes a disperse dye composition for inkjet printing directly onto cloth that has been treated with a hydrophilic resin. The particular ink compositions contained 20% or higher concentrations of solvent, which would be undesirable for stamp pads. U.S. 5,642,141 to Hale describes a process for inkjet printing heat-activated inks onto an intermediate substrate and subsequently heat transferring that ink to a substrate, similar to the earlier Graphic Magicians patent. The ‘141 patent claims a broad range of ink compositions suitable for inkjet printing but, interestingly, does not reference the earlier ‘849 patent that includes the same compositions. There are numerous additional inkjet ink patents included in the reference material where the claims relate to disperse dye compositions with varying types of surfactants, dispersants or solvents.
Sublimable dye-based ink compositions mentioned in the disclosed patents each suffer from a number of disadvantages if the ink is to be used in a consumer product such as a stamp pad. First, they must be designed to be toxicologically and environmentally safe, be non-irritant, and preferably provide a negative Ames test. The Ames test was developed by Professor Bruce Ames in the 1970s as a convenient means of determining whether a chemical poses a potential mutagenic hazard. The test uses different strains of bacteria
to predict probabilities of a compound to cause DNA mutations. Although a positive Ames test in itself does not mean that an ink is necessarily harmful to humans, it can create a negative perception by users of such a product. A good reference to Ames test and imaging materials can be found in Peter Gregory’s publication *Chemistry and Technology of Printing and Imaging Systems*. Some disperse dyes mentioned in earlier patents are not Ames negative. A second issue is the use of solvents and chemicals that are now considered either toxic or environmentally unfriendly. A third issue is the stability of the ink compositions. It is difficult to maintain long-term dispersion of pigment-based disperse dye sublimation inks. If the dye particles aggregate the inks will not print consistently. A fourth issue relates to maintaining an environmentally friendly solvent mixture that will not dry prematurely. A fifth issue is formulating an ink with the proper viscosity such that the ink is wicked at a desired rate but not so low that the ink will puddle when it first contacts the substrate. Typically inks designed for inkjet application have relatively low viscosities and are not suitable for stamp pad use. A sixth issue is the inclusion of polymer components where the polymer can soften and adhere to the decorated item. A seventh issue particularly related to inkjet inks is the inclusion of certain specialty chemicals necessary for proper long-term operation of ink jet pens but not required or desired for the disclosed application. An eighth issue is the percentage of water in the composition. It is desirable for the ink to be primarily aqueous based, but this is not the case with most industrial use inkjet inks. It should be readily apparent that for broad consumer use, especially with children, the chemical composition criteria of the sublimation inks will necessarily be more stringent than for commercial or industrial inkjet sublimation inks.
Rubber stamp pads are commonly used by the craft industry for diverse decorating applications. A typical construction would include a pad to hold the specific ink and an image-bearing rubber applicator. The applicator is inked, and the image is transferred to a surface such as paper, plastic, or wood. Requirements for the ink fluid would include suitable fluidity, the ability to adhere to the rubber surface yet be easily released from it, and resistance to premature drying. In addition, the ink must be environmentally friendly and have limited or no volatile organic compounds. Additional safety issues would apply when the ink is to be used by children.
Stamp pad ink fluids are usually based on water with limited use of water-miscible solvents such as alcohols and glycerin. The colorant is usually a dye that is soluble in the fluid system. Additional components could include surfactants to adjust viscosity and biocides to prevent mold. Pigment-based stamp pad inks are also known where improved light stability is desired, and for those inks a pigment dispersant is probably required. Disperse or sublimation inks would have the characteristics of pigment inks, as the disperse dye is insoluble in the fluid matrix.
A pending application from the same inventors describes specific sublimation dye-based inks for use in nib-based markers. While those compositions are suitable for use in marking pens, they must be modified for use in stamp pads. Specifically, higher ink viscosities are necessary to prevent ink drying and this is accomplished by appropriate concentrations of water-miscible cosolvents such as glycerin or propylene glycol as well
as suitable surfactants. Wetting agents are also useful for adhesion of the ink to the rubber stamp portion. The specific dye concentration may be less than for typical ink jet inks as the higher stamp pad ink viscosity results in higher ink lay down on a substrate.
Preparation of the stamp pad inks is similar to that described in the pending application for the sublimation markers. The disperse dye is typically milled with the type and quantity of water and dispersant required to produce a particle size dispersion that, once combined with additional ink components, will provide a dye dispersion that remains in a stable dispersed form for an extensive time period even under varying environmental conditions. The preferred average particle size will be in the 5 to 200 nm range. The particular technique used to mill the dye particles can be one common to the pigment milling industry and could include (for example) ball mills, attritors, or continuous media mills. The invention is not limited to specific disperse dyes and could include ones that are typically used to decorate textile fibers or coated novelty items. Preferably the dyes are free of impurities and toxic components and are environmentally friendly. It is preferred that the particular dyes pass an Ames test for potential mutagenicity. Within the scope of this invention is the option of having the dye particles encapsulated in a polymer. The quantity and type of dispersant will depend on the specific disperse dye and could range (typically) from 0.5 to 50% of the weight of dye. The specific dispersant is limited only to one that provides the desired dispersion stability and could include such materials (for example) as polymeric acrylic acids, ethoxylated compounds, block and graft polymers, and sulfonate compounds. Additional ink components could be included during the process of preparing the dye dispersion or alternatively added during dilution of the dispersion.
The above dispersion is then diluted with water, co-solvents and additional ink components such that the final dye concentration will be in the 1 to 10% range, depending on the particular dye and its tinctorial strength. The final concentration of water is 30 to 60% of the total ink. A secondary water-miscible solvent or mixture of solvents is used to reduce evaporation and prevent premature ink drying. The total quantity of water-miscible co-solvents is typically in the 40–70% range. Examples of suitable co-solvents include alcohols, glycols, glycerin, and pyrrolidone. The ink may also include additional components such as pH adjusters, surfactants, biocides, viscosity modifiers, defoamers and light stabilizers.
An example of a composition usable for stamp pad sublimation would include components in the following ranges:
| Component | Range |
|--------------------|---------|
| Dye | 2-10% |
| Dispersant | 0-5% |
| Propylene Glycol | 0-30% |
| Glycerin | 0-25% |
| Polyethylene Glycol| 0-14% |
| Aquazol | 0-4% |
| Miscellaneous | 0-5% |
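To make the ranges above concrete, here is a minimal sketch (not from the patent text; the example formulation is hypothetical) that checks a candidate formulation against the component ranges in the table, with water as the balance to 100%, against the earlier-stated limits of 30–60% water and 40–70% total water-miscible co-solvents.

```python
# Hypothetical formulation check against the disclosed component ranges.
# The example formulation below is illustrative, not from the patent text.
RANGES = {
    "Dye": (2, 10),
    "Dispersant": (0, 5),
    "Propylene Glycol": (0, 30),
    "Glycerin": (0, 25),
    "Polyethylene Glycol": (0, 14),
    "Aquazol": (0, 4),
    "Miscellaneous": (0, 5),
}
COSOLVENTS = {"Propylene Glycol", "Glycerin", "Polyethylene Glycol"}

def check_formulation(weights_pct):
    """Validate component ranges plus the water (30-60%) and
    co-solvent (40-70%) constraints; water is the balance to 100%."""
    for name, pct in weights_pct.items():
        lo, hi = RANGES[name]
        assert lo <= pct <= hi, f"{name} = {pct}% outside {lo}-{hi}%"
    water = 100 - sum(weights_pct.values())
    assert 30 <= water <= 60, f"water = {water}% outside 30-60%"
    cosolv = sum(weights_pct[n] for n in COSOLVENTS)
    assert 40 <= cosolv <= 70, f"co-solvents = {cosolv}% outside 40-70%"
    return water

# An illustrative formulation (all percentages by weight):
example = {
    "Dye": 5, "Dispersant": 2, "Propylene Glycol": 25,
    "Glycerin": 15, "Polyethylene Glycol": 5,
    "Aquazol": 1, "Miscellaneous": 2,
}
print(check_formulation(example))  # water balance: 45
```

Any formulation in which a component falls outside its range, or in which the water balance or co-solvent total violates the stated limits, would raise an assertion error.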
Additional inventors named on the patent application titled:
**Dye Sublimation Ink Composition and Process for use with Pad Printers**
Jeff Morgan, Sarasota, Florida
Paula Smith, Sarasota, Florida
Jana Petrova, Ellenton, Florida
Efficient and stable $\text{CH}_3\text{NH}_3\text{PbI}_3$-sensitized ZnO nanorod array solid-state solar cells
Dongqin Bi,$^a$ Gerrit Boschloo,$^a$ Stefan Schwarzmüller,$^b$ Lei Yang,$^a$ Erik M. J. Johansson$^a$ and Anders Hagfeldt*,$^a$
We report for the first time the use of a perovskite ($\text{CH}_3\text{NH}_3\text{PbI}_3$) absorber in combination with ZnO nanorod arrays (NRAs) for solar cell applications. The perovskite material has a higher absorption coefficient than molecular dye sensitizers, gives better solar cell stability, and is therefore better suited as a sensitizer for ZnO NRAs. A solar cell efficiency of 5.0% was achieved under 1000 W m$^{-2}$ AM 1.5 G illumination for a solar cell with the structure ZnO NRA/$\text{CH}_3\text{NH}_3\text{PbI}_3$/spiro-MeOTAD/Ag. Moreover, the solar cell shows good long-term stability. Using transient photocurrent and photovoltage measurements, it was found that the electron transport time and lifetime vary with the ZnO nanorod length, a trend similar to that in dye-sensitized solar cells (DSCs), suggesting a similar charge transfer process in ZnO NRA/$\text{CH}_3\text{NH}_3\text{PbI}_3$ solar cells as in conventional DSCs. Compared to $\text{CH}_3\text{NH}_3\text{PbI}_3$/TiO$_2$ solar cells, ZnO shows lower performance due to greater recombination losses.
Introduction
Dye-sensitized solar cells (DSCs) have been recognized as one of the most promising alternatives to conventional silicon solar cells.$^{1,2}$ As they combine the advantages of low-cost, high-throughput, and low-tech fabrication techniques, they have been shown to be much more competitive in terms of cost, energy-payback time and environmental impact. Recently, DSCs with a liquid redox electrolyte have been developed containing one-electron cobalt-complex redox mediators, with a record efficiency ($\eta$) up to 12.3%.$^{3,4}$ However, it would be more practical to replace liquid electrolytes with noncorrosive, nonvolatile materials to eradicate most problems related to manufacturing and production lifetime.$^5$
To date, the efficiency of solid-state DSCs (ssDSCs) is still lower than that of conventional DSCs based on a liquid electrolyte.$^6$ Generally, the lower performance of ssDSCs is attributed to incomplete pore-filling of the mesoporous TiO$_2$ with the solid hole transporting material (HTM) and to limited light harvesting because of the use of relatively thin mesoporous TiO$_2$ films.$^{7,8}$ One approach to improve the filling of sensitized films with solid HTMs is to replace the nanoparticles in the mesoporous film with vertically ordered (1D) nanostructures, providing a direct pathway for electron transport and a straight channel for filling the pores of the sensitized film with the HTM solution.$^9$ Among the 1D nanostructures, vertically aligned ZnO nanorod arrays have attracted considerable interest because of their unique material properties as well as their easy availability.$^{10}$ In particular, single crystalline ZnO nanowires enable fast electron transport and have been used for ssDSCs.$^{11,12}$ Gao et al. reported efficiencies of 1.7% and 5.65% by using 12 µm and 50 µm multilayers of ZnO nanorod arrays.$^{11,13}$ Usually, ZnO nanorod arrays are much shorter (less than 1 µm) and the reported efficiencies are less than 1%.$^{11,12,14,15}$ This can be attributed to the insufficient area for dye adsorption, and also to aggregation of dye molecules on the ZnO surface and formation of Zn$^{2+}$–dye complexes, which retards electron injection from the dye to the semiconductor.$^{16,17}$ Therefore, using inorganic sensitizers may be a good alternative for ZnO NRA solar cells.$^{18,19}$ Kim et al. investigated P3HT/CdSe/CdS/ZnO NRA solar cells, and obtained an efficiency of 1.5%.$^{20}$ In Table 1, the reported solar cell performance of solid state ZnO NRA solar cells is shown.$^{9,11–13,15,18–23}$
Using $\text{CH}_3\text{NH}_3\text{PbI}_3$ as a sensitizer appears to be a promising way to achieve better efficiency in ZnO NRA based solar cells. The lead perovskite material has a direct band gap, a large absorption coefficient ($1.5 \times 10^4$ cm$^{-1}$ at 550 nm) and a high carrier mobility, which make it very attractive as a light harvester in heterojunction solar cells.$^{24,25}$ Recently, $\text{CH}_3\text{NH}_3\text{PbI}_3$ was applied in hybrid solar cells where it was found to act as a sensitizer,$^{24}$ as an electron conductor,$^{25}$ and as a hole conductor.$^{26}$ Many questions related to charge transport and detailed structures at the interface are still open.$^{25}$ In this paper, we report on an easy processable spiro-MeOTAD/$\text{CH}_3\text{NH}_3\text{PbI}_3$/ZnO NRA solar cell, and compare the effect of the nanorod length on the solar cell performance. Furthermore, we investigate the electron transport and recombination processes and the long-term stability of such devices.
Table 1 Performance of solid-state solar cells based on ZnO nanorod arrays
| Device structure [reference] | NRA length | $\eta$/% | $V_{oc}$/V | $J_{sc}$/mA cm$^{-2}$ | FF |
|-----------------------------|------------|---------|-----------|-----------------------|----|
| Spiro-MeOTAD/D102/ZnO NRAs$^{12}$ | 600 nm | 0.093 | 0.47 | 0.73 | — |
| Spiro-MeOTAD/D102/ZnO–MgO/NRAs$^{12}$ | 600 nm | 0.156 | 0.49 | 1.12 | — |
| Spiro-MeOTAD/D102/ZnO–ZrO$_2$/NRAs$^{12}$ | 600 nm | 0.283 | 0.47 | 2.14 | — |
| Spiro-MeOTAD/D149/ZnO NRAs$^{12}$ | 600 nm | 0.088 | 0.47 | 0.71 | — |
| Spiro-MeOTAD/D149/ZnO–MgO/NRAs$^{12}$ | 600 nm | 0.278 | 0.58 | 1.53 | — |
| Spiro-MeOTAD/D149/ZnO–ZrO$_2$/NRAs$^{12}$ | 600 nm | 0.596 | 0.57 | 3.02 | — |
| P3HT/CdS/ZnO NRAs$^{20}$ | 800 nm | 0.24 | 0.34 | 1.6 | 0.43 |
| P3HT/CdSe/CdS/ZnO NRAs$^{20}$ | 800 nm | 1.5 | 0.675 | 4.2 | 0.52 |
| P3HT/N3/ZnO NRAs$^{21}$ | 250 nm | 0.13 | 0.46 | 0.72 | 0.38 |
| P3HT/Z907/ZnO NRAs$^{15}$ | 110 nm | 0.2 | 0.30 | 1.73 | 0.39 |
| P3HT/Z907/ZnO NRAs$^{22}$ | 500 nm | 0.2 | 0.23 | 2.0 | — |
| MEH-PPV/Z907/ZnO NRAs$^{11}$ | 170 nm | 0.61 | 0.29 | 6.53 | 0.32 |
| P3HT/mercurochrome/ZnO NRAs$^{23}$ | 300 nm | 0.13 | 0.34 | 0.91 | 0.43 |
| P3HT/CdS/ZnO$^{18}$ | 180 nm | 0.11 | 0.60 | 0.39 | 0.48 |
| MEH-PPV/Cds/ZnO NRAs$^{19}$ | 200–300 nm | 0.65 | 0.78 | 2.87 | 0.29 |
| CuSCN/NF19/ZnO NRAs$^{13}$ | 11–12 µm | 1.7 | 0.57 | 8.0 | — |
| Spiro-MeOTAD/Z907/ZnO NRAs–TiO$_2$$^9$ | 50 µm | 5.65 | 0.788 | 12.2 | 0.51 |
**Experimental**
Fluorine-doped tin oxide (FTO)-coated glass (Pilkington TEC15, 15 Ω $\square^{-1}$) was coated with ZnO colloids as described elsewhere, providing a seeding layer for the growth of the ZnO nanorods as well as an electronic blocking underlayer. A solution of Zn(NO$_3$)$_2$·6H$_2$O (0.025 M), polyethylenimine (branched) (3 mM) and hexamethylenetetramine (HMTA) (0.025 M) in distilled water$^{27}$ was used for hydrothermal growth of the nanorods. A beaker containing this solution was placed with FTO substrates facing down in an oil bath maintained at 85 °C during the whole growth process.$^{27}$ The perovskite sensitizer CH$_3$NH$_3$PbI$_3$ was prepared according to the reported procedure.$^{24}$ Hydroiodic acid (30 mL, 57 wt% in water) was stirred with methylamine (27.8 mL, 0.273 mol, 40% in methanol) at 0 °C for 2 h. The resulting solution was evaporated and the resulting methylammonium iodide (CH$_3$NH$_3$I) was readily available for further processing. To prepare CH$_3$NH$_3$PbI$_3$, equimolar amounts of CH$_3$NH$_3$I and PbI$_2$ were mixed in γ-butyrolactone at 60 °C and stirred overnight. The ZnO nanorod array substrates were sintered in air at 400 °C for 30 minutes. A 40 wt% perovskite precursor solution was dispensed onto the ZnO nanorod array film via spin-coating at 1500 rpm for 30 seconds, followed by heating at 100 °C for 10 min on a hot-plate. The composition of the spin-coating solution for the hole transport material (HTM) was 0.170 M 2,2',7,7'-tetrakis-(N,N-di-p-methoxyphenyl-amine)-9,9'-spirobifluorene (spiro-MeOTAD), 0.064 M bis(trifluoromethane)sulfonylimide lithium salt (LiTFSI) and 0.198 M 4-tert-butylpyridine (TBP) in chlorobenzene. The CH$_3$NH$_3$PbI$_3$-sensitized ZnO films were coated with the HTM solution using a spin-coating method at 4000 rpm. A silver contact with a thickness of 200 nm was deposited onto the solar cell by thermal evaporation.
Current–voltage ($I-V$) characteristics were measured using a Keithley 2400 source meter and a Newport solar simulator (model 91160) giving light with an AM 1.5 G spectral distribution, which was calibrated using a certified reference solar cell (Fraunhofer ISE) at an intensity of 1000 W m$^{-2}$, or, with the help of a neutral density filter, at 100 W m$^{-2}$. A black mask (0.2 cm$^2$) was applied on top of the cell to avoid a significant additional contribution from light falling on the device outside the active area.
Incident photon to current conversion efficiency (IPCE) spectra were recorded using a computer-controlled setup consisting of a xenon light source (Spectral Products ASBXE-175), a monochromator (Spectral Products CM110), and a Keithley 2700 Multimeter, and calibrated using a certified reference solar cell (Fraunhofer ISE). The electron lifetime and transport time were measured using a white LED (Luxeon Star 1W) as the light source. Voltage and current traces were recorded with a 16-bit resolution digital acquisition board (National Instruments) in combination with a current amplifier (Stanford Research Systems SR570) and a custom-made system using electromagnetic switches.
The cross-sections of the solar cell devices were imaged using a scanning electron microscope (SEM). Samples were scribed on the substrate (glass) side and cracked prior to acquisition of the SEM-images (Zeiss LEO1550 high resolution SEM). The acceleration voltage (EHT) was 10 kV and the working distance (WD) ranged from 12 to 13.5 mm. Entire cross-sections were imaged at a magnification of 50 000.
**Results and discussion**
In Fig. 1, cross-sectional scanning electron microscopy (SEM) images of the ZnO nanorod array samples are shown. Fig. 1a–f show the bare ZnO nanorod samples obtained at different reaction times. The thickness of the nanorod array layer varied between 400 nm and 1400 nm, while the diameter of the nanorods was almost constant (~50 nm). Upon longer growth of the ZnO nanorod array, the orientation of the rod becomes more perpendicularly aligned with respect to the substrate. Fig. 1g–l show the samples after deposition of CH$_3$NH$_3$PbI$_3$ and...
spiro-MeOTAD. The ZnO NRA appears to be unaffected by the deposition. It can be seen that CH$_3$NH$_3$PbI$_3$ effectively penetrates into the interspaces of the ZnO nanorod arrays. The degree of filling can be controlled by varying the concentration of the spin-coating solution.\textsuperscript{28} If the concentration is very high, a large degree of filling will be found, and the formation of a capping layer on top of the filled structure by the excess material is expected.\textsuperscript{25} For the CH$_3$NH$_3$PbI$_3$/ZnO nanorod array electrodes investigated here, however, there was no evidence for the existence of a significant capping layer of the lead perovskite material. Also visible on the SEM images is the HTM spiro-MeOTAD, which forms a capping layer of about 400 nm on top of the CH$_3$NH$_3$PbI$_3$/ZnO NRA structure. It is not clear whether the spiro-MeOTAD infiltrates into the CH$_3$NH$_3$PbI$_3$/ZnO nanorod structure, but this is expected to occur if some porosity is remaining after CH$_3$NH$_3$PbI$_3$ deposition.
UV-Vis spectra of CH$_3$NH$_3$PbI$_3$/ZnO nanorod samples show a broad absorption spectrum ranging from 400 nm to 800 nm, see Fig. 2. The absorbance increased with the length of the nanorods, indicating that more of the organic lead perovskite material is deposited. This is in agreement with the SEM analysis.
Fig. 3 shows the effect of the ZnO nanorod length on the key photovoltaic performance parameters of spiro-MeOTAD/CH$_3$NH$_3$PbI$_3$/ZnO nanorod array solar cells. The short-circuit current density ($J_{sc}$), fill factor (FF) and power conversion efficiency ($\eta$) first increase and then decrease, while the open circuit voltage ($V_{oc}$) decreases steadily with increasing nanorod length. This trend is similar to that described for ZnO NRA/CdS/CdSe solar cell devices; the difference is that $J_{sc}$ in the perovskite solar cell is roughly 3 times higher than that in the CdS/CdSe system ($J_{sc} = 4.23$ mA cm$^{-2}$).\textsuperscript{29} $J_{sc}$ is strongly dependent on the nanorod length; it increases from 8.9 to 12.7 mA cm$^{-2}$ as the nanorod length increases from 400 to 1000 nm. This can be attributed to the higher light harvesting efficiency, since a larger amount of CH$_3$NH$_3$PbI$_3$ was loaded into the ZnO nanorod structure as the ZnO surface area increased (Fig. 1b). However, after reaching a maximum of 12.7 mA cm$^{-2}$, $J_{sc}$ decreases slightly. This may be caused by increased recombination of charges before collection at the contacts, as will be discussed later. The $V_{oc}$ decreased from 0.75 V to 0.42 V as the nanorod length increased from 400 nm to 1400 nm. This strong dependence of $V_{oc}$ on the nanorod length can be attributed to the larger interfacial area: this gives rise to increased charge separation (and hence photocurrent), but also to increased recombination. The latter dominates at ZnO nanorod lengths larger than one micrometer.
The $J-V$ curve of our best spiro-MeOTAD/CH$_3$NH$_3$PbI$_3$/ZnO nanorod solar cell is shown in Fig. 4. The top efficiency was 5.0% under 1000 W m$^{-2}$ AM 1.5 G illumination with a $J_{sc}$ of 12.7 mA cm$^{-2}$, a $V_{oc}$ of 0.68 V and a fill factor of 0.58. At 10% of this light intensity an efficiency of 5.2% was recorded. Note that the $V_{oc}$ and $J_{sc}$ values found here are much higher than typical values for dye-sensitized ZnO nanorod array solar cells with a rod length ranging between 200 nm and 1800 nm ($V_{oc} = 0.25–0.59$ V, $J_{sc} = 0.31–2.0$ mA cm$^{-2}$). The CH$_3$NH$_3$PbI$_3$/ZnO nanorod system exhibits a broad IPCE spectrum from 400 to 800 nm with maximum values above 60% in the wavelength range of 400–540 nm (Fig. 2), which is much better than comparable values found in ssDSCs based on ZnO nanorod arrays.
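The reported efficiency can be cross-checked from the quoted $J-V$ parameters using the standard relation $\eta = J_{sc} \cdot V_{oc} \cdot \mathrm{FF} / P_{in}$. A short sketch of this arithmetic (standard photovoltaic bookkeeping, not code from the paper):

```python
# Cross-check of the reported cell efficiency from its J-V parameters:
# eta = (Jsc * Voc * FF) / P_in, with P_in = 1000 W/m^2 = 100 mW/cm^2
# for AM 1.5 G illumination.
def efficiency(jsc_mA_cm2, voc_V, ff, p_in_mW_cm2=100.0):
    """Power conversion efficiency in percent."""
    p_out = jsc_mA_cm2 * voc_V * ff  # output power density, mW/cm^2
    return 100.0 * p_out / p_in_mW_cm2

# Best-cell values quoted in the text: Jsc = 12.7 mA/cm^2, Voc = 0.68 V,
# FF = 0.58.
eta = efficiency(jsc_mA_cm2=12.7, voc_V=0.68, ff=0.58)
print(f"{eta:.1f}%")  # -> 5.0%, consistent with the reported efficiency
```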
The light intensity dependence of $J_{sc}$ and $V_{oc}$ was investigated in the CH$_3$NH$_3$PbI$_3$/ZnO nanorod devices, see Fig. 5a and b. A linear dependence of $J_{sc}$ on incident light intensity $I$ is found (a fit of $J_{sc} \propto I^\alpha$ yielded $\alpha = 1.0$), which is typical for well-behaved solar cells. The slope of $V_{oc}$ versus the logarithm of intensity varied slightly with the nanorod length: it was about 170 mV per decade for the 600 nm and 1.0 μm lengths, and 118 mV per decade for the 1.4 μm length. Notably, this is significantly higher than the value of 59 mV per decade that is expected in dye-sensitized solar cells when recombination of a conduction band electron from the metal oxide to the redox electrolyte follows first order kinetics. It seems a reasonable approximation that the doping level of the spiro-MeOTAD does not change much when the light intensity is varied, so that the Fermi level in the HTM can be considered constant. Deviation from first order recombination kinetics can be attributed to trap-assisted recombination, to inhomogeneous recombination due to variations in the thickness of the CH$_3$NH$_3$PbI$_3$ absorber, as well as to a poor blocking layer at the FTO contact.
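The mV-per-decade figures quoted above are, in the usual treatment, the slope of a straight-line fit of $V_{oc}$ against $\log_{10}$ of the light intensity. A sketch of that fit with synthetic, purely illustrative data (not the paper's measurements):

```python
import math

# "mV per decade" = least-squares slope of Voc (V) vs log10(intensity),
# scaled to millivolts. The data below are synthetic, constructed with a
# built-in slope of 170 mV/decade for illustration only.
def slope_mV_per_decade(intensities, vocs):
    """Ordinary least-squares slope of Voc vs log10(I), in mV/decade."""
    xs = [math.log10(i) for i in intensities]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(vocs) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, vocs))
    den = sum((x - mx) ** 2 for x in xs)
    return 1000.0 * num / den

I = [10, 100, 1000]                                   # W/m^2 (synthetic)
V = [0.34 + 0.170 * math.log10(i / 10) for i in I]    # ideal 170 mV/dec
print(round(slope_mV_per_decade(I, V)))               # -> 170
```

The same fit applied to real $(I, V_{oc})$ pairs would recover the 170 or 118 mV per decade values reported for the different rod lengths.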
Photovoltage transient measurements under open-circuit conditions were used to determine the electron lifetime ($\tau_e$), as is done in DSCs. The measured $\tau_e$ is shown as a function of $V_{oc}$ (Fig. 5c). As expected, $\tau_e$ decreases with increasing $V_{oc}$ (and increasing light intensity). Moreover, the nanorod length affects the lifetime very strongly. The lifetime values in Fig. 5c are similar to those found in comparable ssDSCs, indicating that the enhanced solar cell performance found here mainly comes from the enhanced light harvesting. For optimized solar cell performance, a ZnO nanorod length of about 800 nm is sufficient: longer nanorods do not lead to significantly more light harvesting (note that the devices have a reflective metal back contact), but do lead to significantly faster electron–hole recombination.
Photocurrent response times were determined under short-circuit conditions using small square wave modulation of the light intensity, see Fig. 5d. We attribute the photocurrent response time to the electron transport time ($t_{tr}$) in the ZnO nanorods, as the HTM is highly doped. Interestingly, the dynamics of electron transport are insensitive to light intensity, which differs from the typical enhancement of transport dynamics with increasing light intensity in traditional DSCs. In that case, it is assumed that most electrons are trapped in states that are distributed over a range of energies. Electron transport is thought to occur by thermal detrapping, followed by rapid movement through the conduction band until the next trapping event, where the electron is again largely immobilized. Multiple trapping/detrapping does not seem to occur in the ZnO NRA, which may be attributed to the fact that each nanorod is a single crystal. Similar results for ZnO NRAs in DSCs have been reported before. The inset of Fig. 5d shows that the transport time scales with the square of the nanorod length, suggesting that electron transport occurs by diffusion. Using $D = L^2/(2.47t_{tr})$, an electron diffusion coefficient ($D$) of $5.4 \times 10^{-5}$ cm$^2$ s$^{-1}$ is calculated, which is much lower than expected for electron diffusion in a ZnO single crystal.
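The diffusion-coefficient estimate above follows directly from $D = L^2/(2.47\,t_{tr})$. In the sketch below the transport time is back-calculated from the quoted $D$ and an assumed 1.0 µm rod length, purely for illustration, since $t_{tr}$ itself is not stated in the text:

```python
# D = L^2 / (2.47 * t_tr): effective electron diffusion coefficient
# from nanorod length L and photocurrent response (transport) time t_tr.
def diffusion_coefficient(length_cm, t_tr_s):
    return length_cm ** 2 / (2.47 * t_tr_s)

# Illustrative numbers only: L = 1.0 um (assumed); t_tr is chosen so that
# D reproduces the paper's quoted 5.4e-5 cm^2/s.
L = 1.0e-4                        # 1.0 um expressed in cm
t_tr = L ** 2 / (2.47 * 5.4e-5)   # ~75 us, back-calculated assumption
D = diffusion_coefficient(L, t_tr)
print(f"D = {D:.1e} cm^2/s")      # -> D = 5.4e-05 cm^2/s
```

The $L^2$ scaling in this formula is what the inset of Fig. 5d tests: doubling the rod length should quadruple the transport time if transport is diffusive.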
The $t_{tr}$ and $\tau_e$ dependence on the nanorod length is similar to that in DSCs based on ZnO NRAs, indicating that the injected electrons are transported through the ZnO nanorods, in contrast to the reported Al$_2$O$_3$/CH$_3$NH$_3$PbI$_2$Cl solar cell, where electron transport occurs in the perovskite layer. The electron transport time in the ZnO NRAs is slightly faster than that in mesoporous TiO$_2$ films of similar thickness. The spiro-MeOTAD/CH$_3$NH$_3$PbI$_3$/ZnO nanorod solar cells show lower performance compared to similar devices with mesoporous TiO$_2$ as the metal oxide. The difference mainly stems from the difference in $V_{oc}$, which is about 200 mV higher in TiO$_2$-based devices. This is surprising, since TiO$_2$ (anatase) and ZnO are expected to have a similar conduction band edge potential, and in DSCs similar $V_{oc}$ values are indeed found. The origin of the lower $V_{oc}$ appears to be the relatively fast recombination in ZnO NRA/perovskite solar cells, which is most evident for the devices with relatively long nanorods.
The stability of spiro-MeOTAD/CH$_3$NH$_3$PbI$_3$/ZnO nanorod devices was investigated by storing them in air at room temperature without further encapsulation. At selected times, the devices were characterized under 1000 W m$^{-2}$ AM1.5 illumination, see Fig. 6. After 500 hours of storage the overall conversion efficiency was only slightly decreased from a maximum value of 5.0% to 4.35%. This result is much better than previous reports on stability of DSCs based on ZnO nanostructures.
**Conclusions**
In conclusion, a spiro-MeOTAD/CH$_3$NH$_3$PbI$_3$/ZnO nanorod array solar cell was developed and characterized in this report. The ZnO nanorods offer a fast electron transport pathway and the CH$_3$NH$_3$PbI$_3$ results in efficient visible light harvesting. The optimized solar cell exhibited an efficiency of 5.0% under 1000 W m$^{-2}$ AM1.5 illumination. The fabricated perovskite solar cell exhibited promising initial stability. The better solar cell performance compared to other solar cells based on ZnO nanorod arrays is attributed to the enhanced light harvesting. The dependence of the electron transport and recombination on the nanorod length is similar to that found in dye-sensitized solar cells based on ZnO NRAs. Compared to CH$_3$NH$_3$PbI$_3$/TiO$_2$ solar cells, ZnO shows a lower performance due to more recombination losses.
**Acknowledgements**
We thank the Swedish Energy Agency, the Swedish Research Council (VR), the Göran Gustafsson Foundation, the STandUP for Energy program, and the Knut and Alice Wallenberg Foundation for financial support.
**References**
1 A. Hagfeldt, G. Boschloo, L. Sun, L. Kloo and H. Pettersson, *Chem. Rev.*, 2010, **110**, 6595–6663.
2 B. O’Regan and M. Grätzel, *Nature*, 1991, **353**, 737–740.
3 A. Yella, H. W. Lee, H. N. Tsao, C. Y. Yi, A. K. Chandiran, M. K. Nazeeruddin, E. W. G. Diau, C. Y. Yeh, S. M. Zakeeruddin and M. Grätzel, *Science*, 2011, **334**, 629–634.
4 S. M. Feldt, E. A. Gibson, E. Gabrielsson, L. Sun, G. Boschloo and A. Hagfeldt, *J. Am. Chem. Soc.*, 2010, **132**, 16714–16724.
5 U. Bach, D. Lupo, P. Comte, J. E. Moser, F. Weissortel, J. Salbeck, H. Spreitzer and M. Grätzel, *Nature*, 1998, **395**, 583–585.
6 J. Burschka, A. Dualeh, F. Kessler, E. Baranoff, N.-L. Cevey-Ha, C. Yi, M. K. Nazeeruddin and M. Grätzel, *J. Am. Chem. Soc.*, 2011, **133**, 18042–18045.
7 P. Docampo, A. Hey, S. Guldin, R. Gunning, U. Steiner and H. J. Snaith, *Adv. Funct. Mater.*, 2012, 1–10.
8 L. Schmidt-Mende, S. M. Zakeeruddin and M. Grätzel, *Appl. Phys. Lett.*, 2005, **86**, 013504.
9 C. K. Xu, J. M. Wu, U. V. Desai and D. Gao, *Nano Lett.*, 2012, **12**, 2420–2424.
10 D. Q. Bi, F. Wu, W. J. Yue, Y. Guo, W. Shen, R. X. Peng, H. A. Wu, X. K. Wang and M. T. Wang, *J. Phys. Chem. C*, 2010, **114**, 13846–13852.
11 D. Q. Bi, F. Wu, Q. Y. Qu, W. J. Yue, Q. Cui, W. Shen, R. Q. Chen, C. W. Liu, Z. L. Qiu and M. T. Wang, *J. Phys. Chem. C*, 2011, **115**, 3745–3752.
12 N. O. V. Plank, I. Howard, A. Rao, M. W. B. Wilson, C. Ducati, R. S. Mane, J. S. Bendall, R. R. M. Louca, N. C. Greenham, H. Miura, R. H. Friend, H. J. Snaith and M. E. Welland, *J. Phys. Chem. C*, 2009, **113**, 18515–18522.
13 U. V. Desai, C. K. Xu, J. M. Wu and D. Gao, *Nanotechnology*, 2012, **23**, 205401.
14 Y. M. Lee and H. W. Yang, *J. Solid State Chem.*, 2011, **184**, 615–623.
15 A. M. Peiro, P. Ravirajan, K. Govender, D. S. Boyle, P. O’Brien, D. D. C. Bradley, J. Nelson and J. R. Durrant, *J. Mater. Chem.*, 2006, **16**, 2088–2096.
16 J. A. Anta, E. Guillén and R. Tena-Zaera, *J. Phys. Chem. C*, 2012, **116**, 11413–11425.
17 T. Horiuchi, H. Miura, K. Sumioka and S. Uchida, *J. Am. Chem. Soc.*, 2004, **126**, 12218–12219.
18 L. D. Wang, D. X. Zhao, Z. S. Su, B. H. Li, Z. Z. Zhang and D. Z. Shen, *J. Electrochem. Soc.*, 2011, **158**, H804–H807.
19 E. D. Spoerke, M. T. Lloyd, E. M. McCready, D. C. Olson, Y. J. Lee and J. W. P. Hsu, *Appl. Phys. Lett.*, 2009, **95**, 3232231.
20 H. Kim, H. Jeong, T. K. An, C. E. Park and K. Yong, *ACS Appl. Mater. Interfaces*, 2013, **5**, 268–275.
21 T. H. Lee, H. J. Sue and X. Cheng, *Nanoscale Res. Lett.*, 2011, **6**, 517.
22 P. Ravirajan, A. M. Peiro, M. K. Nazeeruddin, M. Graetzel, D. D. C. Bradley, J. R. Durrant and J. Nelson, *J. Phys. Chem. B*, 2006, **110**, 7635–7639.
23 T. H. Lee, H. J. Sue and X. Cheng, *Nanotechnology*, 2011, **22**, 285401.
24 H. S. Kim, C. R. Lee, J. H. Im, K. B. Lee, T. Moehl, A. Marchioro, S. J. Moon, R. Humphry-Baker, J. H. Yum, J. E. Moser, M. Gratzel and N. G. Park, *Sci. Rep.*, 2012, **2**, 591.
25 M. M. Lee, J. Teuscher, T. Miyasaka, T. N. Murakami and H. J. Snaith, *Science*, 2012, **338**, 643–647.
26 L. Etgar, P. Gao, Z. Xue, Q. Peng, A. K. Chandiran, B. Liu, M. K. Nazeeruddin and M. Grätzel, *J. Am. Chem. Soc.*, 2012, **134**, 17396–17399.
27 (a) M. Law, L. E. Greene, J. C. Johnson, R. Saykally and P. D. Yang, *Nat. Mater.*, 2005, **4**, 455–459; (b) L. E. Greene, M. Law, J. Goldberger, F. Kim, J. C. Johnson, Y. Zhang, R. J. Saykally and P. D. Yang, *Angew. Chem., Int. Ed.*, 2003, **42**, 3031–3034.
28 P. Docampo, A. Hey, S. Guldin, R. Gunning, U. Steiner and H. J. Snaith, *Adv. Funct. Mater.*, 2012, **22**, 5010–5019.
29 A. J. Frank, N. Kopidakis and J. v. d. Lagemaat, *Coord. Chem. Rev.*, 2004, **248**, 1165–1179.
30 E. Galoppini, J. Rochford, H. H. Chen, G. Saraf, Y. C. Lu, A. Hagfeldt and G. Boschloo, *J. Phys. Chem. B*, 2006, **110**, 16159–16161.
31 J. Nissfolk, K. Fredin, J. Simiyu, L. Haggman, A. Hagfeldt and G. Boschloo, *J. Electroanal. Chem.*, 2010, **646**, 91–99.
32 M. Quintana, T. Edvinsson, A. Hagfeldt and G. Boschloo, *J. Phys. Chem. C*, 2006, **111**, 1035–1041.
The Artesyn Embedded Technologies MVME7100, featuring the system-on-chip MPC864xD processor, offers a growth path for VMEbus customers with applications on the previous generation of VME, specifically the MPC74xx processors. The system-on-chip implementation offers power/thermal, reliability, and lifecycle advantages not typically found in alternative architectures.
The Artesyn MVME7100 single-board computer (SBC) helps OEMs of industrial, medical, and defense/aerospace equipment add performance and features for competitive advantage while still protecting the fundamental investment in VMEbus and related technologies. Customers can keep their VMEbus infrastructure (chassis, backplanes, and other VMEbus and PMC boards) while improving performance and extending the lifecycle. Also, the extended lifecycle of Artesyn computing products helps reduce churn in development and support efforts resulting from frequent product changes.
The faster processor and 2eSST VMEbus interface combine to offer significant performance improvement. New cost-effective peripherals can be integrated easily using USB interfaces.
Extended temperature (-40 °C to +71 °C) variants support a wide range of operating and storage temperatures in addition to increased tolerances for shock. This enables the boards to operate in harsh environments while maintaining structural and operational integrity.
Overview
VMEBUS 2ESST PERFORMANCE
The 2eSST protocol offers an available VMEbus bandwidth of up to 320 MB/s, an increase of up to 8x over VME64, while maintaining backward compatibility with VME64 and VME32. The combination of the latest Texas Instruments VMEbus transceivers and the Tundra Tsi148 VMEbus bridge’s legacy protocol support allows customers to integrate the MVME7100 series into their existing infrastructure, providing backward compatibility and thereby preserving their investment in existing VMEbus boards, backplanes, chassis and software.
BALANCED PERFORMANCE
The MVME7100 series provides more than just faster VMEbus transfer rates; it provides balanced performance from the processor, memory subsystem, local busses and I/O subsystems. This, coupled with a wealth of I/O interfaces, makes the MVME7100 series ideal for use as an application-specific compute blade or an intelligent I/O blade/carrier. The NXP MPC864xD system-on-chip (SoC) processor, running at speeds up to 1.3 GHz, is well-suited for I/O- and data-intensive applications. The integrated SoC design creates an I/O-intensive, state-of-the-art package that combines dual low-power processing cores with on-chip L2 cache and dual integrated DDR2 memory controllers, PCI Express, DMA, Ethernet and local device I/O. The on-chip PCI Express interface and dual DDR2 memory busses are well matched to the processor. To ensure optimal I/O performance, the 8x PCI Express port is connected to a five-port PCI Express switch. Three 4x PCI Express ports are connected to PCI Express-to-PCI-X bridges which provide independent PCI-X busses for the two PMC-X sites and the VMEbus interface. A 1x PCI Express port is connected to a PCI Express-to-PCI bridge which is connected to the USB chip (commercial temperature only). The MVME7100 also offers quad Gigabit Ethernet interfaces, USB 2.0, and five RS-232 serial connections. All of this adds up to a set of well-balanced, high-performance subsystems offering unparalleled performance.
Backward Compatibility
The MVME7100 series continues Artesyn's practice of providing a migration path to a single platform from embedded controllers such as the MVME16x, MVME17x, and MVME2300/MVME2400, and from Artesyn SBCs such as the MVME2600/2700. Like the MVME3100, MVME5100, MVME5500, and MVME6100 series, the MVME7100 series merges the best features of Artesyn's embedded controllers and SBCs, enabling OEMs to support varying I/O requirements with the same base platform and simplifying part number maintenance, technical expertise requirements, and sparing.
The MVME7100 series offers customers an alternate migration path from the MVME2100, MVME2300, MVME2400, MVME2600, MVME2700, MVME3100, MVME5100, MVME5500 and MVME6100 boards to allow them to take advantage of features such as the integrated MPC864xD SoC processor, DDR2 memory, Gigabit Ethernet, PCI-X, PCI Express, USB, and 2eSST.
PCI EXPANSION
The MVME7100 has an 8x PCI Express connection to support PCI Express expansion carriers such as the Artesyn XMCspan.
TRANSITION MODULES
The MVME7216E transition module provides industry-standard connector access to two 10/100/1000BaseTX ports and four asynchronous serial ports configured as RS-232 DTE, all via RJ-45 connectors. The MVME7216E RTM is designed to connect directly to the VME backplane in chassis with an 80 mm deep rear-transition area.
Software Support
FIRMWARE MONITOR
The MVME7100 firmware (known as MOTLoad) is resident in the MVME7100 flash and provides power-on self-test, initialization and operating system booting capabilities. In addition, it provides a debugger interface similar to the time-proven “BUG” interface on previous VMEbus boards from Artesyn.
OPERATING SYSTEMS AND KERNELS
The MVME7100 series supports booting a variety of operating systems including a complete range of real-time operating systems and kernels. Artesyn Embedded Technologies Embedded Computing Linux (2.6.25 kernel) is available now. VxWorks BSPs (5.5.1 AMP, 6.8 SMP) are provided and supported by Wind River Systems.
Specifications
PROCESSOR
- Microprocessor: NXP MPC864xD with dual PowerPC e600 cores
- Clock Frequency: 1.06 or 1.3 GHz
- On-chip L1 Cache (I/D): 32 K/32 K per core
- On-chip L2 Cache: 1MB per core
SYSTEM CONTROLLER
- Integrated within MPC864xD
MAIN MEMORY
- Type: Double data rate (DDR2) SDRAM
- Speed: DDR2-533
- Capacity: 1GB or 2GB
- Configuration: Dual memory controller
FLASH MEMORY
- Type: NOR flash, on-board programmable
- Capacity: 128MB
- Write Protection: Hardware via switch, software via register or sector lock
- Type: NAND flash, on-board programmable
- Capacity: 2GB, 4GB or 8GB
- Write Protection: Software via register
- Supported by YAFFS (Linux) or Datalight FlashFX® Pro (VxWorks) under separate license
NON-VOLATILE MEMORY
- Type: SEE PROM, on-board programmable
- Capacity: 128 KB (available for users), 8 KB baseboard Vital Product Data (VPD), and two (2) 256B Serial Presence Detect (SPD)
- Type: MRAM
- Capacity: 512 KB
VMEBUS INTERFACE
- Compliance: ANSI/VITA 1-1994 VME64 (IEEE STD 1014), ANSI/VITA 1.1-1997 VME64 Extensions, VITA 1.5-199x 2eSST
- Controller: Tundra Tsi148 PCI-X to VMEbus bridge with support for VME64 and 2eSST protocols
- DTB Master: A16, A24, A32, A64; D08-D64, SCT, BLT, MBLT, 2eVME, 2eSST
- DTB Slave: A16, A24, A32, A64; D08-D64, SCT, BLT, MBLT, 2eVME, 2eSST, UAT
- Arbiter: RR/PRI
- Interrupt Handler/Generator: IRQ 1-7/Any one of seven IRQs
- System Controller: Yes, switchable or auto detect
- Location Monitor: Two, LMA32
ETHERNET INTERFACE
- Controller: MPC864xD Triple Speed (TSEC) Ethernet Controllers
- Interface Speed: Four @ 10/100/1000Mbps (TSEC)
- Connector: Two Gigabit Ethernet ports routed to front panel RJ-45, two Gigabit Ethernet ports to VMEbus P2 connector, pin out matching MVME7216E RTM
- Indicators: Link status/speed/activity
ASYNCHRONOUS SERIAL PORTS
- Port 1
- Controller: MPC864xD Duart (second port N/C)
- Number of Ports: One 16550 compatible
- Configuration: RS-232 DTE (RxD, TxD, RTS, CTS)
- Async Baud Rate, bps max.: 38.4 K RS-232, 115 Kbps raw
- Connector: One front panel Mini DB-9
- Mini DB-9 to DB-9 adapter cable: SERIAL-MINI-D2
- Ports 2-5
- Controller: Exar ST16C544D Quart
- Number of Ports: Four 16550 compatible
- Configuration: RS-232 (RxD, TxD, RTS, CTS)
- Async Baud Rate, b/s max: 38.4 K RS-232, 115 Kbps raw
- Connector: Via VMEbus P2 connector, pinout matching MVME7216E RTM
- USB Interface (commercial temperature only)
- Controller: NEC µPD720101
- Configuration: USB 2.0
- Number of ports: One
- Connector: One powered port routed to front panel
DUAL IEEE P1386.1 PCI MEZZANINE CARD SLOTS
- Address/Data: A32/D32/D64, PMC PN1, PN2, PN3, PN4 connectors (PN4 for PMC1 only)
- PCI Bus Clock: 33 MHz, 66 MHz or 100 MHz PCI/PCI-X
- Signaling: 3.3 V
- Power: +3.3 V, +5 V, ±12 V
- Module Types: Two single-wide or one double-wide, front panel or P2 I/O, PMC and PrPMC support, PMC1 site Pn4 routed to VMEbus P2 connector rows A and C
PCI EXPANSION CONNECTOR FOR INTERFACE TO XMCSPAN BOARDS
- 8x PCI Express interface
- One 76-pin connector located on MVME7100 planar
COUNTERS/TIMERS
- TOD Clock Device: Maxim DS1375 I²C device with battery backup
- Cell Storage Life: 10 years at 25 °C
- Cell Capacity Life: One year at 100% duty cycle, 25 °C
- Removable Battery: Yes
- Real-Time Timers/Counters: Four, 32-bit programmable timers in PLD; four, 32-bit programmable/cascadable timers in MPC864xD
- Watchdog Timer: In PLD
BOARD SIZE AND WEIGHT
- Height: 233.4 mm (9.2 in.)
- Depth: 160.0 mm (6.3 in.)
- Front Panel Height: 261.8 mm (10.3 in.)
- Width: 19.8 mm (0.8 in.)
- Max. Component Height: 14.8 mm (0.58 in.)
- Weight: 0.68 kg (1.5 lbs.)
POWER REQUIREMENTS
| Board Variant | Power (+5 V ±5%) |
|---------------|------------------|
| MVME7100-0161 | Typical: 40 W |
| | Maximum: 55 W |
| MVME7100-0163 | Typical: 40 W |
| | Maximum: 55 W |
| MVME7100-0171 | Typical: 45 W |
| | Maximum: 60 W |
| MVME7100-0173 | Typical: 45 W |
| | Maximum: 60 W |
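For chassis power budgeting, the wattage figures in the table translate directly into +5 V supply current via I = P / V. A minimal sketch, grouping the variants that share the same figures:

```python
# Back-of-envelope supply current for the +5 V rail, computed from the
# typical/maximum power figures in the table above (I = P / V).
RAIL_V = 5.0

power_w = {
    "MVME7100-0161/-0163": (40, 55),  # (typical, maximum) watts
    "MVME7100-0171/-0173": (45, 60),
}

for variant, (typ, mx) in power_w.items():
    print(f"{variant}: {typ / RAIL_V:.0f} A typical, {mx / RAIL_V:.0f} A maximum")
```

At the maximum figures, the +5 V rail must source 11 to 12 A per board, which is worth accounting for when sizing backplane power.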
ESTIMATED MTBF
Expected field MTBF estimate based on Telcordia SR-332, issue 1, ground fixed, controlled environment, unit ambient air temperature of 40 °C is 1,066,000 hours at 60% confidence level. Contact Artesyn for alternative environments or temperatures.
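For availability planning, the quoted MTBF can be converted into an annualized failure probability, assuming the constant-failure-rate (exponential) model that underlies Telcordia SR-332 predictions:

```python
import math

# Convert the quoted MTBF (1,066,000 hours, ground fixed, 40 degC) into an
# annualized failure probability, assuming a constant failure rate
# (exponential model) -- the usual assumption behind Telcordia SR-332 figures.
MTBF_HOURS = 1_066_000
HOURS_PER_YEAR = 8760

failure_rate = 1 / MTBF_HOURS                       # failures per hour
afr = 1 - math.exp(-HOURS_PER_YEAR * failure_rate)  # annualized failure prob.
print(f"Annualized failure rate: {afr:.2%}")
```

At roughly 0.8% per year, a fleet of 100 boards would expect on the order of one failure per year under these conditions.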
OTHER FEATURES
- RoHS compliant
- Jumper-less configuration
- On-board temperature sensor (Maxim MAX6649)
- JTAG header for connection of diagnostic tools
FRONT PANEL
- IEEE or Scanbe handles
- Connectors for serial, Gigabit Ethernet and USB ports (commercial temperature only)
- Openings for PMC sites
Transition Modules
I/O CONNECTORS
- MVME7216E
- Asynchronous Serial Ports: Four, RJ-45, labeled as COM2-5
- Ethernet: Two 10/100/1000BaseTX, RJ-45
NON-VOLATILE STORAGE
- 8 KB VPD SEE PROM
BOARD SIZE
- Height: 233.4 mm (9.2 in.)
- Depth: 80.0 mm (3.1 in.)
- Front Panel Height: 261.8 mm (10.3 in.)
- Front Panel Width: 19.8 mm (0.8 in.)
All Modules
ENVIRONMENTAL
| | Commercial | -ET |
|------------------------|------------|-----------|
| Cooling Method | Forced Air | Forced Air|
| Operating Temperature | 0 °C to +55 °C | −40 °C to +71 °C |
| Storage Temperature | −40 °C to +85 °C | −50 °C to +100 °C |
| Vibration Sine | 1G, 5 - 200 Hz | 1G, 15 - 2000 Hz* |
| Vibration Random | N/A | 0.0007 g²/Hz, 15 - 2000 Hz* |
| Shock | N/A | 4 g / 11 ms* |
| Humidity | 5% to 90% RH | to 100% RH |
| Conformal Coating | Optional | Optional |
*Final ET shock and vibration capabilities TBD. Values shown are minimums.
SAFETY
All printed wiring boards (PWBs) are manufactured with a flammability rating of 94V-0 by UL recognized manufacturers.
ELECTROMAGNETIC COMPATIBILITY (EMC)
- Intended for use in systems meeting the following regulations:
- U.S.: FCC Part 15, Subpart B, Class A (non-residential)
- Canada: ICES-003, Class A (non-residential)
- Artesyn board products are tested in a representative system to the following standards:
- CE Mark per European EMC Directive 89/336/EEC with Amendments; Emissions: EN55022 Class B; Immunity: EN55024
## Ordering Information
| Part Number | Description | Weight |
|----------------------|-----------------------------------------------------------------------------|--------|
| MVME7100-0161 | 1.06 GHz MPC8640D, 1GB DDR2 memory, 4GB NAND flash, SCANBE (ENP1) | 0.57 kg|
| MVME7100-0163 | 1.06 GHz MPC8640D, 1GB DDR2 memory, 4GB NAND flash, IEEE (ENP1) | 0.61 kg|
| MVME7100-0171 | 1.3 GHz MPC8641D, 2GB DDR2 memory, 8GB NAND flash, SCANBE (ENP1) | 0.57 kg|
| MVME7100-0171-2GF | 1.3 GHz MPC8641D, 2GB DDR2 memory, 2GB NAND flash, SCANBE (ENP1) | 0.57 kg|
| MVME7100-0173* | 1.3 GHz MPC8641D, 2GB DDR2 memory, 8GB NAND flash, IEEE (ENP1) | 0.62 kg|
| MVME7100-0173-2GF | 1.3 GHz MPC8641D, 2GB DDR2 memory, 2GB NAND flash, IEEE (ENP1) | 0.62 kg|
| MVME7100ET-0161-2GF | Extended temperature – 1.06 GHz MPC8640D, 1GB DDR2 memory, 2GB NAND flash, SCANBE (ENP2) | 0.60 kg|
## Related Products
| Part Number | Description |
|----------------------|-----------------------------------------------------------------------------|
| XMCSPAN-001 | XMC Expansion, IEEE handles |
| MVME7216E-101 | Rear transition module |
| MVME721ET-101 | Extended temp RTM, I/O on 5 row P2, two GbE, four serial, PIM, 6E (for use with MVME3100/4100/7100) |
| MVME721ET-102 | Extended temp RTM SCANBE, I/O on 5 row P2, two GbE, four serial, PIM, 6E (for use with MVME3100/4100/7100) |
| SERIAL-MINI-D2 | Serial cable - Micro D sub connector to standard DB-9 |
| ACC/CABLE/SER/DTE/6E | Serial cable, RD 009, 2M, 2 DTE M/D/D, RJ-45 to DB-9 |
*Artesyn announced end of life (EOL) notification on September 25, 2015. Product availability is dependent on Artesyn’s component stock for this board. Please contact your local Artesyn sales manager for further information.
## SOLUTION SERVICES
Artesyn Embedded Technologies provides a portfolio of solution services optimized to meet your needs throughout the product lifecycle. Design services help speed time-to-market. Deployment services include global 24x7 technical support. Renewal services enable product longevity and technology refresh.
## WORLDWIDE OFFICES
| Country | Phone Number | Country | Phone Number |
|--------------|----------------|-------------|----------------|
| United States| +1 888 412 7832| China | +86 400 8888 183|
| Germany | +49 89 9608 2552| Japan | +81 3 5403 2730|
| Hong Kong | +852 2176 3540 | Korea | +82 2 6004 3268|
Artesyn Embedded Technologies, Artesyn and the Artesyn Embedded Technologies logo are trademarks and service marks of Artesyn Embedded Technologies, Inc. NXP and QorIQ are trademarks of NXP B.V. All other names and logos referred to are trade names, trademarks, or registered trademarks of their respective owners. © 2016 Artesyn Embedded Technologies, Inc. All rights reserved. For full legal terms and conditions, please visit www.artesyn.com/legal.
The Season of Pentecost
19th Sunday after Pentecost
The Week of October 11th, 2020
He will swallow up death forever!
The Sovereign Lord will wipe away all tears.
Isaiah 25:8
Hope Changes Everything
Welcome to Hope!
We are glad that you are here to worship with us and encourage you to become a part of our fellowship.
May God’s Word richly bless you today!
Hope Lutheran Church & School
Welcome to Worship
I was glad when they said to me, “Let us go to the house of the LORD!”
Psalm 122:1
We are glad to have you in worship and we are glad to worship together. Following governmental directives and in the interest of providing a healthy environment for all who come, we are implementing a number of changes to our worship routine:
- Everyone is encouraged to wear masks as you come inside the building (masks will be provided if you do not have your own)
- Hand sanitizers are available near the entrance of each worship area.
- Please maintain social distance from one another.
- Greeters will not shake hands.
- Please pick up your own bulletin.
Seating:
- Pews in the church are taped off to keep a safe distance. Chairs in the activity center will be distanced, and couples and families can be together.
For Holy Communion:
- Worshippers will approach with masks on and with social distancing.
- The Pastor and elders will sanitize their hands and wear masks.
After the benediction:
- Those closest to the exit will leave first, followed in order by the rest.
Offering plates and Fellowship Registrations will not be passed.
- Please place Fellowship Registrations and Offerings in the plate near the entrance.
Online recordings of each Sunday’s service will continue to be provided, though the actual broadcast will be delayed by one week.
Holy Communion
We welcome to the Lord’s Table all who are truly sorry for their sins and desire His forgiveness; who believe that as we receive this bread and wine, we are also receiving the true body and blood of Christ; and all who desire to strengthen their faith and intend, with God’s help, to live for Christ and follow Him. Participants may receive the blood of Christ by the Common Cup or individual cups. Non-alcoholic wine (white in color) is available for those who have a sensitivity to alcohol. Gluten-free wafers are available – please tell an usher early in the service. Children and those not communing are welcome for a blessing. Cross your arms over your chest to indicate you would like to participate.
We will have Holy Communion at both services throughout the pandemic.
| | This Week | Next Week |
|------------------------------|-------------------|------------------------|
| Reader (8:00 a.m.) | Pastor Neagley | Bob Warrell |
| Bible Readings | Isaiah 25:6-9 | Isaiah 45:1-7 |
| | Philippians 4:4-13 | 1 Thessalonians 1:1-10 |
| | Matthew 22:1-14 | Matthew 22:15-22 |
| Comm. Assistant (8:00 a.m.) | Rich Gruenhagen | David Fairfield |
| Comm. Assistant (10:30 a.m.) | Phil Twietmeyer | Bill Durnin |
This Week At Hope
*Full church calendar may be found on the web at www.hopeLCS.org*
| Day | Time | Event | Location |
|-----------|----------|--------------------------------------|------------------|
| Sunday | 8:00 am | Traditional Worship | Sanctuary |
| | 9:15 am | New Member Class (Rm 109) - Cancelled | *Resumes next week* |
| | 10:30 am | Contemporary Worship Service | Dufendach Center |
| | 11:45 am | AA at Hope | Outside |
| | 7:00 pm | “Return to the Lord” Bible Study | via Zoom |
| Monday | | | |
| Tuesday | 4:45 pm | Confirmation III | Rm 102 |
| Wednesday | 6:30 pm | Woman’s Bible Study Online | via Zoom |
| Thursday | 3:45 pm | Confirmation | Rm 123 |
| | 7:00 pm | Celebrate Recovery | Rm 109 |
| Friday | | | |
| Saturday | | | |
| Sunday | 8:00 am | Traditional Worship | Sanctuary |
| | 9:15 am | New Member Class | Rm 109 |
| | 10:30 am | Contemporary Worship Service | Dufendach Center |
| | 11:45 am | AA at Hope | Outside |
| | 2:00 pm | Visitation for Ruth Reed | Sanctuary |
| | 2:30 pm | Memorial Service for Ruth Reed | Sanctuary |
| | 7:00 pm | “Return to the Lord” Bible Study | via Zoom |
What’s Happening at Hope…
**Technical Team**: Interested in audio or video, or want to learn about it? We’re rounding up volunteers to help us with our online services. Please call Mark Shockey.
**Hymns on Youtube**: Our own Mark Shockey is recording hymns on the piano and placing them on Youtube. He plans to continue to record new hymns every two weeks. Sing at home or just listen. There is a pdf of the hymn, and a pdf of just the text in the video notes, in case that would be helpful. You can subscribe in order to receive updates but that isn’t necessary. Feel free to share the channel with anyone you think might find it enjoyable. Here is the link… [https://www.youtube.com/channel/UCESNPSmMly9HaFRdMLZliaQ](https://www.youtube.com/channel/UCESNPSmMly9HaFRdMLZliaQ)
**OPERATION CHRISTMAS CHILD – November 16 – 22, 2020**
Empty shoeboxes just waiting to be filled have been placed throughout the school lobby and narthex. Gift suggestions include school supplies, such as pencils, pens, crayons, notebooks, etc., and non-liquid hygiene items, such as toothbrushes, bar soap, etc. A complete list of gift suggestions can be found by the shoeboxes. At this time, there are no changes from last year as to what can and can’t be packed in shoebox gifts. You can choose a boy or girl (ages 2-4, 5-9 or 10-14) to receive your shoebox. Filled shoeboxes can be dropped off at any time and placed in the corner by the steps leading from the narthex to the school. The last day will be Sunday, November 22, when we will set them up at the altar for prayer. If you have any questions, please call Marjie Rodkey (215-949-1695) or Enola Cook (609-464-2366).
**DISASTER RELIEF UPDATE**
Thank you for your generous support. As a result of our special offering, Hope members contributed $647 for the victims affected by hurricanes and fires. Hope congregation has matched that gift for a total of $1,294. Many people stand in need today. Your sacrifice makes a difference and demonstrates that God’s people care.
**Look at our Pews:**
Craftsmen came in and altered our pews so six people with wheelchairs or walkers can worship comfortably in our sanctuary. We have also reduced our mortgage debt. These efforts are made possible through our contributions to the Fulfilling the Mission of Hope Campaign. As contributions come in, we will continue other parts of our campaign, like creating a more user-friendly ramp, calling a youth worker, and adding technology to our sanctuary.
Thank you for your faithful support
**THE LUTHERAN HOUR RADIO & PODCAST**
In Lower Bucks County, the broadcast airs Sunday mornings at 7 a.m. on 97.1 (Bensalem), 107.3 (Phila.), and 91.7 (Levittown), and on 91.9 (Lawrenceville, NJ) at 12 p.m. (Sunday noon) and 6 p.m. (Tuesdays). To access the Daily Devotions or have the Podcast sent directly to your mobile device, head over to: www.lhm.org/dailydevotions
**MISSED A SUNDAY?**
Sermon audio files are available on Hope’s website at: [www.hopeLCS.org](http://www.hopeLCS.org) --> Click Church --> Then Click Worship --> Then Click Sermons. The entire worship service can also be seen by going to Youtube.com and searching for “Hope Lutheran Levittown” or by going to Hope’s website (shown above) and clicking on the link in the second opening slide or by Clicking Church --> Then Click News.
**SIMPLY GIVING PROGRAM by THRIVENT**
Information available in the narthex. Questions?
Contact Tom Marks 215-757-1702 or Kristina Hoffman 215-946-3467
Words of Hope - Weekly Announcements
Office hours this week: Monday—Friday 8:30 a.m.—4:00 p.m.
215-946-3467 firstname.lastname@example.org
Bible Studies at Hope:
Sundays at 9:15 a.m.- Seven Letters to the Churches in Revelation, taught by Pastor Neagley.
Sundays 7pm via Zoom- Haggai, Zechariah and Malachi – Return to the Lord, taught by Pastor Rich Mokry: https://us02web.zoom.us/j/84090410661?pwd=enxKMTRpUEZ6dTjkWXlRTk2dEpaQT09
Meeting ID: 840 9041 0661
Password: 2YS6bk Click here for Bible Study notes attachment.
Wednesdays 6:30 pm via Zoom- Women’s Study of the book of Acts. Contact Julia Hutchins at 215-741-2685 or email@example.com.
Meeting ID is 891 2812 0133
Password is 990272.
In Pastor Mokry’s Absence...
Today we welcome Pastor Rich Neagley as our preacher and worship leader. God bless him as he proclaims God’s Good News. Pastor Mokry is on vacation this weekend, and next week he will be taking time for continuing education. Please keep him in your prayers. Have a pastoral need? Call Elder Rich Gruenhagen at 215-444-6126. The Elders are here to help.
Voters Meeting Sunday November 1 at noon. Join us for a brief but important meeting as we elect new Board members and Officers for positions here at Hope. We are thankful for members who volunteer their time and talents to advance our Lord’s mission. Please show your support by attending this meeting, and if you are interested in serving on any board, please speak to Tom Jeske. All confirmed members over the age of 18 are eligible to vote. Visitors are welcome. Together we are serving our Savior and touching lives.
Face Masks Needed:
We are thankful to members and friends who can make face masks for us. They have been a big help to people who have forgotten their own. We are in need again of face masks. You can be reimbursed for materials. Questions? Elder Rich Gruenhagen at 215-444-6126
What’s Happening at Hope...
Sunday School Update: The start of Sunday School for children is tentatively scheduled for the beginning of November. We desire a safe and healthy learning experience for our kids. Look out for updates. See Karen Barton or Pastor Rich for information.
Nursery Care: We are un-staffed at the moment. If you know of someone interested in the position, contact Beth Marks or Pastor Rich. The Nursery is located in the Conference Room, #102. Parents are asked to bring their own toys and take them home with them. Soft activity bags are located in the Gymnasium Bible rack.
Heartfelt Giving
Heartfelt Giving is a unique online shopping and travel website that gives income dollars back to Hope. Go to Hope’s website (hopelcs.org), click the Church tab, then click the Giving tab (far right), and from the dropdown box select Giving Through Shopping. Then, click on the HeartFeltGiving.net/HopeLutheranChurch link. Afterwards, simply click on the business where you like to shop. You will see a variety of online shopping categories such as Travel and Name Brand Stores with stores which include: Best Buy, Boscovs, Dollar General, JCPenney, Kohls, Macy’s and many more. Questions? Contact Marge Gruenhagen at 267-987-9480 or firstname.lastname@example.org
Communion Assistants and Altar Guild Assistants needed:
This is a service to God and His people here at Hope. Communion Assistants help distribute the communion elements. Altar Guild members prepare and clean the communion trays and set up the altar. Men and women are needed for both, but you can do one or the other. Training will be provided, and you can serve on a rotating basis along with others.
Interested? Call Glenn Holmes at 215-932-9754 or email email@example.com
Last Week’s Attendance:
8:00 a.m. - 44
10:30 a.m. - 30
September 15, 2017
VIA EMAIL AND REGULAR MAIL
Elizabeth J. Lipari
Administrative Practice Officer
Division of Taxation
Post Office Box 269
50 Barrack Street
Trenton, New Jersey 08695-0269
Re: Comments to Proposed New Rule N.J.A.C. 18:26
Dear Ms. Lipari:
On behalf of the New Jersey State Bar Association (NJSBA), I thank you for the opportunity to submit comments on the proposed new Transfer Inheritance Tax and Estate Tax regulations (the “Regulations”) published on July 17, 2017. We have reviewed the regulations and submit the following comments for your consideration.
N.J.A.C. Title 18, Chapter 26, Subchapter 2.
The proposed title of N.J.A.C. Title 18, Chapter 26, Subchapter 2 is “SUBCHAPTER 2. IMPOSITION AND COMPUTATION OF TAX”. To make it clear that this subchapter refers to inheritance tax, and not estate tax, the NJSBA recommends an amended title to read “SUBCHAPTER 2. IMPOSITION AND COMPUTATION OF TRANSFER INHERITANCE TAX”.
N.J.A.C. 18:26-2.1. Nature of Tax.
The Division proposes to include 18:26-2.1, which provides as follows:
(a) The Act imposes a tax upon transfers of the value of $500.00 or over, or of any interest thereon or income therefrom, held in trust or otherwise, to or for the use of any transferee, as set forth under N.J.S.A. 54:34-1, including, but not limited to, the following:
1. In the case of a resident decedent, where such transfers consist of real or tangible personal property situated in this State or intangible personal property wherever situated, owned by such decedent; and
2. In the case of a nonresident decedent, where such transfers consist of real or tangible personal property owned by such decedent situated in this State at the time of death.
The concern is that 18:26-2.1(a)1 does not limit imposition of tax for resident decedents to transfers at death or within three years of death, as set forth in N.J.S.A. 54:34-1. Thus, the NJSBA recommends that 18:26-2.1(a)1 be revised to include the following bold language: “In the case of a resident decedent, where such transfers consist of real or tangible personal property situated in this State or intangible personal property wherever situated, owned by such decedent at the time of death or within three years of the decedent’s death;”.
N.J.A.C. 18:26-2.10. Distribution by agreement.
The division proposes to revise the wording of this section to delete the phrase “admitted to probate”. The NJSBA recommends that the phrase “admitted to probate” be retained. The probate process is the legal mechanism for validating a document and proving that it is the valid will of the decedent. Removing the requirement that the will be admitted to probate circumvents the right of the Judiciary to determine whether a document is a decedent’s valid will. Two or more unprobated documents may be proffered as a decedent’s will. This would result in an ambiguity. Yet the division would have authority to choose between or among the documents and impose a tax based on a document that has not been proven to be a valid last will. Deleting the phrase “admitted to probate” results in lack of clarity for the administrative process and defeats the purpose of the Regulations to make the process clearer, rather than adding ambiguity.
N.J.A.C. 18:26-3A.2(b) and N.J.A.C. 18:26-3B.2(b). Amount of the tax and certain valuations, and N.J.A.C. 18:26-8.12(b). Partnerships.
The division proposes to add N.J.A.C. 18:26-3A.2(b), N.J.A.C. 18:26-3B.2(b) and N.J.A.C. 18:26-8.12(b) regarding the valuation and calculation of tax on “family limited partnerships” which seeks to define that term and to either deny any discount when valuing such interests or to generally limit any discount to 10 percent, depending on the circumstances. The statute requires that interests in all limited partnerships should be valued based on the “Clear Market Value” of such interest, just like any other assets of the estate. The NJSBA believes this attempt to deny discounts is beyond the scope of the statutory rule that seeks to tax fair value or clear market value, a factual determination made by an appraiser, and there is no authority for the disparate treatment of interests in so called “family limited partnerships” or for the limitation or denial of any valuation discounts. Further, the attempt to limit discounts to a 10 percent maximum is an entirely arbitrary percentage that is not found in the statute or law or fact. Accordingly, the NJSBA recommends these subsections be stricken.
N.J.A.C. 18:26-3A.4 and N.J.A.C. 18:26-3B.3. Reduction of tax; out-of-State property.
The division proposes to add N.J.A.C. 18:26-3A.4 and N.J.A.C. 18:26-3B.3 relating to the computation of tax as reduced by the portion of tax attributable to property located outside New Jersey and related examples. N.J.A.C. 18:26-3A.4(c)4 and N.J.A.C. 18:26-3B.3(c)4 provide the following example:
“4. Mr. J, a nonresident, creates a trust for the benefit of his surviving spouse, Mrs. J, which includes intangible property (stocks and bonds). After Mr. J dies, Mrs. J changes domicile to New Jersey, and dies as a New Jersey resident. The trust proceeds, as intangible personal property, would be considered New Jersey property, not out-of-State property. Therefore, the out-of-State credit calculated under (a) above is not allowable in this instance for the New Jersey estate tax.”
The concern is that the example does not delineate between a nonmarital or credit shelter-type of trust for a spouse, which is not includible in the estate of the surviving spouse upon his or her death, and a marital trust for which a qualified terminable interest property (QTIP) election was made, thereby making it includible in the surviving spouse’s estate upon his or her death. If not clarified, it discourages a surviving spouse for whom an out-of-state trust was created from moving back to New Jersey for fear of tax inclusion of the trust upon the spouse’s death. Moreover, the attempted taxation of a nonresident trust, administered under the laws of another sovereign state, is likely an unconstitutional attempt to tax property over which the state of New Jersey does not have legal authority or control. There is no legal basis to treat a trust that is validly established outside the state of New Jersey, differently from real estate that is situated in another jurisdiction. Accordingly, the NJSBA believes the example is unclear and unnecessary, and recommends that it be stricken from the regulations.
**N.J.A.C. 18:26-3A.6 and N.J.A.C. 18:26-3 B.5. Lien**
N.J.A.C. 18:26-3A.6 and N.J.A.C. 18:26-3 B.5 both provide that the estate tax imposed on the estate of a resident decedent remains a lien on all property of a decedent until paid. The concern is that the tax is a lien “until paid”, so the duration of the lien is unknown. By contrast, proposed N.J.A.C. 18:26-10.2 and N.J.S. 54:35-5 (as amended) provide that unpaid transfer inheritance tax is a lien for a period of 15 years from the death of the decedent.
The NJSBA has proposed a bill to amend N.J.S. 54:38-6 to provide that unpaid estate tax shall remain a lien on all property of the decedent as of the date of the decedent’s death for a period of 15 years. Thus, the NJSBA suggests that N.J.A.C. 18:26-3A.6 and N.J.A.C. 18:26-3 B.5 both be revised to include language similar to N.J.A.C. 18:26-10.2, as follows:
“(a) The New Jersey estate tax whether or not assessed or levied constitutes a lien on all the property owned by the decedent as of the date of death for the period set forth in N.J.S. 54:38-6 (as amended) unless sooner paid or secured by a bond. Except as otherwise provided in this chapter, no property owned by the decedent as of the decedent’s date of death may be transferred without the written consent of the Director.
(b) After the period set forth in N.J.S. 54:38-6 (as amended) has expired no proceeding may be instituted to assess and collect the New Jersey estate tax or any interest or penalties due thereon. No notice or consent to transfer is required for the transfer of any real or personal property and no personal liability remains on any executor, administrator, trustee, grantee, donee, buyer, devisee, legatee, heir, next of kin, or beneficiary; however, this does not affect any right of the State under any certificate of debt, decree, or
judgment for taxes, interest, and penalties duly recorded with the clerk of the Superior Court, or with any county clerk, or to assess and enforce the collection of any tax including any interest and penalties pursuant to the terms of any bond or other agreement securing the payment of the tax, interest, and penalties.”
**N.J.A.C. 18:26-3A.8(d) and N.J.A.C. 18:26-3B.7(d). Filing of tax return and other information.**
The division proposes changes to N.J.A.C. 18:26-3A.8 and the addition of N.J.A.C. 18:26-3B.7. However, the concern is that there are issues regarding the application of N.J.A.C. 18:26-3A.8(d) and N.J.A.C. 18:26-3B.7(d) and the requirement to file a Federal estate tax return in order to make a “Portability Election” even if a Federal estate tax return is not otherwise required to be filed. N.J.A.C. 18:26-3A.8(d) and N.J.A.C. 18:26-3B.7(d) both provide that: “In those cases where a taxpayer makes an election for Federal estate tax purposes, a like election must be made for New Jersey estate tax purposes. Assets and deductions must be treated in the same manner for both Federal and New Jersey estate tax purposes.”
This is commonly known as the “Consistency Rule,” and the regulations are intended to provide consistency between Federal and New Jersey tax treatment for married and civil union couples with respect to marital trusts. However, in some circumstances a Federal estate tax return is not required to be filed, since the estate is under the Federal filing threshold (i.e., $5,490,000 in 2017), but a Federal estate tax return is filed solely to make a Federal “Portability Election” to allow the surviving spouse to utilize the deceased spouse’s unused Federal estate tax exclusion amount. More importantly, the Federal laws related to making a valid QTIP election have changed. Prior to 2016, Revenue Procedure 2001-38 provided that the Internal Revenue Service (Service) would disregard and treat as a nullity for Federal estate, gift, and generation-skipping transfer tax purposes any QTIP election made if the election was not necessary to reduce the Federal estate tax liability to zero. However, in 2016, Revenue Procedure 2016-49 modified and superseded Revenue Procedure 2001-38 to eliminate the automatic voiding of any such “unnecessary” QTIP election. Revenue Procedure 2016-49 confirms that a QTIP election made on a Federal estate tax return filed solely to make a Federal “Portability Election” will be allowed by the Service and will no longer be disregarded as void.
Based on the change in the law, the NJSBA recommends that N.J.A.C. 18:26-3A.8 and N.J.A.C. 18:26-3B.7 be revised to allow a “New Jersey only QTIP Election” in cases where a Federal estate tax return is not required to be filed, whether or not a Federal estate tax return is actually filed. The NJSBA recommends that N.J.A.C. 18:26-3A.8(d) and N.J.A.C. 18:26-3B.7(d) be revised to add the following sentence at the end: “Provided, however, that if a Federal estate tax return is not required to be filed, a qualified terminable interest property (“QTIP”) Election is permitted for New Jersey estate tax purposes, whether or not a Federal estate tax return is filed.”
**N.J.A.C. 18:26-3C.2. Transfer of property requires waiver.**
The division proposes to add N.J.A.C. 18:26-3C.2, which provides that even though no estate tax will be imposed on the estate of any resident decedent dying after Dec. 31, 2017, N.J.S.A. 54:38-6 requires that property owned by the decedent as of the date of the decedent’s death may be transferred only with the
written consent of the director in compliance with the waiver requirements of N.J.A.C. 18:26-11. N.J.A.C. 18:26-11.15 generally requires a waiver (written consent of the Director) for the transfer of property unless the gross estate of a resident decedent does not exceed $200. N.J.S.A. 54:38-6 relates to assessment and collection of taxes and the liability of administrators, executors, trustees, grantees, donees and vendees, for any and all such taxes until paid. Requiring written consent of the director to transfer property is a mechanism to ensure collection of such taxes. However, if no such estate taxes will be assessed and collected, then it no longer makes sense to require written consent of the director for such transfers. Further, this proposed rule is more onerous than current law and will make administration of small estates (the threshold for which was $675,000 and is now $2,000,000 for 2017 decedents) much more difficult. Moreover, there is no benefit to the state for imposing the requirement since there will be no estate tax imposed. Thus, the NJSBA believes that there is no longer a need for such a waiver requirement if there is no estate tax due for the estate of any resident decedent dying after Dec. 31, 2017.
**N.J.A.C. 18:26-7.9. Administration expenses.**
The division proposes rewording this section, but makes no substantive changes. The NJSBA again recommends that this section clarify that, where an estate includes a business previously operated by the decedent as a sole proprietorship that is to be liquidated, insofar as the value of the estate includes the “Clear Market Value” of that business, the reasonable costs of goods sold and selling expenses should be allowed as an administration expense deduction. Thus, the NJSBA recommends that this section be revised as follows: “A deduction is allowed for all the reasonable and ordinary expenses of administering a decedent’s estate including reasonable and ordinary fees for executors, administrators and attorneys, reasonable costs of goods sold and selling expenses of liquidating a decedent’s business previously operated as a sole proprietorship, and, in addition, the reasonable cost incurred on an appeal from a determination of the Inheritance Tax Bureau.”
**N.J.A.C. 18:26-7.10(a). Executor's and administrator's expenses.**
The current and proposed regulations state that the deduction for executor’s or administrator’s commissions is determined in accordance with the applicable statute, N.J.S.A. 3B:18-14. It then restates those percentages. The NJSBA recommends not to include the statutory percentage rates in the regulations because including them has the potential to cause confusion or other problems if the statutory rates are amended in the future. Thus, the NJSBA recommends the recitation of the statutory commission percentage rates be stricken from the regulations and that N.J.A.C. 18:26-7.10(a) be revised to read as follows:
“In the absence of a judgment of the court exercising jurisdiction over the probate of an estate, the deduction for executor's or administrator's commissions is determined in accordance with N.J.S.A. 3B:18-14. Where the amount claimed by the executor or administrator or allowed by the court is less than that determined by the application of the rates set forth in N.J.S.A. 3B:18-14, only such amount as claimed or allowed shall be permitted as a deduction.”
**N.J.A.C. 18:26-7.10(d). Executor's and administrator's expenses.**
The division proposes to make no substantive revisions to this subsection. This subsection states that, for inheritance tax purposes, a deduction will be allowed for executor’s or administrator’s commissions on real estate only if the property is actually sold by the executor, or administrator, or if the property is “expressly directed to be sold by the terms of the decedent’s will.” The NJSBA believes there is no statutory authority for the requirement that the property must be sold as outlined in the regulations and recommends striking this subsection from the regulations.
**N.J.A.C. 18:26-8.9. Fractional interest in real property.**
The division proposes to make no substantive changes to this section regarding the valuation of fractional interests in real property. The statute requires that fractional interests in real property be valued based on the “Clear Market Value” of such property, just like any other assets of the estate. The attempt to change the law through regulation rather than providing clarification is unacceptable. There is no authority for the disparate treatment of fractional interests in real property. Therefore, the NJSBA recommends striking this section from the regulations.
**N.J.A.C. 18:26-8.13. “Close” or “family” corporation.**
The division proposes to add this section, which provides that if the stock of a closely held corporation or family corporation is “incapable of being valued on the basis of bona fide sales” then, in addition to any appraisal of such stock, numerous other detailed data for up to five years before the decedent’s date of death must be supplied with the filing of the return. The meaning of “incapable of being valued on the basis of bona fide sales” is unclear, but it seems to require the additional data outlined in proposed N.J.A.C. 18:26-8.13 in almost all cases.
The NJSBA believes it is overly burdensome to require this information in addition to an appraisal. The director may request additional information upon review of the filed return and appraisal, if the director believes it is necessary and appropriate in order to value the stock. Further, the statute requires that the value of the stock of any closely held corporation be based on the “Clear Market Value” of such stock, just like any other assets of the estate. Respectfully, the NJSBA believes there is no authority for the disparate treatment of the stock of a closely held corporation and recommends striking this section from the regulations.
**N.J.A.C. 18:26-8.14. Assets of close corporation or partnership of known market value.**
The division proposes to revise subsection (a) of this section to provide as follows: “When determining book value of the stock of a closely held corporation or interest in a partnership, no discount will be allowed on assets that have a definite, established, and known daily market value and are readily reducible to cash at that value (that is, stocks and bonds).”
First, it is unclear whether a discount will be disallowed for the value of the entire entity, or just the underlying assets. Further, the value of the stock of a closely held corporation or interest in a partnership is based on the value of the entity, not its separate underlying assets. If this subsection seeks to deny any discount in valuing any closely held corporation or interest in a partnership that owns marketable securities, there is no authority for such a blanket denial of any valuation discounts for a closely held
corporation or partnership, even if it includes marketable securities. Thus, the NJSBA recommends striking this section from the regulations.
**N.J.A.C. 18:26-11.1(c)5(v). Consent to transfer; generally.**
The division proposes to revise subsection (c)5(v) of this section to provide that an affidavit of waiver by a Class "A" transferee cannot be used for “Other circumstances determined by the Director or not specifically allowed in N.J.A.C. 18:26-11 or by statute.” The NJSBA suggests that this statement is ambiguous and overly broad. As such, it recommends that N.J.A.C. 18:26-11.1(c)5(v) be revised as follows: “Other circumstances not specifically allowed in N.J.A.C. 18:26-11 or by statute.”
**N.J.A.C. 18:26-11.21(a)1. Specific Waiver Situations.**
The division proposes to add N.J.A.C. 18:26-11.21(a)1, which requires a waiver for the transfer of any Individual Retirement Account (IRA) in which the funds are held in an institution which would otherwise require a waiver, as specified in N.J.A.C. 18:26-11.1(a). The NJSBA believes there is no authority to treat an IRA any differently from any other qualified retirement asset, such as a pension, which is exempt from the waiver requirement. In addition, an IRA is similar to a trust in that it passes non-probate property to a named beneficiary and thus, should be exempt under N.J.A.C. 18:26-11.13(c). Imposing a waiver requirement on retirement accounts creates a significant administrative burden on taxpayers. Further, the time required to obtain a waiver might cause negative income tax ramifications. For example, an IRA might need to be segregated into separate non-spousal inherited IRA accounts by Oct. 31 of the year following the year of the account owner’s death, and might be prevented from meeting this important tax deadline if a waiver is required and not yet obtained. Also, the beneficiary of an IRA might be prevented from timely taking a required minimum distribution if a waiver is required and not yet obtained at the time the distribution must be made in order to avoid penalties. As the NJSBA believes there is no authority to add this requirement and that it may cause negative income tax ramifications, it recommends that N.J.A.C. 18:26-11.21(a)(1) be stricken from the regulations.
**N.J.A.C. 18:26-11.22. Transfer of stock of a New Jersey corporation.**
The division proposes to add N.J.A.C. 18:26-11.22, which provides that no New Jersey corporation may transfer any of its stock owned by a resident decedent, even if held in trust for a resident decedent, without the written consent of the director. The NJSBA believes this statement is overly restrictive and could cause negative tax and legal consequences. As such, the NJSBA recommends N.J.A.C. 18:26-11.22 be stricken from the regulations.
**N.J.A.C. 18:26-12.2. Administration of Transfer Inheritance Tax and New Jersey Estate Tax.**
There is a typographical error in N.J.A.C. 18:26-12.2(a)1iii, and the NJSBA recommends that “devise” be revised to read “devisee.”
Thank you for the opportunity to submit these comments and for your consideration of same.
Very truly yours,
Robert B. Hille
President
cc: John E. Keefe, Esq., NJSBA President-Elect
Jill Lebowitz, Esq., chair, NJSBA Real Property, Trust & Estate Law Section
Angela C. Scheck, NJSBA Executive Director
Grivory HT3
The durable high-performance polyamide
The next generation of polyphthalamide
Grivory HT3 is a new product line in the polyphthalamide (PPA) product range of EMS-GRIVORY. Its special polymer structure allows a completely new performance spectrum, unique in this material group, to be achieved.
Grivory HT3 stands out due to its very low moisture uptake, which allows components with very high dimensional stability to be manufactured. In addition, this new polymer is extremely resistant to hydrolysis and can be used for applications involving direct contact with water.
The unique property profile of Grivory HT3 opens up unimagined possibilities for highly technical applications in the fields of automotive construction, electro and electronics as well as industry and consumer goods.
Strengths where they are required
Grivory HT3 has a balanced profile! Compared to Grivory HT1, important properties have been improved.
Based on renewable raw materials
Our environmental protection starts right with the granules. Grivory HT3 is based to a large extent on renewable raw materials, allowing us to protect our fossil resources. Grivory HT3 stands for responsibility towards our environment.
Extrudable!
With Grivory HT3 we have developed an extrudable PPA. Application fields are varied, ranging up to high-temperature-resistant cables, even in contact with liquid media.
Automotive
The Grivory product range sets standards in the field of metal replacement for applications in automotive construction. Even highly stressed components can be lightweight and economical to manufacture. Lower weight means lower fuel consumption. The property profile of Grivory HT3 is particularly well suited to applications in automotive construction. Grivory HT3 starts off where other plastic materials reach their limits.
The low water absorption and good dimensional stability allow manufacture of high-precision components - even for high working temperatures. Components made of Grivory HT3 are impact resistant and can be used in direct contact with automotive media.
Electro and electronics
Miniaturisation of components for use in the fields of electro and electronics is continually entering new dimensions. At the same time, specifications for the components are increasing. Under these conditions, the property profile offered by Grivory HT3 has maximum effect. In addition, specially modified Grivory HT3 grades provide optimal flame protection (UL 94 V-0) without addition of halogens or red phosphorus (WEEE and RoHS compatibility).
The low water absorption and dimensional stability of Grivory HT3 also play an important role in electro and electronics and it is approved for lead-free reflow soldering as per JEDEC Class 1. This means that no protection against ambient moisture is needed and production of precision components with narrow manufacturing tolerances is possible. Due to its mechanical properties such as high elongation at break and excellent impact strength, Grivory HT3 is predestined for the manufacture of components for different electronic applications. Specifically developed grades with maximum reflection make Grivory HT3 suitable for production of LEDs (light-emitting diodes).
Grivory HT3 provides excellent hydrolysis resistance for applications involving direct contact with cooling water. Thanks to its excellent permeation properties and strength, Grivory HT3 also enables the manufacture of construction components in under-bonnet applications or connectors and connection elements for use in contact with fuel.
**Suitability of Grivory HT3 for applications in contact with automotive media**

| | Test temp. [°C] | Grivory HT3 | Grivory HT1 |
|------------------|-----------------|-------------|-------------|
| Petrol E85 | 80/120 | □□ | □□ |
| FAM B | 80/120 | □□ | □□ |
| Diesel | 125/140 | □□□ | □□□ |
| RME (Bio Diesel) | 125 | □□□ | □□□ |
| Hydraulic oil | 150 | □□□ | □□□ |
| SAE 10W40 | 140 | □□□ | □□□ |
□□ good □□□ very good
**Grivory HT3 passes JEDEC Class 1 without blistering**
| Material | Grivory HT3-GF30 V0 | Standard PPA |
|----------|---------------------|--------------|
| Conditioning | JEDEC 1 85°C 85% r.h. 168h | JEDEC 1 85°C 85% r.h. 168h |
| Result reflow soldering (260°C) | Passed, no blistering | Failed, high blistering |
Industry and consumer goods
The spectrum for polyamide applications in industrial and consumer goods markets is extremely wide. For some time now, special polyamides have provided optimum solutions in these fields. Grivory HT3 is an ideal supplement to the Grivory product range and opens up previously unimagined opportunities.
Grivory HT3 enables the manufacture of extremely resilient components in the fields of sanitary fittings, heating and climate control or household appliances. Due to its good resistance to hydrolysis, Grivory HT3 performs extremely well where high temperatures and direct contact with water are involved. Product approvals for direct contact with foodstuffs (EU, FDA, NSF 51) and drinking water (KTW, W270, ACS, WRAS, NSF 61) are absolutely necessary for this kind of application.
Replacement of die-cast alloys or even thermoset materials with Grivory HT3 is extremely interesting, particularly in construction, furniture- and tool-making as well as in mechanical engineering or sport and leisure-time activities. Such high-precision components often require high dimensional stability without loss of performance with regard to design and colour. Due to its optimal processability, use of Grivory HT3 also allows additional integration of functions and resulting savings on part costs.
In the field of medicine, possible applications for Grivory HT3 are strong and stiff operating instruments which can be sterilised, as well as data communication and diagnosis machines with high chemical resistance. Metal replacement for artificial limbs, walking aids or design components for hospital beds and fittings is an important benefit, allowing cost reductions in medical applications to be achieved.
Compared to Grivory HT1, the mechanical properties of Grivory HT3 remain more constant even after a long period under stress and in contact with hot water. The two graphs show that while Grivory HT1 changes already after a few hours, Grivory HT3 remains at a stable level.
Shear modulus
Shear modulus is an important design parameter and describes the working temperature range of plastic materials. The structure of the plastic remains stiff and solid until the glass transition temperature (Tg) is reached. At temperatures above the glass transition temperature the material becomes softer and less resilient.
Processing
Grivory HT3 is characterised by problem-free processing using conventional injection moulding equipment.
**Thermal properties of Grivory HT3-GF50 H**
determined in torsion pendulum tests

Despite a melting point 30°C lower than that of Grivory HT1, Grivory HT3 exhibits only a slightly lower glass transition temperature range. Grivory HT3 can therefore be exposed to practically the same working temperatures.
**Process parameters for Grivory HT3 (injection moulding)**
| | Mould temperature | Melt temperature |
|-------------|-------------------|------------------|
| Grivory HT3 | 120 - 150°C | 300 - 340°C |
The low melting point of Grivory HT3, which lies at 295°C, creates a wide working window for this material with melt temperatures ranging from 300 to 340°C. Optimum mould temperatures have been defined as between 120 and 150°C. This means that temperature control of injection moulds can be carried out with conventional pressurised-water heating equipment.
EMS-GRIVORY worldwide
www.emsgrivory.com
We introduce ourselves
EMS-GRIVORY is a unit of the business area Performance Polymers of the EMS Group and employs around 760 people throughout the world.
The largest development and production site is located in Domat/Ems, Switzerland. We also have technology, production and sales facilities in most of the important markets in Europe, Asia and the USA.
Switzerland
EMS-CHEMIE AG
Business Unit EMS-GRIVORY
Reichenauerstrasse
CH-7013 Domat/Ems
Phone +41 81 632 78 88
Fax +41 81 632 76 65
email@example.com
Germany
EMS-CHEMIE (Deutschland) GmbH
Warthweg 14
D-64823 Gross-Umstadt
Phone +49 6078 783 0
Fax +49 6078 783 416
firstname.lastname@example.org
France
EMS-CHEMIE (France) S.A.
73-77, rue de Sèvres
Boîte postale 52
F-92105 Boulogne-Billancourt Cedex
Phone +33 1 41 10 06 10
Fax +33 1 48 25 56 07
email@example.com
Great Britain
EMS-CHEMIE (UK) Ltd
Darlin House, Priestly Court
Staffordshire Technology Park
GB-Stafford ST18 OAR
Phone +44 1785 283 739
Fax +44 1785 283 722
firstname.lastname@example.org
Italy
EMS-CHEMIE (Italia) S.r.l.
Via Visconti di Madrone, 2
I-20122 Milan
Phone 00 800 1100 1122
Fax 00 800 1100 2233
email@example.com
Taiwan
EMS-CHEMIE (Taiwan) Ltd.
36, Kwang Fu South Road
Hsin Chu Industrial Park
Fu Kau Hsiang
Hsin Chu Hsien 30351
Taiwan, R.O.C.
Phone +886 35 985 335
Fax +886 35 985 731
firstname.lastname@example.org
Japan
EMS-CHEMIE (Japan) Ltd.
EMS Bldg., 2-11-20 Higashi-koujiya
Otaku, Tokyo 144-0033
Phone +81 3 5735 0611
Fax +81 3 5735 0614
email@example.com
United States
EMS-CHEMIE (North America) Inc.
2060 Corporate Way
P.O. Box 1717
Sumter, SC 29151, USA
Phone +1 803 481 61 71
Fax +1 803 481 61 21
firstname.lastname@example.org
China
EMS-CHEMIE (China) Ltd.
Room 1908
Far East International Plaza
319 Xian Xia Road
Shanghai 200051
P. R. China
Phone +86 21 6295 7186
Fax +86 21 6295 7870
EMS-GRIVORY, a business unit of the EMS Group
The Life Insurance Company Income Tax Act Of 1959: Tax-Exempt Intercorporate Distributions In Consolidated Filing
Recommended Citation
The Life Insurance Company Income Tax Act Of 1959: Tax-Exempt Intercorporate Distributions In Consolidated Filing, 27 Wash. & Lee L. Rev. 143 (1970), https://scholarlycommons.law.wlu.edu/wlulr/vol27/iss1/11
longer be barred where it is alleged that the employer breached some duty to the third party which in turn primarily caused injury to the employee. Actions for contribution, however, would still be barred under the Wiener theory, unless it be found that a special rule of law took precedence, as was the case in Weyerhaeuser.
By permitting the case to be heard on the merits, Bremen offers the most equitable solution to the problem. The third party has the opportunity to prove the duty owed to it by the employer and that the breach of this duty was the main cause of the accident. If these factors can be proved, the Government should bear the blame, for if the United States can escape liability altogether in this situation, the third party has no way to protect himself.\(^{50}\) At the same time, however, the Government is not placed at an unreasonable disadvantage, for if the third party cannot prove these factors, the United States will bear no liability.
John H. West, III
THE LIFE INSURANCE COMPANY INCOME TAX ACT OF 1959: TAX-EXEMPT INTERCORPORATE DISTRIBUTIONS IN CONSOLIDATED FILING
The Life Insurance Company Income Tax Act of 1959\(^1\) provides for the determination of a final tax base to which ordinary corporate rates of taxation are applied. Section 804 divides every item of investment yield into a policyholders' share and a company's share. The policyholders' share is used to meet statutory reserve requirements,\(^2\) the reserve being the amount the state requires that life insurance companies set aside to provide for policyholders' claims against the company. The company's share is simply the remainder of investment yield, which is available to the company for operating expenses and for distribution or retention as profit. Section 804 provides that the policyholders' share is not to be included in taxable investment income. The company's share of each item of investment yield, reduced by the company's share of certain specified items of tax-exempt income, constitutes the company's taxable investment income.
\(^{50}\)This is most vividly illustrated in Bremen where the shipowner, by law, had to submit to the inspection and thus apparently had no choice as to whether or not he would allow the allegedly disabled inspector to board his ship.
\(^{1}\)Pub. L. No. 86-69, § 2, 73 Stat. 112, amending Int. Rev. Code of 1954, ch. 736, §§ 801-13, 68A Stat. 258; hereinafter referred to as the 1959 Act.
\(^{2}\)See, e.g., Conn. Gen. Stat. Ann. § 38-25 (1958), N.Y. Ins. Law § 72 (McKinney 1966), N.C. Gen. Stat. § 58-79 (Cum. Supp. 1967), Va. Code Ann. § 38.1-170 (Cum. Supp. 1968).
The formula of taxation provided by section 804 can best be explained by an example. Company X has total investment assets of $100,000,000 of which $80,000,000 is held in reserves to meet policyholders' claims as is required by state statute. From total investment assets company X receives $5,000,000 gross investment income. A deduction of $1,000,000 is then allowed for investment expenses and depreciation to arrive at an investment yield of $4,000,000. Assume that the section 804 formula, in this instance, allows a policyholders' share of 75%; then that percentage of every item of investment yield is not included in the company's taxable investment income. The remaining 25% constitutes the company's share. This is reduced by the company's portion of certain specified items of tax-exempt income to arrive at the company's taxable investment income.
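The arithmetic of this example can be sketched in a few lines of code. This is an illustration of the example only: the function name and parameters are ours, and the 75% policyholders' share is simply taken as given rather than derived from the statutory formula.

```python
def taxable_investment_income(gross_income, expenses,
                              policyholders_share, exempt_items):
    """Phase-one sketch of the section 804 example in the text."""
    investment_yield = gross_income - expenses        # $5M - $1M = $4M
    companys_share = 1.0 - policyholders_share        # the remainder: 25%
    company_portion = companys_share * investment_yield
    # The company's portion of the specified tax-exempt items is excluded.
    return company_portion - companys_share * exempt_items

# Company X, with no tax-exempt items for simplicity:
print(taxable_investment_income(5_000_000, 1_000_000, 0.75, 0))  # 1000000.0
```

With, say, $400,000 of the specified tax-exempt items (a figure invented for illustration), the same call would return $900,000 of taxable investment income, since only the company's 25% share of the exempt items is excluded.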
The recent case of Jefferson Standard Life Insurance Co. v. United States involved the question of how the dividends from Pilot Life Insurance Company, a wholly owned subsidiary filing on a consolidated basis with Jefferson Standard, should be eliminated in computing taxable investment income. Treas. Reg. § 1.1502-31(b) (1) (i) (1955) provides only that these dividends are to be "eliminated" from taxable income. Neither the Regulation nor the Code specifies where or how they are to be eliminated, or, more specifically, at what stage of the section 804 computations the elimination is to be made. In the years in question, 1958 and 1959, dividends from the subsidiary, Pilot Life, were eliminated from gross income in arriving at investment yield. The district court approved Jefferson Standard's treatment of the consolidated returns and the elimination of the Pilot dividend, thereby insuring the full tax-exemption of such income.\textsuperscript{9} The Fourth Circuit reversed, upholding the government's claim that tax-exempt intercorporate distributions should be treated the same as other exempt income under section 804.\textsuperscript{10} That is, it should be included in investment yield for all computations and divided into a policyholders' share and a company's share. As a result the only dividends rendered non-taxable by consolidated filing are those in the company's share. The immunity does not extend to the policyholders' share which is otherwise non-taxable. The full intercorporate distribution is included in the section 804 computations which determine the shares. The ultimate effect of this inclusion is to increase the company's percentage share of every item of
---
\textsuperscript{3}Int. Rev. Code of 1954, § 804(a)(2) eliminates from taxable investment income the company's share of interest from governmental obligations under Section 103; partially tax-exempt interest under Section 242; and dividends received under Sections 243-45.
\textsuperscript{4}Section 802(b) of the 1959 Act provides the formula for determining life insurance company taxable income and involves three phases of computations. Phase one determines taxable investment income. Phase two is devoted to determining gain or loss from operations. Phase three determines the amount subtracted from policyholder's surplus accounts, See generally Pub. L. No. 86-69, § 2, 73 Stat. 112 as amended Int. Rev. Code of 1954, §§ 801-20. Because the problem of intercorporate distributions and tax exemption is dealt with more clearly and directly in phase one, this discussion will be solely concerned with that phase.
\textsuperscript{5}Int. Rev. Code of 1954, § 804(c). Hereinafter Int. Rev. Code of 1954 is referred to in text as the Code.
\textsuperscript{6}Note 3 supra.
\textsuperscript{7}408 F.2d 842 (4th Cir. 1969), petition for cert. filed, 37 U.S.L.W. 3485 (U.S. June 17, 1969) (No. 1506).
\textsuperscript{8}The Pilot Life dividend was eliminated when the investment yields of the two separate companies were combined. "Consolidated filing" might imply that all the figures for both the companies are combined and that each item of income
\textsuperscript{9}Jefferson Standard Life Insurance Co. v. United States, 272 F. Supp. 97 (M.D.N.C. 1967).
\textsuperscript{10}The result is that under the Fourth Circuit formula a life insurance company's taxable investment income is greater than under the district court formula. The following merely illustrates the result and are not the actual figures.
| District Court | Fourth Circuit |
|----------------|---------------|
| **Basic Assumptions:** | |
| Investment assets | $100,000,000 |
| Life insurance reserves | $80,000,000 |
| Assumed interest rate | 3% |
| Income Assumptions: | |
|---------------------|---------------|
| Gross income | 5,200,000 |
| less investment expenses & depreciation | 1,200,000 |
| less intercorporate distributions | 200,000 |
| TOTAL INVESTMENT YIELD | $3,800,000 |
| Adjusted Reserves Rate | 3.8% |
| Adjusted Reserves | 73,600,000 |
| [Life insurance reserves + (10 x assumed interest rate) - (10 x adjusted reserves rate)] | |
| Policyholder's Percentage Share | 73.% |
| [(adjusted reserves x adjusted reserves rate) divided by investment yield] | |
| Reserve Deduction (Policyholder's share) | 2,796,800 |
| Company's Percentage Share | 26.4% |
| Company's Share of Investment Yield | $1,003,200 |
| less Company's Share of Intercorporate Distributions | — |
| TAXABLE INVESTMENT INCOME | $1,003,200 |
Assumptions: current earnings rate equals adjusted reserves rate; pension reserves, other dividends received, and interest paid equal zero.
Figures taken from Kaufman, \textit{The Life Insurance Company Income Tax Act of 1959}, 16 NAT'L TAX. J. 337, 348 (1963).
investment yield, thereby increasing the amount of taxable investment income.\textsuperscript{11}
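The two competing computations illustrated in footnote 10 can be sketched as follows. This is our reconstruction under the footnote's stated basic assumptions, not the court's own worksheet; the function and parameter names are ours, and the Fourth Circuit figures are derived here from those same assumptions rather than quoted.

```python
def phase_one(gross, expenses, dividends, assets, reserves,
              assumed_rate, eliminate_before_yield):
    """Taxable investment income under the two competing treatments."""
    if eliminate_before_yield:
        # District court: intercorporate dividends never enter yield.
        investment_yield = gross - expenses - dividends
        exempt_in_yield = 0.0
    else:
        # Fourth Circuit: dividends stay in yield for all computations;
        # only the company's share of them is excluded at the end.
        investment_yield = gross - expenses
        exempt_in_yield = dividends
    adj_rate = investment_yield / assets
    # Adjusted reserves per the formula recited in footnote 10.
    adj_reserves = reserves * (1 + 10 * (assumed_rate - adj_rate))
    policyholders_pct = adj_reserves * adj_rate / investment_yield
    companys_pct = 1 - policyholders_pct
    return companys_pct * investment_yield - companys_pct * exempt_in_yield

args = (5_200_000, 1_200_000, 200_000, 100_000_000, 80_000_000, 0.03)
district = phase_one(*args, eliminate_before_yield=True)
fourth_circuit = phase_one(*args, eliminate_before_yield=False)
print(round(district), round(fourth_circuit))  # 1003200 1064000
```

Keeping the dividend in investment yield raises the company's percentage share (from 26.4% to 28% on these figures), which is why the Fourth Circuit treatment yields the larger taxable investment income.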
The question of tax-exempt income and the federal income tax on life insurance companies was first raised in the 1928 case of \textit{National Life Insurance Co. v. United States}.\textsuperscript{12} There the Supreme Court held invalid certain provisions of the Revenue Act of 1921\textsuperscript{13} which not only required a company with tax-exempt interest to pay as much income tax as a company having no exempt interest but also subjected the company with tax-exempt interest to pay more tax per dollar on taxable income before deduction for the reserve.\textsuperscript{14} This was, in effect, a penalty for receiving tax-exempt income from government obligations. The Supreme Court stated that "[o]ne may not be subjected to greater burdens upon his taxable property solely because he owns some that is free."\textsuperscript{15}
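The mechanism of the 1921 Act penalty, spelled out in footnote 14, can be made concrete with invented round numbers (ours, not from the opinion):

```python
# Two identical companies under the 1921 Act; only A holds tax-exempt
# government bonds. All figures are invented for illustration.
gross_income = 1_000
exempt_interest_a = 100      # A's interest from government obligations
reserve_deduction = 400      # 4% of an assumed 10,000 reserve fund

# Company A excludes the exempt interest from gross income, but must
# also reduce its reserve deduction by the amount of that exclusion.
taxable_a = (gross_income - exempt_interest_a) - (reserve_deduction - exempt_interest_a)
# Company B, with no exempt interest, takes the full reserve deduction.
taxable_b = gross_income - reserve_deduction

print(taxable_a, taxable_b)  # 600 600 -- the exemption bought A nothing
```

As footnote 14 observes, the two companies end with identical taxable income, so the recipient of the "exempt" interest was effectively taxed on it.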
The rule of \textit{National Life} was extended in \textit{Missouri ex rel. Missouri Insurance Co. v. Gehner}\textsuperscript{16} which involved the problem of a state property tax on certain insurance companies.\textsuperscript{17} The state court interpreted the statute as requiring that the allowable reserve deduction be re-
\textsuperscript{11}Id.
\textsuperscript{12}277 U.S. 508 (1928).
\textsuperscript{13}Ch. 136, § 245, 42 Stat. 261.
\textsuperscript{14}The 1921 Act taxed only investment income. Insurance companies were allowed to exclude interest from government obligations from their gross income. The deduction from investment income for reserve requirements was set at 4% of the total amount in the reserve fund. However, they were required to subtract from this reserve deduction the amount of the exclusion they had taken for tax-free interest. So that, if two identical companies start with the same amount of gross income, yet company $A$ received tax-exempt interest, company $A$'s taxable income before the reserve deduction is smaller (gross income less exempt interest) than company $B$'s. However, company $A$ must reduce its reserve deduction by the amount of the exclusion for exempt interest. Thus, after the reserve deductions are taken, they each have the same amount of taxable income. Since the recipient of tax-exempt interest had a smaller amount of taxable income, after the exempt interest exclusion but before the reserve deduction, the Supreme Court held it was being subjected to greater burdens because of the receipt of such interest. 277 U.S. 508.
\textsuperscript{15}Id. at 519.
\textsuperscript{16}281 U.S. 313 (1930).
\textsuperscript{17}The statute provided:
The property of all insurance companies organized under the laws of this state shall be subject to taxation....Every such company or association shall make returns, subject to the provisions of said laws: First, of all the real estate held or controlled by it; second, of the net value of all its other assets or values in excess of the legally required reserve necessary to reinsure its outstanding risks and of any unpaid policy claims, which net values shall be assessed and taxed as the property of individuals....
Mo. Rev. Stat. § 6986 (1919) as quoted in Missouri ex rel. Missouri Insurance Co. v. Gehner, 281 U.S. 313, 317-18 (1930).
duced by the proportion which the value of United States bonds bore to the total assets.\textsuperscript{18} The Supreme Court reversed and invalidated the statute, holding that
\begin{quote}
[W]here as in this case the ownership of United States bonds is made the basis of denying the full exemption which is accorded to those who own no such bonds this amounts to an infringement of the guaranteed freedom from taxation.\textsuperscript{19}
\end{quote}
Thus, the doctrine of \textit{National Life} that “one may not be subjected to greater burdens”\textsuperscript{20} because of ownership of government obligations was extended in \textit{Gehner} to mean that the value of government bonds must be wholly disregarded in estimating net taxable values.\textsuperscript{21}
The reasoning of \textit{Gehner} was impliedly repudiated a year later in \textit{Denman v. Slayton}.\textsuperscript{22} There, the taxpayer, engaged in the business of buying and selling municipal bonds, sought to have declared unconstitutional the sections of the Revenue Act of 1921\textsuperscript{23} which denied deductions for interest on money borrowed to purchase or carry tax-exempt securities. The Supreme Court held that the circumstances presented were different from those in \textit{National Life} and that the taxpayer was “not in effect required to pay more upon his taxable receipts than was demanded of others who enjoyed like incomes solely because he was the recipient of interest from tax-free securities . . . .”\textsuperscript{24} The court declared that “[w]hile guaranteed exemptions must be strictly observed, this obligation is not inconsistent with reasonable classifica-
\textsuperscript{18} 322 Mo. 339, 15 S.W.2d 334 (1929).
\textsuperscript{19} 281 U.S. at 321-22.
\textsuperscript{20} 277 U.S. at 519.
\textsuperscript{21} Since there is no constitutional requirement that the state allow a reserve deduction, Justice Stone reasoned in dissent that:
Calling the deduction...an “exemption” and saying that the ownership of tax exempt securities is made the basis of denying the “full exemption,” may give this case a verbal resemblance to [National Life] but it does no more. True, a change by appellant from taxable to tax free investments would result in a smaller deduction from its taxable assets, but it would also result in a proportionate reduction of its taxable assets with a corresponding decrease in taxable values, always in exact proportion of appellant’s investment in tax exempt securities.
Missouri ex rel. Missouri Ins. Co. v. Gehner, 281 U.S. 313, 330 (1930) (dissenting opinion).
\textsuperscript{22} 282 U.S. 514 (1931).
\textsuperscript{23} Section 214(a) provides:
that in computing net income there shall be allowed as deductions: . . .
(2) All interest paid or accrued within the taxable year on indebtedness, except on indebtedness incurred or continued to purchase or carry obligations or securities...the interest upon which is wholly exempt from taxation under this title....
Revenue Act of 1921, ch. 136, 42 Stat. 239.
\textsuperscript{24} 282 U.S. at 519.
tion designed to subject all to the payment of their just share of a burden fairly imposed."\textsuperscript{25} While \textit{Gehner} was not referred to, it was rejected by implication since the taxpayer was not allowed the full, undiminished exemption for tax-free income. In \textit{Denman} the exemption was decreased, in effect, by the amount of interest on obligations incurred to purchase and carry tax-exempt assets.
The Supreme Court in \textit{Helvering v. Independent Life Insurance Co.}\textsuperscript{26} followed \textit{Denman} and again distinguished \textit{National Life} without mentioning \textit{Gehner}. In \textit{Independent Life}, the Court ruled constitutional the sections of the Revenue Acts of 1921\textsuperscript{27} and 1924\textsuperscript{28} which permitted deductions for depreciation and expenses of buildings owned by life insurance companies only if the companies included in their gross incomes the rental value of the space they occupied. Under the \textit{Denman} rule this formula was held to be a valid apportionment of expenses between the space occupied by the company and the space for which rents were received. The sections were held to be constitutional since they did not lay a direct tax on the rental value of space occupied by the companies, which had been held previously not to be income within the meaning of the sixteenth amendment.\textsuperscript{29}
The problem of tax-exempt income under the 1959 Act had been dealt with previously in \textit{United States v. Atlas Life Insurance Co.},\textsuperscript{30} which specifically concerned the exemption of income from municipal bonds. The \textit{Atlas} decision held that the formula of the 1959 Act for determining taxable investment income did not place an impermissible tax on tax-exempt interest from municipal bonds. The Court, interpreting \textit{Denman} and \textit{Independent Life} to mean that "tax laws may require tax-exempt income to pay its way,"\textsuperscript{31} held that there was no basis for the contention that the policyholder reserve must be satisfied by resort to taxable income alone. The reasons given were that policyholder claims run against all income, taxable or not, and tax-exempt income is not exempt from the company's obligation to add a large portion of investment income to the policyholder reserve.\textsuperscript{32} The Court
\textsuperscript{25}Id.
\textsuperscript{26}292 U.S. 371 (1934).
\textsuperscript{27}Revenue Act of 1921, ch. 136, § 245(b), 42 Stat. 261.
\textsuperscript{28}Revenue Act of 1924, ch. 234, § 245(b), 43 Stat. 289.
\textsuperscript{29}Eisner v. Macomber, 252 U.S. 189 (1920); cf. Stratton's Independence v. Howbert, 231 U.S. 399 (1913).
\textsuperscript{30}381 U.S. 233 (1965).
\textsuperscript{31}Id. at 247.
\textsuperscript{32}Id.
noted that the insurance company's insistence on excluding both the full reserve and tax-exempt income was
tantamount to saying that those who purchase exempt securities . . . are constitutionally entitled to reduce their tax liability and to pay less per taxable dollar than those owning no such securities. The doctrine of intergovernmental immunity does not require such a benefit to be conferred on the ownership of municipal bonds.\textsuperscript{33}
In the instant case, Jefferson Standard and its subsidiary filed consolidated returns in 1958 and 1959. The filing of consolidated returns is not a right but a privilege which exists at the discretion of Congress\textsuperscript{34} and results in the availability of substantial tax savings to the taxpayer.\textsuperscript{35} This privilege is given on the condition that all members of the affiliated group "consent to all the consolidated return regulations prescribed under section 1502 prior to the last day prescribed by law for the filing of such return."\textsuperscript{36} Furthermore, the Regulation\textsuperscript{37} permitting the exemption of intercorporate distributions in consolidated filing is subject to the general reservation of Treas. Reg. § 1.1502-3 that whenever a problem arises in consolidated filing that is not covered by the Regulations, the matter should be determined in accordance with the provisions of the Code or other applicable law.\textsuperscript{38} The court in \textit{Jefferson Standard} remarked that "[m]anifestly this reservation should be given wider latitude where, as here, the Regulation preceded the Act and the Act made fundamental changes in the mode of taxing life insurance companies."\textsuperscript{39}
The privilege of making consolidated returns and thereby receiving the exemption on intercorporate distributions is predicated solely on legislative grace, and the taxpayer making consolidated returns is made expressly subject to the Regulations and the provisions of the Code or other applicable law. It does not appear that the exemption on in-
\textsuperscript{33}\textit{Id.} at 251.
\textsuperscript{34}\textsc{Int. Rev. Code of 1954}, § 1501. 8A J. MERTENS, \textsc{Law of Federal Income Taxation} 46.01 (1964); see Smith Paper Co. v. Comm'r, 31 B.T.A. 28 (1934), \textit{aff'd sub nom.} Export Leaf Tobacco Co. v. Comm'r, 78 F.2d 163 (2d Cir. 1935).
\textsuperscript{35}See Crestol, \textit{Consolidated Return Regulations and Related Tax Provisions}, N.Y.U. 26th Inst. on Fed. Tax 731 (1968).
\textsuperscript{36}\textsc{Int. Rev. Code of 1954}, § 1501.
\textsuperscript{37}\textsc{Treas. Reg.} § 1.1502-31(b)(1)(i) (1955).
\textsuperscript{38}The Regulation provides:
[A]ny matter in the determination of which the provisions of the regulations...are not applicable shall be determined in accordance with the provisions of the Code or other law applicable thereto.
\textsc{Treas. Reg.} § 1.1502-3 (1955).
\textsuperscript{39}408 F.2d at 846 n.6.
come from municipal obligations, grounded in constitutional law\textsuperscript{40} and specifically provided for in the Code,\textsuperscript{41} is any lesser right. Therefore, the reliance on \textit{Atlas} seems well founded. If the full exemption is not granted for income from municipal bonds under the 1959 Act, there seems no logical reason why it should be granted for intercorporate distributions in consolidated filing. In addition, the general reservation that a problem not covered by the Regulations should be resolved in accordance with the Code and law applicable thereto is satisfied by relying on \textit{Atlas} and the section 804 treatment of other forms of tax-exempt income.
The court in \textit{Jefferson Standard} rejected the taxpayer's elimination of intercorporate distributions at the outset of section 804 computations on the reasoning, presented in \textit{Atlas}, that a portion of the intercorporate distribution belongs to the policyholder's share. The court said,
If Pilot's dividend to taxpayer is to be eliminated at the outset of the . . . computations, in effect, the Act would be construed as if liabilities to taxpayer's policyholders were to be satisfied solely by other income. . . . [B]oth taxpayer's investment in Pilot and the income derived therefrom are available and legally liable for the satisfaction of taxpayer's liabilities to its policyholders.\textsuperscript{42}
This reasoning is in accord with the language in \textit{Atlas} that "the tax laws may require tax-exempt income to pay its way."\textsuperscript{43} Thus, the tax-exempt character of the income is not recognized at the outset of the computation, but is recognized at its conclusion, when the life insurance company's share of investment income is determined.
Looking behind \textit{Atlas} to the rule in \textit{National Life}, it is evident that the formula of the 1959 Act as applied in the instant case does not subject Jefferson Standard to "greater burdens" because it is the recipient of tax-exempt income. Although Jefferson Standard is not accorded the benefit of both the full exclusion of intercorporate distributions and the full reserve deduction, the receipt of such tax-exempt income does result in a lower total tax. The full portion of intercorporate distribution relegated to the company's share is eliminated before the tax is applied, thereby lowering the total tax.
\textsuperscript{40}Pollock v. Farmers Loan & Trust Co., 157 U.S. 429 (1895) held income from state and municipal obligations constitutionally immune from federal taxation. There is controversy as to whether this case still represents valid constitutional doctrine. Cf. Powell, \textit{The Waning of Inter-governmental Tax Immunities}, 58 Harv. L. Rev. 633 (1945); Powell, \textit{The Remnant of Intergovernmental Tax Immunities}, 58 Harv. L. Rev. 757 (1945).
\textsuperscript{41}\textsc{Int. Rev. Code of 1954}, § 103(a)(1).
\textsuperscript{42}408 F.2d at 846-47.
\textsuperscript{43}381 U.S. at 247.
A Life Lived Twice
Owen Fiss
Retirement celebrations are odd events. They are a mixture of joy and sadness, and that is emphatically true of those in honor of Justice Brennan.
Not since the retirement of Justice Holmes in the early 1930's has the nation been more generous in its tributes to a retiring justice. Justice Brennan served the Court for nearly thirty-four years and now, at a mere 84 (Holmes was 90), retires with a grandeur that is indeed stunning. In this, there is reason for joy because the Justice fully deserves all the accolades and honors that have been bestowed upon him. I rejoice in Brennan's glory and feel the pleasures of the moment, but I would be less than honest if I did not also acknowledge my sadness on this occasion, not just for the Justice who so loved his work, but even more for the law. His retirement imperils the achievements of the Warren Court in new and profound ways.
The Warren Court refers to that extraordinary phase of Supreme Court history that began in the mid-1950's, with *Brown v. Board of Education* and the appointments of Earl Warren (1954) and William J. Brennan, Jr. (1956), and which reached its apogee in the early 1960's, when Justice Frankfurter retired and the liberal wing of the Court achieved a solid majority. Aside from Warren and Brennan, that majority included Hugo Black, William O. Douglas, and Frankfurter's replacement, Arthur J. Goldberg, who served from 1962 until 1965 and then was replaced by Abe Fortas. In 1967, the group of five was strengthened when Thurgood Marshall replaced Tom Clark. Now and then, they picked up the vote of Potter Stewart or Byron White or even that of their most forceful critic, John Harlan, a conservative who often found himself encumbered by his commitment to stare decisis. Earl Warren retired from the chief justiceship in 1969, but the phase of Supreme Court history that bears his name continued into the early 1970's, probably until 1974. I clerked for Justice Brennan during the term of Court that began in October 1965 and ended the next summer.
Like everything else, law always has an antecedent. The roots of the jurisprudence of the Warren Court can be found in earlier periods, most especially in those decisions of the Supreme Court in the 1930's, when the Court gave important life to the principle guaranteeing freedom of speech, elevating the dissents of Holmes and Brandeis to majority status, and also began to intervene in criminal proceedings to assure a modicum of procedural fairness.
But there was something distinctive and special about the Warren Court, almost a new beginning. *Brown* itself undertook the most challenging of all constitutional tasks, making good on the nation's promise of racial equality. Even more importantly, that case embodied both a conception of law and a set of commitments that evolved into a broad-based program of constitutional reform. The Court saw the Bill of Rights and the Civil War Amendments as the embodiment of our highest ideals and soon made them the standard for judging the established order.
In the 1950's, America was not a pretty sight. Jim Crow reigned supreme. Blacks were systematically disenfranchised and excluded from juries. State-fostered religious practices, like school prayers, were pervasive. Legislatures were grossly gerrymandered and malapportioned. McCarthyism stifled radical dissent, and the jurisdiction of the censor over matters considered obscene or libelous had no constitutional limits. The heavy hand of the law threatened those who publicly provided information and advice concerning contraceptives, thereby imperiling the most intimate of human relationships. The states virtually had a free hand in the administration of justice. Trials often proceeded without counsel or jury. Convictions were allowed to stand even though they turned on illegally seized evidence or on statements extracted from the accused under coercive circumstances. There were no rules limiting the imposition of the death penalty. These practices victimized the poor and disadvantaged, as did the welfare system, which was administered in an arbitrary and oppressive manner. The capacity of the poor to participate in civic activities was also limited by the imposition of poll taxes, court filing fees, and the like.
These were the challenges that the Warren Court took up and spoke to in a forceful manner. The result was a program of constitutional reform almost revolutionary in its aspiration and, now and then, in its achievements. Of course the Court did not act in a political or social vacuum. It drew on broad-based social formations like the civil rights and welfare rights movements. At critical junctures, the Court looked to the executive and legislative branches for support. The dual school system of Jim Crow could not have been dismantled without the troops in Little Rock, the Civil Rights Act of 1964, the interventions of the Department of Justice and HEW, the suits of the NAACP Legal Defense Fund, or the black citizens who dared to become plaintiffs or, even more, to break the color line or march on behalf of their rights. The sixties would not have been what they were without the involvement of all of these institutions and persons, and the world would have looked very different. Yet the truth of the matter is that it was the Warren Court that spurred the great changes to follow, and inspired and protected those who sought to implement them.
A constitutional program so daring and so bold was, of course, the work of many minds. As is customary, we use the name of the chief justice to refer to this period of Supreme Court history, and in Warren's case that practice
seems especially appropriate. Earl Warren was a man of great dignity and vision, in every respect a leader, who discharged his duties (even the most trivial, such as admitting new members to the bar) with a grace and cheerfulness that were remarkable. He presided in a way that filled the courtroom with a glow. Yet the substance of the Court’s work, the revolution that it effectuated in our understanding of the Constitution, drew on the talents and ideas of all those who found themselves entrusted with the judicial power at that unusual moment of history.
Justice Brennan’s contribution to the ensemble known as the Warren Court had many dimensions. He was devoted to the values we identify with the Warren Court—equality, procedural fairness, freedom of speech, and religious liberty—and he was prepared to act on them. More importantly, he was the justice primarily assigned the task of speaking for the Court. The overall design of the Court’s position may have been the work of several minds, fully reflecting the contributions of such historic figures as Black, Douglas, and Warren, but it was Brennan who by and large formulated the principle, analyzed the precedents, and chose the words that transformed the ideal into law. Like any master craftsman, he left his distinctive imprint on the finished product.
Warren and Brennan were invariably on the same side in the great constitutional cases of the day. They served together for thirteen terms and agreed in 89% of the more than 1400 cases they decided. Indeed, it is hard to think of a case of any import where they differed. As chief justice, Warren had the responsibility of assigning the task of speaking for the Court when his side prevailed. Sometimes, as in *Reynolds v. Sims* and *Miranda v. Arizona*, where he felt the need for the imprimatur of his office, or where the issue was especially close to his heart, Warren wrote the opinion. But generally he turned to Justice Brennan.
In part, this reflected the unusual personal tie that developed between the two. The Chief—as Justice Brennan always called him—visited Brennan’s chambers frequently, and each visit was an important occasion for the chambers as a whole and for Justice Brennan in particular. One could see at a glance the admiration and affection that each felt for the other. The relationship between Earl Warren and William Brennan was one of the most extraordinary relationships between two colleagues that I have ever known; surely, it must be one of the most famous in the law.
But more than personal sentiment was involved. In turning to Brennan, Warren could be certain that the task of writing the opinion for the Court was in the hands of someone as thoroughly devoted as he was to the Court as an institution. An assignment is always an expression of trust, and Warren could depend on Brennan to formulate and express the Court’s position—to declare the principle and attend to the details that constitute the law—in a way that would strengthen the Court in the eyes of both the public and the profession, and thus enhance its capacity to do its great work. Brennan was, in the highest
and best sense of the word, a statesman: not a person who tempers principle with prudence, but rather someone who is capable of grasping a multiplicity of conflicting principles, some of which relate to the well-being of the institution and remind the judge that his duty is not just to speak the law, but also to see to it that it becomes an actuality—in the words of *Cooper v. Aaron*, to make sure that the law becomes “a living truth.”
Brennan could be trusted to choose his words in a way that would minimize the disagreement among the justices, not only to avoid those silly squabbles that might interfere with the smooth functioning of a collegial institution, as the Court most certainly is, but also to produce a majority opinion and strengthen the force of what the Court had to say. Only five votes are needed for a decision to become law, but the stronger the majority and broader the consensus, the more plausible is its claim for authority. Brennan could also be trusted to respect the traditions of the bar and to pay homage to the principle of stare decisis. He always tried to build from within. Sometimes that was not possible, for the break with the past was just too great. Yet, even then, Brennan’s inclination, once again rooted in a concern for the Court’s authority, was to minimize the disruption, and to find, if at all possible, a narrow path through the precedents. Brennan also understood that reform as bold as the Court tried to effectuate required a coordination, not a separation, of powers, and that gratuitous confrontations with the other branches were to be avoided. In fact, as evident from Justice Brennan’s opinion in *Katzenbach v. Morgan*, affirming a broad conception of congressional power under section 5 of the Fourteenth Amendment, every effort was made to invite the other branches of government to participate and collaborate in the program of constitutional reform inspired by the Court.
Aside from a proper regard for institutional needs, a successful opinion requires a mastery of legal craft, which Warren also found in Brennan. Justice Brennan was as much the lawyer as the statesman. Law is a blend of the theoretical and the technical, and though there were others as gifted as Brennan in the formulation of a theoretical principle, there was no one in the ruling coalition—certainly not before Fortas’s appointment—who had either the patience or the ability to master the technical detail that is also the law. Everyone on the Court, law clerk and justice alike, admired Brennan’s command of vast bodies of learning, ancient and modern. He knew the cases and the statutes, and how they interacted, and understood how the legal system worked and how it might be made to work better. Among the majority, he was the lawyer’s judge.
Even Brennan’s most theoretically ambitious opinions, like *New York Times v. Sullivan*, bear the lawyer’s mark. In that case Justice Brennan spoke of the national commitment to a debate on public issues that is “uninhibited, robust, and wide-open,” and he has been justly celebrated many times for reformulating the theory of freedom of speech associated with the work of Alexander
Meiklejohn in a fresh and original way. Meiklejohn, then in his nineties, saw Brennan’s opinion in *New York Times v. Sullivan* “as an occasion for dancing in the streets.” Of even greater importance to the lawyers and judges among us (Meiklejohn was a political theorist) was Brennan’s analysis of the common law of libel and his deft reformulation of doctrine—the announcement of the “actual malice” requirement—in order to create a rule that, one, would be operational and, two, would effectuate a just accommodation of reputational interests and democratic values. *New York Times v. Sullivan* is a great decision, a fountainhead of freedom in our day, only because it is an exercise in political philosophy made law.
In 1968, Richard Nixon ran against the Warren Court, and in so doing, attacked Justice Brennan as much as anyone, perhaps more so, given the commanding role that Brennan played on that Court. But history soon took an odd turn: the Warren Court collapsed, but Brennan remained. Prior to the election of 1968, but clearly with a view to its likely outcome, Earl Warren tendered his resignation to President Johnson in an effort to turn the leadership of the Court over to Johnson’s confidant, Abe Fortas. The Senate, however, balked at the elevation of Fortas, and his nomination for the chief justiceship was soon withdrawn. Yet following the 1968 election, Nixon made good on Earl Warren’s resignation and began his presidency with the appointment of Warren Burger as chief justice. In addition, Fortas was forced to resign from the Court, due to the disclosure of financial improprieties; and with the resignations of John Harlan and Hugo Black, President Nixon found himself able to make three other appointments during his first term in office. Over time, one of those appointments—Harry A. Blackmun—evolved into a justice whose view of the Constitution turned out to be similar to those who sat on the Warren Court. But the two other appointments—Lewis Powell and William Rehnquist—were of a different character. There were differences between the views of Powell and Rehnquist, but the views of both were at odds with the jurisprudence that reigned supreme during the sixties.
The final dissolution of the Warren Court occurred with the resignation of Douglas in 1975 and his replacement by John Paul Stevens. The balance of power had decisively shifted, and was then locked in place by two accidents of history: Jimmy Carter had no appointments to make, a distinction he shared with no other President in our history who completed a full term, while Ronald Reagan had three—Antonin Scalia (to fill the vacancy created by Burger’s resignation), Sandra Day O’Connor (to replace Stewart), and Anthony Kennedy (to replace Powell). In 1986, at the same time he appointed Scalia, President Reagan elevated Rehnquist to the chief justiceship, but that change only conformed outward appearances to the inner reality. For much of the seventies and eighties it was Rehnquist who led the Court, building the necessary coalitions, setting the agenda, and formulating the methods of revision. Even during Burger’s years, it was the Rehnquist Court.
These changes ushered in a new phase of Supreme Court history, and Justice Brennan found himself working in a wholly new environment. He could turn to Marshall and, to a considerable extent, Blackmun, for support, but from there on in the going was rough. No longer a dominant figure in the ruling coalition, Brennan became part of the opposition, pitted against a majority driven by a contrary vision of American law and life. The new majority believed that the doctrine of the Warren Court was mistaken and had to be limited, corrected, and perhaps even eradicated.
*Brown*, of course, was not overruled, but it has been drained of much of its generative power. Arresting the trajectory that was implicit in cases like *Green v. School Board of New Kent County*, *Swann v. Charlotte-Mecklenburg Board of Education*, and *Keyes v. School Dist. No. 1, Denver*, all ultimately rooted in *Brown*, the Court ruled that school systems that contain a large number of all-black and all-white schools are constitutionally acceptable. According to the new majority, *Brown* condemned not the inequality resulting from the actual separation of the races, but only the use of racial criteria as a method of assignment. As a result, the Court has allowed school boards to assign students to schools on the basis of neighborhoods, even where there is residential segregation; it also effectively insulated suburban communities—invariably white—from the reach of court orders trying to desegregate the inner-city schools. School boards remain obliged to correct for vestiges of past practices, such as racial gerrymandering, but the Rehnquist Court has shifted the emphasis and underscored the limited nature of the remedial obligation, both geographically and temporally.
Even outside of the school desegregation context, which might have been thought to be a category unto itself, the egalitarianism of the Warren Court has been curbed by new renderings of the provision that constituted that Court’s nerve center—the equal protection clause. In cases like *Moose Lodge No. 107 v. Irvis*, which upheld the award of a state liquor license to a club that openly discriminated on the basis of race, a sharp distinction was drawn between state and society, confining the ban on discrimination to state action narrowly understood. In addition, the Court ruled that in order to establish a denial of equal protection it is not enough to show that the state action especially disadvantages minorities; it must be shown that such an effect is intended by the state. In the mid-1970’s, the Court also effectively removed the poor from the scope of the equal protection clause, leaving the war on poverty more vulnerable than ever to the vicissitudes of politics. In the same case, the Court declared that education was not a fundamental right, thereby bringing to a halt the process of enumerating rights that would warrant special solicitude under the equal protection clause. Even the commitment to strict numerical equality in the apportionment context has been diluted, as the Court became more and more tolerant of departures from the “one person, one vote” standard.
The Rehnquist Court also created new cracks in the wall between church and state by allowing the state to engage in practices, such as the maintenance of a creche, that earlier would have been unthinkable. In addition, the Court’s commitment to maintaining public debate that is “uninhibited, robust, and wide-open” has been compromised during this period. Lacking the steely tolerance for political protest that characterized the Warren Court at its most determined moment, the Court under Rehnquist upheld laws that denied political activists the opportunity to reach the public at shopping centers or in front of certain government buildings, or by posting posters on utility poles, demonstrating in public parks, or picketing in residential neighborhoods. It also refused to create access to the networks for editorial advertisements sponsored by a group of businessmen criticizing the Vietnam War. On the other hand, a number of laws trying to limit political expenditures were struck down as violative of the First Amendment, even though these measures were conceived as means of preserving the vitality of democratic politics by preventing the wealthy from drowning out the voices of the less affluent in society. In these cases, and others, the new majority seemed to be confounding the protection of speech with the protection of property.
In the criminal context, the new majority lifted the ban on the application of the death penalty that had its roots in the sixties and that formally took effect in the early seventies. Since 1976 more than 140 persons convicted of crimes have been put to death, and Rehnquist, both as a judge and in discharge of his administrative responsibilities as head of the Judicial Conference of the United States, seems determined to institute a series of procedural reforms that would expedite and facilitate that process. Similarly, the Court has sought to shift the balance of advantage in the criminal process, relaxing some of the restrictions on the investigatory techniques of the police, most notably the rule excluding illegally seized evidence.
During the 1960’s the Court had opened the doors of the federal trial courts for writs of habeas corpus and injunctions against state criminal proceedings. This was done to ensure that state criminal proceedings adhered to minimum standards of fairness and to make certain that these proceedings were not used for improper purposes, such as the harassment of political activists. During the 1970’s and 1980’s those doors were closed. *Fay v. Noia* and *Dombrowski v. Pfister*, opinions by Brennan that gave substance to the view that federal courts are the primary forum for the protection of federal rights, were emptied of all operative significance. A similar fate befell *Goldberg v. Kelly*, also written by Justice Brennan, which had extended the due process revolution of the sixties from the criminal to the civil domain. Today, government is allowed to act to the detriment of individuals, to inflict grievous suffering on them, for example, by denying disability benefits or terminating parental rights, without providing some of the most elementary forms of due process.
The law does not move slowly, but it does move unevenly, and during the last twenty years all has not been bleak, even for someone with Brennan's outlook. There have been a few bright moments. The most significant are *Roe v. Wade*, a 1973 decision creating a right to abortion, and *Regents of the University of California v. Bakke*, which, in effect, indicated that certain preferential treatment programs for minorities were permissible, a ruling later to be extended to women in *Johnson v. Transportation Agency*. No one should belittle those achievements, or any of the others that might come to mind, but they should not be taken as representative of the judicial era of which they are a part. *Roe v. Wade* and *Bakke* did not insert new premises into the law, but built on understandings of an earlier time. These cases were hard-fought victories that sharply divided the Court, and to this day survive by the narrowest of margins. At present, they define the outer limits of the law, barely tolerated, without any generative power of their own.
The danger to these decisions would have significantly increased if the campaign to place Robert Bork on the Court had been successful. As a law professor, as Solicitor General during the Nixon and Ford administrations, and as a federal appellate judge, Bork was a principal figure in the attack on the Warren Court and a relentless critic of *Roe v. Wade* and *Bakke* and the precedents upon which they were based. President Reagan's decision in 1987 to nominate him to the Supreme Court forced the Senate to consider carefully the jurisprudence of both Bork and the traditions which he so criticized, and the result was a series of hearings that offered the country an extraordinary seminar on constitutional law. The importance of those hearings and the decision of the Senate to reject the nomination cannot be denied. It should be understood, however, that contrary to what some have maintained, Bork would not have been a transformative appointment; that transformation had already occurred a decade earlier and continues to this day.
Living through this period was not easy for Justice Brennan. It proved to be a test of sorts, and as such brought to the fore many of his strengths. Value commitments that were shared in the sixties became distinguishing features of the Justice in the seventies and eighties and, as a result, are now recognized as a source of his identity and also his greatness. In some instances, his understanding of the Constitution evolved over time. A striking example is the change in his position on obscenity. In 1973 he rejected the strategy that he had created in 1957 in *Roth v. United States* and refined throughout the sixties—of keeping censorship to a minimum by providing a narrow definition of that genre of speech that falls outside the ambit of First Amendment protection—and took up a position close to the absolutism of Black and Douglas. Brennan indicated that characterizing sexually explicit material as obscene was not a sufficient justification for restricting its availability to consenting adults.
For the most part, however, the seventies and eighties were for Brennan a period devoted primarily to defending the achievements of the Warren Court.
At times that consisted of demanding that the Court take the inevitable next step, say, of extending the egalitarianism of *Brown* to issues of gender; most of the time, however, he had to confront the most blatant retrenchment. In this context the nation learned what his clerks knew first hand—namely, that the Justice is extraordinarily strong-minded—and when the day was done, he emerged as a national hero, a freedom fighter of sorts. On issues of detail Brennan is conciliatory, but when it comes to what he regards as matters of principle he is adamant and, in the best sense of the word, stubborn.
This stubbornness expressed itself in many ways, not the least of which was the profusion of dissents. One enterprising law clerk, more familiar with Lexis than I, calculated that over his career Justice Brennan wrote dissenting opinions in 2,347 cases, mostly during the 1970's and 1980's. Of those, 1,517 were in death penalty cases, and a great number of them were formulaic dissents from the denial of certiorari, jointly issued with Marshall. But even subtracting these, the number of dissenting opinions remains impressive—830. During the last term alone, the Justice wrote dissenting opinions in 23 cases. During the term I clerked, one of the halcyon days of the Warren Court, Brennan wrote a dissenting opinion in only one case, *United States v. Guest*, and it is not even clear whether that opinion should be regarded as a dissent. As commentators soon realized, though his opinion in that case was labeled a "dissent," it actually set forth a view of congressional authority which, when read in conjunction with Justice Clark's separate concurrence, had the support of a majority of five justices and which, along with *Katzenbach v. Morgan*, later was used to provide the constitutional foundation for the Civil Rights Act of 1968.
The escalation in the number of Justice Brennan's dissents during the Rehnquist years is indeed striking, all the more so because it is so much greater than the number of dissenting opinions written by the first Justice Harlan or Justice Holmes, both of whom are often referred to as the great dissenters. Over their careers, Harlan wrote 134 dissents and Holmes 81, but these numbers seem trivial when compared to the number of Brennan's dissents. It would be a mistake, however, to view Justice Brennan, even during this second phase of his career on the Court, as we view Holmes or the first Harlan, as another great dissenter or, as the enterprising clerk declared, the greatest dissenter.
As a matter of collegial style, both Harlan and Holmes were loners, and in addition, Holmes's philosophic outlook supremely suited him to the role of dissenter. Holmes viewed history, or even the action of his brethren, much as a spectator might. At times he was prepared to raise his voice in protest, as he did in *Lochner v. New York* and *Abrams v. United States*, but he did so with an indifference as to whether he persuaded his colleagues or managed to obtain their votes. Holmes spoke to the future, and as it turned out he was prophetic, but his basic intent was to speak his mind and let the chips fall wherever they might. Justice Brennan, on the other hand, never, but never, was a loner nor a spectator, but always was thoroughly engaged with his colleagues, passionately working to build a majority; in the 1950's and 1960's he did so to implement the revolution, in the 1970's and 1980's to stop the counterrevolution. The Brennan dissents of the 1970's and 1980's spoke not to the future, but to his colleagues. More often than not they read like majority opinions that just fell short of one or two votes. In one notable instance involving congressional power under the commerce clause, he soon managed to sway one of the previous majority to switch, thereby transforming the position he originally articulated in dissent into the law of the land.
Moreover, while the second phase of Brennan's career is marked by an extraordinary number of dissents, one should not be misled by the numbers. The dissents were not at the core of his mission. At the occasional law clerk dinners held during the 1970's and 1980's, he would wryly announce the tallies to the assembled. We would cheer the resistance offered by his dissents. But it was obvious that the Justice's true source of pleasure came in the cases in which he somehow—miraculously, I think—formed a majority that held the line. Rehnquist was usually on the other side, and because of his seniority, Brennan acted as a shadow chief justice and often assigned himself the task of speaking for the odd coalition that he pulled together. Justice Brennan's last term on the Court was marked by a large number of dissents, but of equal, even greater, significance is the fact that in two important cases—one involving flagburning, the other patronage—he was able to speak for the majority in support of freedom of speech. It was entirely fitting that on the last day of his last term on the Court, after almost thirty-four years of service, he announced an opinion for a majority of the Court upholding an FCC policy—born of another era—that favored minority interests in awarding broadcasting licenses. Later that afternoon, Justice Brennan received the prognosis from his doctor that led to his decision to retire.
Justice Brennan is a proud man, not in the least bit arrogant—indeed he is one of the most modest men I have ever known—but he is someone who takes a very special pride in his work. He is a fighter who likes to win, and as such, would be pained to see an earlier victory reversed, especially when, as in the line of cases that undid *Dombrowski*, the new majority, also trying to build from within, turned Brennan's doctrinal creations to another purpose. The fight in Brennan no doubt accounted for the sense of engagement that carried him through the seventies and eighties, and helps explain the unusual role he created for himself during that period. It was, however, dwarfed by an even more significant factor: his devotion to the institution.
In the Warren Court era, this devotion accounted for Brennan's role as Court spokesman and for the distinctive nature of his opinions. He took no pleasure in speaking alone, but always tried to speak through the Court and to mold judicial doctrine in a way that was fully sensitive to the needs of that institution. His first priority was to have the Court speak authoritatively and his second was to produce an opinion that would strengthen the effectiveness
of the Court. He strove to avoid any gestures that would either dissolve or splinter the majority, infuriate those on the other side of the bench, or set into motion a political dynamic that would undermine the ability of the Court to achieve all that it might. During the second half of his tenure, these same sentiments shaped his strategy of resistance. Dissent was always a possibility, but his first priority was that the Court speak to the issue in an authoritative manner, because he continued to believe in the Court and that law mattered. He remained committed to working through the institution, not to propounding his views, speaking his mind, or otherwise indulging himself. Dissent was a reluctant last resort—almost an acknowledgment of failure.
In this way, Brennan served as the bridge between the Constitution that was and the Constitution that is. He was the mediating force in the negative dialectic between the Warren and Rehnquist Courts. I can assure you that there is no one left on the Court, not even Justice Marshall (for whom I also clerked), who can play that role in quite the way that Brennan did. At the hands of Rehnquist and those inclined to follow him, the Warren Court has suffered grievously, and today, without Brennan on the bench, the work of that institution stands more in jeopardy than ever. Some, like Scalia, appear determined to chart their own process of revision, but the danger they pose to the body of law associated with the Warren Court is every bit as great.
Of course, Justice Brennan has left us a written legacy. The pages of the United States Reports are filled with his opinions, both dissents and majority opinions. A few years back, still another of his law clerks surveyed the leading casebooks on constitutional law and reported that of all the so-called “principal” cases featured in those books, Justice Brennan had written more than any other justice in the entire history of the Supreme Court. These opinions define the field within which the present Court operates; for some they will act as constraints, for others a resource—and if the testimony of Brennan’s replacement, David Souter, is to be believed, for some they might even act as an inspiration. But these opinions will not compensate for the loss of Brennan’s vote, even less for his absence within the councils of the Court.
During the October 1965 Term, the conferences were held on Fridays. Later that day, or more commonly on Saturdays, when the Justice would regularly have lunch with his clerks, he would describe what transpired at conference, or at least what he thought we should know about the conference. Some sense of the inner workings of the Court was also conveyed during the dinners he had with his clerks during the late 1960’s and early 1970’s. At that time the law clerk dinners were an annual event and the number of former clerks small enough that we could sit around a table upstairs at the Occidental. Those were also the days—prior to the publication of *The Brethren*—when he could assume that law clerks could be trusted with a confidence. The Justice was not a man for gossip, or small talk about his colleagues (though his interest in the personal lives of his clerks was boundless—he treated us as members of his family). But
he saw himself as a teacher, supplementing and enriching what we might have learned in the classroom, and he believed that understanding the dynamics among the justices was a crucial part of our education, especially if, as he hoped, we would go on to become teachers. He wanted us and our students to know how law was truly made.
His role in the deliberations was not the principal point of these conversations, but one could see in an instant how the personal qualities that drew us to the man—the quickness and clarity of his mind, the warmth of his personality, the energy that he brought to argument, his sensitivity to the views of others—were present in the Conference Room and, even more, accounted for much of what happened there. In conference, Justice Brennan always had more than one vote. Who could possibly resist him when he grabbed you by the elbow, or put his arm around your shoulder, and began, "Look, pal . . . "?¹
For many, including myself, the Justice's retirement will only compound the sense of disaffection with the Court that now pervades the academy and large sections of the profession. For a previous generation, the Supreme Court was an institution of respect and admiration. Today the Court is seen as alien and hostile, less devoted to reaffirming and actualizing our national ideals than to protecting the established order, strengthening the hand of those who serve it and belittling those who might challenge it. It is this perception that contributed to the rise of the Critical Legal Studies movement and its many cognates during the late seventies and that gives this movement such widespread support in the academy today. Yet the same disaffection is shared by those who have more moderate or centrist commitments, those who still believe in the redemptive possibility of law.
This disaffection is rooted in the overall pattern of decisions over the last two decades. It is exacerbated, however, by the person of the Chief Justice who, despite his amiability, appears so determined in his mission that he is willing to disregard even the most elementary principles of the craft, and who managed, in his two confirmation proceedings, to cast doubt on his own integrity. In the first, he mischaracterized Justice Jackson's position in the deliberations leading to *Brown* in an effort to explain away a memorandum he wrote as a law clerk supporting *Plessy v. Ferguson*; in the second, he failed to explain adequately why he did not recuse himself in a case involving military surveillance of civilians during the Nixon administration in which, of course, he was an assistant attorney general.
The Warren Court had its critics, as was known to anyone who, like myself, studied at Harvard in the early sixties and witnessed Justice Brennan's anger—soon abated—at the leading faction of the Harvard professorate and a number of scholars associated with that school who denounced the Court in terms most emphatic. This anger expressed itself in many ways, one quite trivial: the Justice decided to end his practice of taking his clerks, as a matter of course, from his own law school. My fellow clerk was his first from
Yale—the stronghold of support for what the Court was then trying to accomplish—and on the last day of my clerkship I received a poem written by a friend in the name of the Justice entitled "Ode to My Last Harvard Clerk." This too would pass. But even in those days it was understood that Harvard did not speak for the profession as a whole, and even less so for the young, who looked to the Court as an inspiration, the very reason to enter the profession. Today, the situation is completely different. The disaffection with the Court is not localized, but pervades the profession, and swells within those who are just now being initiated into it. For some, the Rehnquist Court speaks to their ideals, but for most it is a source of cynicism and doubt.
Under these circumstances it is often difficult to see or present the body of learning known as constitutional law as worthy of respect and admiration. It is also difficult to know how the Court might continue to play its historic function in the republic: how can it speak authoritatively and effectively to the issues that divide us if the bar feels so alienated from it? To some, this loss of authority might not seem so tragic, given the present course of decision. But I wonder whether such a view is inappropriately short-sighted, seeing only what is, without regard to what was and what might be. For all that it accomplished, and still might, there is reason to believe in the Court. Yet I recognize that it becomes more and more difficult to do so under the terms and conditions of the governing coalition.
In musing on this predicament, my mind often turns to the Justice, and I glance at the photograph that appears as the frontispiece of these Tributes. While the Justice represents many things for me, probably none is more important than his attachment to the Court as an institution. It is this attachment that unifies the two phases of his career and accounts for the unusual role that he created for himself during the last twenty years, as he saw so much that he created and so much that he believes in dismantled and destroyed. Justice Brennan served through these years in a cheerful and determined manner, always with an unqualified devotion to the Court. I wonder whether those, like myself, who wish to honor him and that extraordinary age of American law that he helped bring into being, might not look to him as an exemplar and an inspiration. He resisted, tenaciously, but kept the faith—why can't we?
International Rhino Foundation Comments on Dallas Safari Club Auction of a Permit to Hunt a Black Rhino
29 October 2013
Much media attention has been directed this past week to the Dallas Safari Club's intention to auction off a permit to hunt a black rhino in Namibia, the proceeds to go towards preserving this magnificent and critically endangered species. The International Rhino Foundation does not condone the hunt, but recognizes that it is legal under Namibian and United States law, and under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). We respect Namibia's efforts to maintain a healthy rhino population as well as raise money for the important work of conserving the species. That said, we note that the fate of one hunted rhino pales in comparison to the nearly 800 rhinos lost to illegal poaching in South Africa alone this year and to escalating poaching losses in Namibia and other range countries where rhinos once thrived but now are barely hanging on.
The International Rhino Foundation is an apolitical, scientifically oriented conservation organization that funds and operates rhino conservation programs in Africa and Asia. The sale of a critically endangered species for a trophy hunt has brought forth a range of emotions and arguments. It is a complex and multi-faceted issue with the following points to consider.
We believe that the facts pertaining to intra-species fighting as a justification for the permit auction are overstated. Young bulls naturally displace old bulls without human intervention (in normal, viable rhino populations) to maximize the ratio of effective male breeders in those populations. The issue is more about what happens to the naturally displaced old bulls that no longer are breeding. In a strictly genetic sense, they can be considered non-essential to population survival and, in situations where the rhinos are stocked at or near an area's ecological carrying capacity (which ideally they should not be), then these bulls may be eating browse that younger animals need to maintain reproduction and to minimize loss of genetic diversity in the population.
Old bulls that lose body condition are sometimes gored in interactions with younger bulls when there is a high level of competition for food resources. The argument for hunting is that it is more humane to use a safari hunter's bullet on such geriatric rhinos to avoid a lingering death and to generate funds for rhino conservation (assuming that a mechanism to return such income to conservation is in place).
A rationale for safari hunting of such animals must be clearly and accurately presented with clear criteria to identify which rhino bulls are geriatric and truly not essential for the population's survival. The International Rhino Foundation recognizes that, if they are not poached or killed due to other reasons, black rhino bulls eventually reach an age at which they are marginalized in their population and do not contribute reproductively. If such bulls can be objectively identified, then an argument could be made that the safari hunting of these animals will have no negative biological impact on the rhino population and in specific circumstances may alleviate problems such as overstocking and fighting within species, although these should not be common problems in well-managed rhino populations. What is essential is that inviolate rules be in place to ensure that income from hunts is returned to rhino population sites to meet conservation needs such as protection and biological management.
There has recently been an increasing willingness within the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) to allow for trade in products from well-managed populations of endangered species. Trophy hunting of black rhinos in Namibia is not new; CITES issues Namibia an export quota of up to five hunter-taken black rhinos per year. This, however, is the first time that a permit has been offered outside of Namibian borders, and the first time that such hunts have received this level of international attention.
But let's talk about the issue that is being sidelined by the uproar over the Dallas Safari Club auction: rhinos are under siege. To date this year, at least 793 rhinos have been poached in South Africa, including many reproductive or pre-reproductive females. We stand to lose a century of rhino conservation success in Africa in the next few years if we can't stop, or slow, rhino poaching now.
Poachers typically operate as small, but well-armed gangs, sometimes backed by international organized crime syndicates. Their automatic weapons can take down a rhino as readily as a ranger, and they are not averse to murdering anyone who stands between them and their payday. Some professional poachers prefer heavy caliber sporting rifles, which have a greater knockdown effect on rhinos than AK-47s.
The high-stakes black market trade in rhino horn has been linked to international terrorism, specifically to the recent mall attack in Nairobi. In July, US President Obama signed an executive order to combat wildlife trafficking, recognizing illegal wildlife trafficking as an escalating international crisis and establishing a task force to deal with the issue.
Sadly, all wild populations of rhinoceros are at serious risk from poaching. Namibia lost one rhino to poaching in 2012 and, to date in 2013, has lost four. This low poaching rate is most likely due to a combination of proactive protection and management, a law-abiding society, inaccessibility of rhino sites, and luck. Experience has shown that
situations in Africa can change on a dime. It is a valid question to ask if it is worth sacrificing one rhino to contribute financially to the conservation and protection of a larger population.
The International Rhino Foundation has funded and operated rhino protection and conservation programs in Africa and Asia for 20 years. Rhino conservation is expensive. Every year we scramble to raise enough funds to support our work. The political and economic realities in the range countries (national commitment versus corruption, wildlife-based land-use versus subsistence farming, etc.) are the factors that really determine the fate of rhinos. National contributions to conservation budgets, in countries such as Namibia, considerably exceed contributions from the international donor community. It is inevitable that hunts such as the one proposed would generate confusion and concern among many members of the public who are aware of the plight of the world's rhino species. But, it is also a reality that financial constraints and land-use challenges within the rhino range countries compel the authorities in those countries to consider various income-generating opportunities even if they involve limited, sustainable hunting of endangered species such as rhinos. It will not be possible for international conservation agencies to engage with and positively influence such countries in their rhino conservation endeavors unless objective consideration and respect is shown for their rhino management decisions, even those that are internationally controversial.
Finding middle ground between the different perspectives on rhino hunting is very difficult. Our position is therefore based purely on the optimization of conservation advantage for rhino species within their wild populations. The International Rhino Foundation will work with its international partners to regularly review the front-line conservation outcomes of safari hunts of rhinos in South Africa and Namibia and will draw attention to any negative outcomes or irregularities.
ADDITIONAL INFORMATION
State of the Rhino
Five living rhino species – white, black, greater one-horned, Javan and Sumatran – can still be found in Africa and Asia, and it's conceivable that they once numbered in the millions. Unfortunately, the global population has plummeted to less than 30,000 animals due to poaching, trophy hunting and habitat loss. Two species, the Javan and the Sumatran, now number less than 150 individuals combined, and could easily disappear within our lifetime. www.rhinos.org/state-of-the-rhino
Black Rhinos
During the last century, the black rhino suffered the most drastic decline in total numbers of all rhino species. Between 1970 and 1992, the population of this species decreased by 96%. In 1970, it was estimated that there were approximately 65,000 black rhinos in Africa – but, by 1993, there were only 2,300 surviving in the wild. Intensive anti-poaching efforts have had encouraging results since 1996. Numbers have been recovering and still are increasing very slowly through targeted conservation management, including strategic translocations to consolidate isolated populations, active management and in some countries, de-horning. The wild population of black rhinos is now approximately 5,055. Namibia's black rhino population of approximately 1,800 animals is the second largest next to South Africa's, followed by Kenya and Zimbabwe. www.rhinos.org/black-rhino
The black rhino was listed in Appendix I of CITES in 1977 and under the Endangered Species Act by the US Fish and Wildlife Service in 1980, and is listed as Critically Endangered on the IUCN Red List of Threatened Species.
The International Rhino Foundation's Black Rhino Program
The International Rhino Foundation was founded in response to the black rhino poaching crisis in Zimbabwe in the 1990s, when black rhinos were nearly wiped out by large-scale, organized poaching, leaving only 370 animals by 1993. By 2000, the population recovered to approximately 450 individuals, and as of today, Zimbabwe's black rhinos still number around 450 animals, representing the fourth largest black rhino population in Africa. These rhinos are spread over private and state-owned lands, with almost 400 black rhinos and 227 white rhinos in the South-East Lowveld private conservancies, where we work through our partner, the Lowveld Rhino Trust. www.lowveldrhinotrust.org
Conservation efforts in the Lowveld have helped increase the region's black rhino population from 4% of the national total in 1990 to 88% at present, which represents about 7% of the continental total. This incredibly significant increase has been achieved through biological management, strategic translocations of rhinos, support for anti-poaching activities, informant systems, community benefits schemes, working with authorities to track, apprehend, and prosecute poachers, and other non-consumptive means. Our team in Zimbabwe operates under difficult and often unpredictable economic and political conditions.
In South Africa, a myriad of factors have combined to create the poaching crisis we face today. It has developed over a period of time, with an increased presence of Chinese and other Asian business interests. The International Rhino Foundation recognizes that dealing with the complexities of the poaching crisis in South Africa is well beyond the manageable interests of a small organization like ours. We have focused on a small niche: providing training and equipment to rangers in under-represented areas, and exploring the use of tracker dogs to combat poaching.
Rhino Horn Trade
Rhino horn has been used in China for traditional medicine for centuries, and later spread to Japan, Korea and Southeast Asia. The newest market for rhino horn is
Vietnam, where it is used as a high-value gift item, as a purported hangover preventative, and tragically, sold as a "cure" for cancer.
Vietnam has been the world's leading rhino horn consumer since 2005. Vietnam joined CITES in 1994 and while the country prohibits domestic trade, there is no meaningful enforcement. China joined CITES in 1981 and prohibited all domestic trade in rhino horn and registered and sealed all stockpiles in 1993. However, China is still the second-largest destination for illegal horn. Approximately 100 white rhinos have been imported by China from South Africa; TRAFFIC helped to expose a plan to farm rhinos in 2010.
Rhino horn markets have been shut down in Japan, Korea, and Taiwan. All three countries joined CITES and then subsequently banned rhino horn from their pharmacopoeia. Korea and Taiwan were threatened with Pelly Amendment sanctions by the United States prior to banning rhino horn use.
According to TRAFFIC, rhino horn has been classified as a "heat-clearing" drug with detoxifying and fever-reducing properties, and was typically combined with other medicinal ingredients for treatment of a wide range of conditions. Studies in China, where rhino horn is permitted to be used in research only to identify viable substitutes for it, found statistically significant pharmacological effects for rhino horn: anti-pyretic, anti-inflammatory, analgesic, and pro-coagulant, among others. In contrast, studies done in the United Kingdom and South Africa found no pharmacological effects at all. (www.traffic.org)
CITES
CITES (the Convention on International Trade in Endangered Species of Wild Fauna and Flora) is a multilateral treaty to protect endangered plants and animals. Species are proposed for inclusion in or deletion from the CITES Appendices at meetings of the Conference of the Parties (CoP), which are held every 3 years. The most recent CoP was held in Bangkok in March. Namibia joined CITES in 1990.
There has been a recent, increased willingness among the Parties to allow for trade in products from well-managed populations. CITES issues Namibia an export quota of up to five hunter-taken black rhinos per year. www.cites.org
US Fish and Wildlife Service
The USFWS Division of Management Authority has not yet issued a permit to the Dallas Safari Club to return a rhino trophy to the US. It is our understanding that, if a permit were issued, the individual hunter would first have to pass certain background checks, and that, to obtain USFWS approval for import of the trophy, the "take" of the animal chosen for the hunt would have to be approved as beneficial to the conservation of the species.
In March 2013, the U.S. Fish and Wildlife Service (USFWS) issued a permit for the importation of a sport-hunted black rhinoceros trophy taken in Namibia in 2009. According to its statement, the USFWS "granted this permit after an extensive review of Namibia's black rhino conservation program, in recognition of the role that well-managed, limited sport hunting plays in contributing to the long-term survival and recovery of the black rhino in Namibia."
http://www.fws.gov/international/permits/black-rhino-import-permit.html#2
| Sunday | Monday | Tuesday | Wednesday | Thursday | Friday |
|---|---|---|---|---|---|
| 9/1 | Genesis 1-2 | John 1:1-3; Psalms 8, 104 | Genesis 3-5 | Genesis 6-7 | Genesis 8-9; Psalm 12 |
| 9/8 | Genesis 12-13 | Genesis 14-16 | Genesis 17-19 | Genesis 20-23 | Genesis 24-26 |
| 9/15 | Genesis 30-33 | Genesis 34-37 | Genesis 38-40 | Genesis 41-43 | Genesis 44-46 |
| 9/22 | Job 1-5 | Job 6-9 | Job 10-13 | Job 14-17 | Job 18-21 |
| 9/29 | Job 25-28 | Job 29-32 | Job 33-36 | Job 37:1-40:5; Psalm 19 | Job 40:6-42:17; Psalm 29 |
| 10/6 | Exodus 5-9 | Exodus 10-13 | Exodus 14-18 | Exodus 19-21 | Exodus 22-24 |
| 10/13 | Exodus 29-32 | Exodus 33-36 | Exodus 37-40 | Leviticus 1-4 | Leviticus 5-7 |
| 10/20 | Leviticus 11-14 | Leviticus 15-18 | Leviticus 19-22 | Leviticus 23-25 | Leviticus 26-27; Numbers 1-2 |
| 10/27 | Numbers 6-9 | Numbers 10-13; Psalm 90 | Numbers 14-16; Psalm 95 | Numbers 17-20 | Numbers 21-24 |
| 11/3 | Numbers 29-32 | Numbers 33-36 | Deuteronomy 1-3 | Deuteronomy 4-7 | Deuteronomy 8-11 |
| 11/10 | Deuteronomy 16-19 | Deuteronomy 20-23 | Deuteronomy 24-27 | Deuteronomy 28-30 | Deuteronomy 31-34 |
| 11/17 | Joshua 3-6 | Joshua 7-10 | Joshua 11-14 | Joshua 15-18 | Joshua 19-22 |
| 11/24 | Judges 2-5 | Judges 6-9 | Judges 10-13 | Judges 14-18 | Judges 19-21 |
| 12/1 | 1 Samuel 1-3 | 1 Samuel 4-8 | 1 Samuel 9-12 | 1 Samuel 13-16 | 1 Samuel 17-20; Psalm 59 |
| Sunday | Monday | Tuesday | Wednesday | Thursday | Friday |
|---|---|---|---|---|---|
| 12/8 | Psalms 7, 27, 31, 34, 52 | Psalms 56, 120, 140-142 | 1 Samuel 25-27; Psalms 17, 73 | Psalms 35, 54, 63, 18 | 1 Samuel 28-31; 1 Chronicles 10 |
| 12/15 | 2 Samuel 1-4 | Psalms 6, 9-10, 14, 16, 21 | 1 Chronicles 1-2; Psalms 43-44 | Psalms 49, 84-85, 87 | 1 Chronicles 3-5 |
| 12/22 | Psalms 81, 88, 92-93 | 1 Chronicles 7-9 | 2 Samuel 5:1-10; 1 Chronicles 11-12; Psalm 133 | 2 Samuel 5:11-6:23; 1 Chronicles 13-16 | Psalms 15, 23-25, 47 |
| 12/29 | 2 Samuel 7; 1 Chronicles 17; Psalms 1-2, 33, 127, 132 | 2 Samuel 8-9; 1 Chronicles 18 | 2 Samuel 10; 1 Chronicles 19; Psalms 20, 53, 60, 75 | Psalms 65-67, 69-70 | 2 Samuel 11-12; 1 Chronicles 20; Psalm 51 |
| 1/5 | 2 Samuel 13-15 | Psalms 3-4, 13, 28, 55 | 2 Samuel 16-18 | Psalms 26, 40-41, 58, 61-62, 64 | 2 Samuel 19-21; Psalms 5, 38, 42 |
| 1/12 | Psalms 97-99 | 2 Samuel 24; 1 Chronicles 21-22; Psalm 30 | Psalms 108-109 | 1 Chronicles 23-26 | Psalms 131, 138-139, 143-145 |
| 1/19 | Psalms 111-118 | 1 Kings 1-2; Psalms 37, 71, 94 | Psalm 119:1-88 | 1 Kings 3-4; 2 Chronicles 1; Psalm 72 | Psalm 119:89-176 |
| 1/26 | Song of Solomon 5:2-7:13, 8:14; Psalm 45 | Proverbs 1-4 | Proverbs 5-8 | Proverbs 9-12 | Proverbs 13-16 |
| 2/2 | Proverbs 21-24 | 1 Kings 5-6; 2 Chronicles 2-3 | 1 Kings 7-8; Psalm 11 | 2 Chronicles 4-7; Psalms 134, 136 | Psalms 146-150 |
| 2/9 | Proverbs 27-29 | Ecclesiastes 1-6 | Ecclesiastes 7-12 | 1 Kings 10-11; 2 Chronicles 9; Psalms 30-31 | 1 Kings 12; 2 Chronicles 10 |
| 2/16 | 1 Kings 15:1-24; 2 Chronicles 13-16 | 1 Kings 15:25-16:34; 2 Chronicles 17 | 1 Kings 17-19 | 1 Kings 20-21 | 1 Kings 22; 2 Chronicles 18-20 |
| 2/23 | 2 Kings 5:1-8:15 | 2 Kings 8:16-29; 2 Chronicles 21:1-22:9 | 2 Kings 9-11; 2 Chronicles 22:10-23:21 | 2 Kings 12-13; 2 Chronicles 24 | 2 Kings 14-15; 2 Chronicles 25-27 |
| 3/2 | Amos 1-5 | Amos 6-9 | Hosea 1-5 | Hosea 6-9 | Hosea 10-14 |
| 3/9 | Isaiah 5-8 | Isaiah 9-12 | Micah 1-4 | Micah 5-7 | 2 Kings 16-17; 2 Chronicles 28 |
| 3/16 | Isaiah 18-22 | Isaiah 23-26 | 2 Kings 18:1-8; 2 Chronicles 29-31; Psalm 48 | Isaiah 27-30 | Isaiah 31-35 |
| 3/23 | Isaiah 38-39; 2 Kings 20:1-21; 2 Chronicles 32:24-33 | Isaiah 40-42; Psalm 46 | Isaiah 43-45; Psalm 80 | Isaiah 46-49; Psalm 135 | Isaiah 50-53 |
| 3/30 | Isaiah 59-63 | Isaiah 64-66 | 2 Kings 21; 2 Chronicles 33 | Nahum 1-3 | Zephaniah 1-3 |
| 4/6 | Habakkuk 1-3 | Joel 1-3 | Jeremiah 1-4 | Jeremiah 5-8 | Jeremiah 9-12 |
| 4/13 | Jeremiah 17-20 | Jeremiah 21-24 | Jeremiah 25-28 | Jeremiah 29-32 | Jeremiah 33-37 |
| 4/20 | 2 Kings 24-25; 2 Chronicles 36:1-21; Jeremiah 52 | Jeremiah 41-44 | Obadiah 1; Psalms 82-83 | Jeremiah 45-48 | Jeremiah 49-50 |
| 4/27 | Lamentations 1:1-3:36 | Lamentations 3:37-5:22 | Ezekiel 1-4 | Ezekiel 5-8 | Ezekiel 9-12 |
| 5/4 | Ezekiel 17-20 | Ezekiel 21-24 | Ezekiel 25-28 | Ezekiel 29-32 | Ezekiel 33-36 |
| 5/11 | Ezekiel 41-44 | Ezekiel 45-48 | Daniel 1-3 | Daniel 4-6 | Daniel 7-9 |
| 5/18 | 2 Chronicles 36:22-23; Ezra 1-3 | Ezra 4-6 | Haggai 1-2 | Zechariah 1-7 | Zechariah 8-14 |
| 5/25 | Esther 6-10 | Malachi 1-4; Psalm 50 | Ezra 7-10 | Nehemiah 1-4 | Nehemiah 5-7 |
| Sunday | Monday | Tuesday | Wednesday | Thursday | Friday | Saturday |
|---|---|---|---|---|---|---|
| 6/1 | Nehemiah 11-13; Psalm 126 | Psalm 106; John 1:4-14 | Matthew 1; Luke 1:1-2:38 | Matthew 2; Luke 2:39-52 | Matthew 3; Mark 1:1-11; Luke 3; John 1:15-34 | Matthew 4:1-22, 13:54-58; Mark 1:12-20, 6:1-6; Luke 4:1-30, 5:1-11; John 1:35-2:1-12 |
| 6/8 | Matthew 4:23-25, 8:14-17; Mark 1:21-39; Luke 4:31-44 | John 3-5 | Matthew 8:1-4, 9:1-17, 12:1-21; Mark 1:40-3:21; Luke 5:12-6:19 | Matthew 5-7; Luke 6:20-49, 11:1-13 | Matthew 8:5-13, 11:1-30; Luke 7 | Matthew 12:22-50; Mark 3:22-35; Luke 8:19-21, 11:14-54 |
| 6/15 | Matthew 13:1-53; Mark 4:1-34; Luke 8:1-18 | Matthew 8:18-9:38; Mark 4:35-5:43; Luke 8:22-9:62 | Matthew 10, 14; Mark 6:7-56; Luke 9:1-17; John 6 | Matthew 15; Mark 7:1-8:10 | Matthew 16; Mark 8:11-9:1; Luke 9:18-27 | Matthew 17-18; Mark 9:2-50; Luke 9:28-56 |
| 6/22 | John 7-9 | Luke 10; John 10:1-11:54 | Luke 12:1-13:30 | Luke 14-15 | Matthew 19; Mark 10:1-31; Luke 16:1-18:30 | Matthew 20; Mark 10:32-52; Luke 18:31-19:27 |
| 6/29 | Matthew 21:1-22, 26:6-13; Mark 11:1-26, 14:3-9; Luke 19:28-48; John 2:13-25, 11:55-12:36 | Matthew 21:23-22:14; Mark 11:27-12:12; Luke 20:1-18; John 12:37-50 | Matthew 22:15-23:39; Mark 12:13-44; Luke 20:19-21:4, 13:31-35 | Matthew 24-25; Mark 13; Luke 21:5-38 | Matthew 26:1-5, 14-35; Mark 14:1-2, 10-31; Luke 22:1-38; John 13 | John 14-17 |
| Sunday | Monday | Tuesday | Wednesday | Thursday | Friday |
|---|---|---|---|---|---|
| 7/6 | Matthew 26:36-75; Mark 14:32-72; Luke 22:39-71; John 18:1-27 | Matthew 27:1-31; Mark 15:1-20; Luke 23:1-25; John 18:28-19:16 | Matthew 27:32-66; Mark 15:21-47; Luke 23:26-56; John 19:17-42; Psalm 22 | Matthew 28; Mark 16; Luke 24; John 20-21 | Acts 1-4; Psalm 110 |
| 7/13 | Acts 9-11 | Acts 12-14 | James 1-5 | Galatians 1-3 | Galatians 4-6 |
| 7/20 | Acts 17:1-18:18 | 1 Thessalonians 1-5 | 2 Thessalonians 1-3 | Acts 18:19-19:41 | 1 Corinthians 1-4 |
| 7/27 | 1 Corinthians 9-11 | 1 Corinthians 12-14 | 1 Corinthians 15-16 | 2 Corinthians 1-4 | 2 Corinthians 5-9 |
| 8/3 | Acts 20:1-3; Romans 1-4 | Romans 5-8 | Romans 9-12 | Romans 13-16 | Acts 20:4-23:35 |
| 8/10 | Acts 27-28 | Philippians 1-4 | Philemon 1; Colossians 1-4 | Ephesians 1-4 | Ephesians 5-6; Titus 1-3 |
| 8/17 | 1 Peter 1-5 | Hebrews 1-4 | Hebrews 5-8 | Hebrews 9-13 | 2 Timothy 1-4 |
| 8/24 | 1 John 1-5; 2 John 1; 3 John 1 | Revelation 1-5 | Revelation 6-10 | Revelation 11-13 | Revelation 14-18 |
* This Reading Plan can be found at www.bible.com. After creating an account, click on "Reading Plans" on the left side of the screen. Select "All Plans" at the top right, then click on "Whole Bible" on the left. Then scroll down to "Reading God's Story: One-Year Chronological Plan." Select this plan and click "Start this Plan" at the top right of your screen. You can now read your plan on the computer or have your daily readings emailed to you.
* You can also find this plan using the Bible App on your smartphone or tablet.
RESOLUTION NO. 23-03-341
ACCEPT QUOTE/AUTHORIZE PURCHASE
ELECTRONIC SEARCH WARRANT SOFTWARE (EZ WARRANT SOFTWARE)
MIAMI COUNTY COMMON PLEAS/MUNICIPAL COURTS
Mr. Simmons introduced the following resolution and moved it be adopted:
WHEREAS, on March 11, 2021, the President of the United States signed into law the American Rescue Plan Act (“ARPA”), which authorized the disbursement of $362,000,000,000.00 in federal fiscal recovery aid to state and local governments (State and Local Aid), including $65,000,000,000.00 in direct aid to counties, to mitigate the effects of the COVID-19 pandemic; and
WHEREAS, on July 30, 2021, Miami County received direct payments from the United States (US) Treasury of the federal fiscal recovery aid to state and local governments (State and Local Aid) authorized by the American Rescue Plan Act (ARPA); and
WHEREAS, the Department of the Treasury issued the Interim Final Rule on May 10, 2021, and the Final Rule on January 6, 2022 (effective April 1, 2022), to provide guidance on permissible uses; and
WHEREAS, the Miami County Common Pleas Court, in conjunction with the Miami County Municipal Courts, Troy, Ohio, requests funding to purchase software to allow for the processing of electronic search warrants through EzWarrant by StepMobile, Inc., Mansfield, Ohio, which will allow the most efficient and risk-free processing of criminal search warrants; and
WHEREAS, EzWarrant’s developer, StepMobile, Inc., is a state-approved vendor and the current vendor of the court’s Probation Case Management System; and
WHEREAS, the EzWarrant software by StepMobile, Inc., Mansfield, Ohio, will allow law enforcement to create and upload their Affidavit and Search Warrant with supporting documentation onto the EzWarrant software portal from anywhere, and the platform is an efficient, timesaving tool to perform an essential legal function of both law enforcement and the courts; and
WHEREAS, the County Courts desire to enter into a two-year EzWarrant Subscription Agreement, effective April 1, 2023.
NOW, THEREFORE, BE IT RESOLVED, by the Board of County Commissioners of Miami County, Ohio to accept the attached quote from EzWarrant by StepMobile, Inc., Mansfield, Ohio and authorize the Miami County Common Pleas Court and the Miami County Municipal Courts to enter into a two-year EzWarrant Subscription Agreement for the software platform with EzWarrant by StepMobile, Inc., Mansfield, Ohio. The cost of the software shall not exceed $4,800.00 annually ($9,600.00 for two years), effective April 1, 2023, which will be paid from Fund 198.
Mr. Mercer seconded the motion and the Board voted as follows upon roll call:
Mr. Mercer, Yea; Mr. Westfall, Yea; Mr. Simmons, Yea;
DATED: March 23, 2023
CERTIFICATION
I, Janelle S. Barga, Clerk to the Board of Miami County Commissioners, do hereby certify that this is a true and correct transcript of action taken by the board under the date of March 23, 2023.
Janelle S. Barga, Clerk
Cc: Journal
Auditor
Requesting Department(s)
MCC – Carrie Vaughan
October 4, 2022
TO: Charlotte Colley, Miami County Commissioner’s Administrator
Miami County Commissioners
ATTN: ARPA Committee
RE: Request for Funding for Electronic Search Warrant Software
Dear Committee Members:
The Miami County Common Pleas Court and the Miami County Municipal Court request funding to purchase computer software to allow for the processing of electronic search warrants (“EzWarrant Software”). The EzWarrant software has been developed by StepMobile, Inc., which is a state-approved vendor and the current vendor of the court’s Probation Case Management System. Both Courts believe the purchase of the EzWarrant software will allow the most efficient and risk-free processing of criminal search warrants.
I. Current Process of Requesting and Executing Criminal Search Warrants
It has long been the standard practice of courts to process requests for criminal search warrants by paper. Law enforcement generates a search warrant in-house as a word-processing document, prints it out, takes it to the prosecutor for review, and then physically appears before the judge/magistrate to request the search warrant, be sworn in, and obtain written approval. If the judge approves, law enforcement then must serve the warrant on a particular person or place, create a written inventory of what is seized, and return it by paper to the Clerk of Court for filing. All law enforcement agencies within the county and adjacent counties request search warrants in this fashion. Some of the requests are time-sensitive, requiring an officer to transport the request on paper to a judge wherever the judge may be located (home or office), at all times of the day or night and on weekends. As you can imagine, this can require an officer to travel from one side of the county to the other. One must consider the officer’s travel time, the officer’s downtime waiting on the availability of a judge, and then the time it takes to serve the executed search warrant, likely in another area of the county. Should this process be delayed or derailed, an officer could lose critical evidence necessary to prove a case or leave a dangerous defendant at large.
This is an issue across the state. Understanding the problems associated with the logistics of this process as well as safety and efficiency, the Ohio Supreme Court recently amended the court rules to allow for electronic processing of search warrants.
II. Benefits of Electronic Search Warrants
Electronic search warrants would allow law enforcement to create and upload their Affidavit and Search Warrant with supporting documentation onto the EzWarrant software portal from anywhere (their car, office, or home). The portal sends the request electronically to prosecutors, who review it and approve it electronically for submission to the judges. Once that is done, the system electronically notifies the judge by text, email, or robo-call that a search warrant has been requested. A judge can access and review the search warrant electronically, determine whether it meets the legal standard, and call or FaceTime the officer to swear the officer in and electronically approve the warrant. The judge can also reject the request electronically or seek more information, and if a request was previously rejected, the judicial officer can view the reasons for the rejection. This is all done electronically via the EzWarrant portal. It eliminates officers traveling around the county to find and appear before a judge to request approval and execution of the search warrant. It eliminates officers coming to the private homes of judges at all hours of the day and on weekends. It also reduces the number of interruptions in a judge’s regular docket to entertain a search warrant request. This is an efficient, timesaving tool to perform an essential legal function of both law enforcement and the courts.
III. COVID Related
As a result of COVID and with the understanding of the importance of limiting social contact, the EzWarrant software eliminates unnecessary exposure or contact between judges, court staff/personnel, family members, and law enforcement, both at their offices and homes. Yes, we all work with the public, but utilizing this electronic process for obtaining a search warrant diminishes risk exposure. This is a mitigation and prevention effort by the Courts and law enforcement to reduce exposure of persons and the costs associated with exposure (e.g., healthcare costs, loss of staff, downtime) while still maintaining and continuing our essential legal duties efficiently.
IV. Stakeholder Buy-In
The Courts have met several times with law enforcement agencies countywide, prosecutors both Municipal and County, the Clerk of Court, and IT, to determine the level of interest in using the EzWarrant software and its compatibility with current equipment. 100% of the law enforcement agencies have responded with approval of usage by their officer personnel. The Clerk is willing to accept search warrant returns electronically, and IT confirmed the software met standards and was compatible with existing systems. The Courts view this as a partnership with our community stakeholders and a benefit to all agencies/departments countywide.
The Courts have received a quote from StepMobile, Inc., which is attached for your review. The cost to each court is $200 per month, or $2,400 annually for each court ($4,800 annually in total). This would permit each court to designate four (4) authorized (judicial) users. We would ask the Committee to approve ARPA funds for the full two-year limit.
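As a quick cross-check of the quoted figures, the arithmetic behind the request can be sketched as follows (the variable names are illustrative, not taken from the quote):

```python
# Illustrative sketch of the quoted cost arithmetic:
# $200/month per court, two courts, two-year ARPA funding limit.
MONTHLY_FEE_PER_COURT = 200
COURTS = 2
YEARS = 2

annual_per_court = MONTHLY_FEE_PER_COURT * 12   # $2,400 per court per year
annual_total = annual_per_court * COURTS        # $4,800 per year for both courts
two_year_total = annual_total * YEARS           # $9,600 for the two-year agreement
```

These totals match the $2,400/court, $4,800 annual, and $9,600 two-year figures cited in the letter and resolution.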
Thank you for your consideration of this request. Should you have any questions or need further information, please feel free to contact Judge Jeannine N. Pratt.
Respectfully,
[Signature]
Jeannine N. Pratt
Common Pleas Court Administrative Judge
[Signature]
Gary A. Nasal
Municipal Court Administrative Judge
ezWarrant Subscription Agreement
This Software Subscription Agreement hereinafter referred to as “Agreement” is made between StepMobile, LLC (hereinafter referred to as “Seller”) located at 70 West 4th Street, Mansfield, Ohio 44903 and Miami County by and through its Board of County Commissioners (hereinafter referred to as “County”) with its principal place of business at 201 W. Main Street, Troy, Ohio 45373. This Agreement is effective as of the date of the last signature below (“Effective Date”). The parties agree as follows:
BY ACCEPTING THIS AGREEMENT, EITHER BY INDICATING COUNTY’S ACCEPTANCE, BY EXECUTING AN ORDER FORM THAT REFERENCES THIS AGREEMENT, OR BY UTILIZING THE SERVICES (DEFINED BELOW), COUNTY AND SELLER AGREE TO THIS AGREEMENT. THIS AGREEMENT IS A LEGALLY BINDING CONTRACT BETWEEN MIAMI COUNTY, OHIO AND SELLER AND SETS FORTH THE TERMS THAT GOVERN THE LICENSE PROVIDED TO THE COUNTY HEREUNDER AS WELL AS THE OBLIGATIONS OF THE PARTIES. ANY CHANGES, ADDITIONS OR DELETIONS BY EITHER PARTY TO THIS AGREEMENT WILL NOT BE ACCEPTED AND WILL NOT BE A PART OF THIS AGREEMENT UNLESS AGREED UPON AND REDUCED TO WRITING AS AN AMENDMENT TO THIS CONTRACT.
SCOPE OF SERVICES: The Seller’s portal provides “Users,” defined as law enforcement officers, prosecutors, and judges who have legal authority, the ability to seek, review, and issue search warrants in the county by electronic means via web-enabled devices. Notifications are sent automatically via SMS, email, or voice call to interested parties. All authorized users will be assigned a security PIN authentication code that enables their signatures to be affixed to the document. The process will comply with all Ohio statutes, rules, and applicable Ohio law relative to electronic warrants.
1. PROVISION OF SERVICES.
1.1 Services Provided. Seller is providing a cloud-based search warrant processing service. This service consists of a law enforcement platform to fill out, sign, and submit a warrant to the court. The system also includes a court platform to receive the submitted warrant, review it, sign it, and return the warrant to the law enforcement officer on a legally executable form. A platform for prosecutor review and clerk access is also available. The “swearing-in” process is conducted via Seller’s built-in phone PBX. The call is recorded and stored in Seller’s system for a minimum of seven years. Recordings can be accessed anytime in the court platform by the County or its Users. Before deleting or otherwise destroying a recording, Seller shall contact County and offer to provide copies of all material recorded by County or its Users. Users shall not have the power to modify any court-signed and issued process.
1.2 Services License. Upon payment of fees and subject to continuous compliance with this
Agreement, Seller hereby grants County a limited, nonexclusive, non-transferable license to access, use, and install (if applicable) the Services, Software, and Documentation during the Term (defined below). County may provide, make available to, or permit County's users to use or access the Services, the Software, or Documentation, in whole or in part. County agrees that Seller may deliver the Services or Software to County with the assistance of its Affiliates, licensors, and service providers. During the Term (as defined herein), Seller may update or modify the Services or Software or provide alternative Services or Software to reflect changes in, among other things, laws, regulations, rules, technology, industry practices, patterns of system use, and availability of third-party programs. Seller's updates or modifications to the Services or Software, or provision of alternative Services or Software, will not materially reduce the level of performance, functionality, security, or availability of the Services or Software during the Term of this Agreement. If Seller decides to terminate this contract, it shall give County written notice sixty (60) days in advance of the date of termination, at the address set forth herein, delivered by registered or certified U.S. Mail.
2. LICENSE RESTRICTIONS; OBLIGATIONS.
2.1 License Restrictions. County may not (i) provide, make available to, or permit individuals other than County's authorized users to use or access the Services, the Software, or Documentation, in whole or in part; (ii) copy, reproduce, republish, upload, post, or transmit the Services, Software, or Documentation (except for backup or archival purposes, which will not be used for transfer, distribution, sale, or installation on County's Devices); (iii) license, sell, resell, rent, lease, transfer, distribute, or otherwise transfer rights to the Services, Software, or Documentation unless as authorized in this Agreement; (iv) modify, translate, reverse engineer, decompile, disassemble, create derivative works, or otherwise attempt to derive the source code of the Services, Software, or Documentation; (v) create, market, or distribute add-ons or enhancements to, or incorporate into another product, the Services or Software without prior written consent of Seller; (vi) remove any proprietary notices or labels on the Services, Software, or Documentation, unless authorized by Seller; (vii) access or use the Services, Software, or Documentation (a) if County (or any of County's Users) is a direct competitor of Seller; (b) for the purposes of monitoring the availability, performance, or functionality of the Services or Software; or (c) for any other benchmarking or competitive purposes; (viii) use the Services or Software to store or transmit infringing, libelous, unlawful, or tortious material or to store or transmit material in violation of third-party rights, including privacy rights; (ix) use the Services or Software to violate any rights of others; (x) use the Services or Software to store or transmit malicious code, Trojan horses, malware, spam, viruses, or other destructive technology ("Viruses"); (xi) interfere with, impair, or disrupt the integrity or performance of the Services or any other third party's use of the Services; use the Services in a manner that results in
excessive use of bandwidth or storage; or (xii) alter, circumvent, or provide the means to alter or circumvent the Services or Software, including technical limitations, recurring fees, or usage limits.
2.2 County's Obligations. County acknowledges, agrees, and warrants that: (i) if County becomes aware of any violation of any of the provisions set forth herein by County's users, County will immediately notify Seller and will immediately terminate the offending party's access to the Services, Software, and Documentation to the extent possible; (ii) County will comply with all applicable local, state, federal, and international laws; (iii) County will establish a constant internet connection and electrical supply for the use of the Services, ensure the Software is installed on a supported platform as set forth in the Documentation, and ensure the Services and Software are used only with public-domain or properly licensed third-party materials; (iv) County will install the latest version of the Software on Devices accessing or using the Services; (v) County is legally able to process County's Data and to provide County's Data to Seller and its Affiliates, including obtaining appropriate consents or rights for such processing, as outlined further herein, has the right to access and use County's infrastructure, including any system or network, to obtain or provide the Services and Software, and will be solely responsible for the accuracy, security, quality, integrity, and legality of the same, as agreed upon by Seller and County's IT Department; and (vi) County will keep County's registration information, billing information, passwords, and technical data accurate, complete, secure, and current for as long as County subscribes to the Services, Software, and Documentation.
3. PROPRIETARY RIGHTS.
3.1 Ownership of Seller’s Intellectual Property. The Services, Software, and Documentation are licensed, not sold. Use of “purchase” in conjunction with licenses of the Services, Software and Documentation shall not imply a transfer of ownership. Except for the limited rights expressly granted by Seller to County, County acknowledges and agrees that all right, title and interest in and to all copyright, trademark, patent, trade secret, intellectual property (including without limitation algorithms, business processes, improvements, enhancements, modifications, derivative works, information collected and analyzed in connection with the Services) and other proprietary rights, arising out of or relating to the Services, the Software, the provision of the Services or Software, and the Documentation, belong exclusively to Seller or its suppliers or licensors. All rights, title, and interest in and to content, which may be accessed through the Services or the Software, is the property of the respective owner and may be protected by applicable intellectual property laws and treaties. This Agreement gives County no rights to such content, including use of the same except as to such data and recordings generated by County and stored on Seller’s system. Seller is hereby granted a royalty-free, fully-paid, worldwide, exclusive, transferable, sub-licensable, irrevocable and perpetual license to use or incorporate into its products and services any information, suggestions, enhancement requests, recommendations or other feedback provided by County or County’ Users relating to the Services or Software. All rights not expressly granted under this Agreement are reserved by Seller. All rights not expressly granted under this Agreement are reserved by the County.
3.2 Ownership of County's Data. County and County's Users retain all right, title, and interest in and to all copyright, trademark, patent, trade secret, intellectual property, and other proprietary rights in and to County's Data. Seller has no right to transfer any data provided to it to any person or entity without the express written consent of County. None of the data received by Seller from County shall be made public for the benefit of Seller's business or for any other reason without County's written permission. None of the data provided by County to Seller shall be provided to any third party or any other entity except as necessary to make Seller's system operate, and Seller shall explicitly advise any such recipient of County's data that disclosure of the same outside the permissions set forth herein might result in criminal or civil penalties. Seller shall not destroy any data provided to it by County without the express consent of County, and data not destroyed shall be returned to County. Seller shall not retain any copies of data returned to County. Violation of this provision may constitute a criminal violation under Ohio law, and the Party(ies) responsible for the theft or dissemination of such data may be subject to criminal prosecution. All data received by Seller from County shall be retained by Seller until returned to County and shall not be destroyed or disseminated except as provided herein, and all such data shall be returned to County upon termination of this Agreement.
4. TERM; TERMINATION.
4.1 Term. Unless terminated earlier in accordance with this Section, this Agreement will begin on the Effective Date and will continue for a period of 12 months.
4.2 County's Termination Rights. County may terminate the Agreement by providing written notice to Seller not later than sixty (60) days prior to the date of termination or immediately if Seller becomes subject to bankruptcy or any other proceeding relating to insolvency, receivership, liquidation, or assignment for the benefit of creditors; Seller infringes or misappropriates County's data or Seller breaches this Agreement.
4.3 Seller's Suspension or Termination Rights. Seller may suspend or terminate this Agreement upon sixty (60) days' prior written notice or immediately if County becomes subject to bankruptcy or any other proceeding relating to insolvency, receivership, liquidation, or assignment for the benefit of creditors; County infringes or misappropriates Seller's intellectual property; County breaches this Agreement, including failure to pay fees when due.
4.4 Effect of Termination. Termination for cause shall not relieve County of the obligation to pay any fees or other amounts accrued or payable to Seller through the date of termination. County shall not receive a credit or refund for any fees or payments made prior to termination for cause. Without prejudice to any other rights, upon termination, County must cease all use of the Services, Software, and Documentation and destroy or return (upon request by Seller) all copies of the Services, Software, and Documentation. Seller acknowledges and agrees that County will retrieve, and Seller will make available, all of County's Data, recordings, and any copies thereof from Seller within five (5) business days of the termination of this Agreement, and that
Seller shall retain none of County’s data or copies thereof.
5. FEES AND PAYMENT; TAXES.
5.1 Fees and Payment. All orders placed will be considered final upon acceptance by Seller. Fees will be due and payable as set forth on the Order Form. Unless otherwise set forth on an Order Form, fees shall be as set forth in Section 5.2 below. If County fails to pay, Seller shall be entitled, at its sole discretion, to: (i) suspend provision of the Services until County fulfills County's pending obligations; (ii) charge County the legal late charge applicable to government entities; and/or (iii) terminate this Agreement.
5.2 The Software is licensed at $200/month per court entity code, covering up to four judges. Each additional judge within the court is $50/month. One judge serving multiple court entities is assessed once at $200/month. Payments can be arranged monthly, quarterly, or annually, with a 12-month commitment.
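The fee schedule in Section 5.2 reduces to simple arithmetic. The following sketch is illustrative only and is not contract language; the function name and the treatment of billing cycles are assumptions made for the example:

```python
def monthly_fee(judges_per_court, rate_per_court=200, rate_per_extra_judge=50,
                included_judges=4):
    """Illustrative reading of the Section 5.2 fee schedule.

    judges_per_court: one entry per court entity code, giving the number of
    judges in that court. Per Section 5.2, a judge serving multiple court
    entities should be counted only once across the list.
    """
    total = 0
    for judges in judges_per_court:
        total += rate_per_court  # $200/month per court entity code
        # $50/month for each judge beyond the four included in the base rate
        total += max(0, judges - included_judges) * rate_per_extra_judge
    return total

# Two courts: one with 3 judges, one with 6 (2 beyond the included 4):
# 200 + (200 + 2 * 50) = 500
print(monthly_fee([3, 6]))  # prints 500
```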
5.3 Taxes. Miami County is a tax exempt entity.
6. DATA; PROTECTION OF COUNTY’S DATA.
6.1 County’s Data. Seller and its Affiliates may remove County’s Data or any other data, information, or content of data or files used, stored, processed, or otherwise handled by County or County’s Users that Seller, in its sole discretion, believes to be or is a Virus.
6.2 Protection of County’s Data. Each party shall comply with its respective obligations under applicable data protection laws. Each party shall maintain appropriate administrative, physical, technical, and organizational measures that ensure an appropriate level of security for Confidential Information and Personal Data. County is responsible for ensuring that the security of the Services is appropriate for County’s intended use and the storage, hosting, or processing of Personal Data, and Seller shall fully disclose all techniques, software, hardware, and physical facilities with which it ensures the security of County’s data.
7. CONFIDENTIAL INFORMATION.
As used in this Agreement, Confidential Information means any nonpublic information or materials disclosed by either party to the other party, either directly or indirectly, in writing, orally, or by inspection of tangible objects that the disclosing party clearly identifies as confidential or proprietary. For clarity, Confidential Information includes Personal Data, and Seller’s Confidential Information includes the Services, Software, and any information or materials relating to the Services, Software (including pricing), or otherwise. Confidential Information may also include confidential or proprietary information disclosed to a disclosing party by a third party.
The receiving party will: (i) hold the disclosing party’s Confidential Information in
confidence and use reasonable care to protect the same; (ii) restrict disclosure of such Confidential Information to those employees or agents with a need to know such information and who are under a duty of confidentiality respecting the protection of Confidential Information substantially similar to those of this Agreement; and (iii) use Confidential Information only for the purposes for which it was disclosed, unless otherwise set forth herein. The restrictions will not apply to Confidential Information, excluding Personal Data, to the extent it (i) is (or through no fault of the recipient, has become) generally available to the public; (ii) was lawfully received by the receiving party from a third party without such restrictions; (iii) was known to the receiving party without such restrictions prior to receipt from the disclosing party; or (iv) was independently developed by the receiving party without breach of this Agreement or access to or use of the Confidential Information.
The recipient may disclose Confidential Information to the extent the disclosure is required by law, regulation, or judicial order, provided that the receiving party will provide to the disclosing party prompt notice, where permitted, of such order and will take reasonable steps to contest or limit the scope of any required disclosure. The parties agree that any material breach of Section 2 or this Section 7 will cause irreparable injury and that injunctive relief in a court of competent jurisdiction will be appropriate to prevent an initial or continuing breach of these Sections, in addition to any other relief to which the applicable party may be entitled. Disclosure of County’s data by Seller may result in criminal charges against Seller, its agents, or employees, and civil liability to injured citizens whose legal and civil rights may be violated by the release of County’s data.
8. WARRANTY.
Seller warrants that its products and services are fit for the purposes for which they are intended.
9. SUPPORT.
9.1 If applicable to County, Seller shall, during the Term, provide County with Support in accordance with the applicable support terms and conditions. County agrees to: (i) promptly contact Seller with all problems with the Services or Software; and (ii) cooperate with and provide Seller with all relevant information and implement any corrective procedures that Seller requires to provide Support. Seller will have no obligation to provide Support for problems caused by or arising out of the following: (i) modifications or changes to the Software or Services; (ii) use of the Software or Services not in accordance with the Agreement or Documentation; or (iii) third-party products that are not authorized in the Documentation or, for authorized third-party products in the Documentation, problems arising solely from such third-party products.
9.2 Support is provided to the County in agreement with Seller and is available Monday through Friday, 7:30 a.m. until 5:00 p.m., on all government workdays. Emergency support is available twenty-four hours a day, seven days a week.
9.3 Software upgrades and improvements are included in the subscription. County will be upgraded automatically to the latest version of the platform.
10. GENERAL.
10.1 Notices. All notices to Seller must be in writing and shall be mailed by registered or certified mail to StepMobile, LLC, PO Box 3586, Mansfield, OH 44907 or sent via email to firstname.lastname@example.org (with evidence of effective transmission). All notices to County must be in writing and shall be mailed by registered or certified mail to Common Pleas Court Administrator, Miami County Common Pleas Court, Safety Building, 201 West Main Street, Troy, Ohio 45373.
10.2 Entire Agreement. This Agreement constitutes the entire agreement between the parties relating to the Services, Software, and Documentation provided hereunder and supersedes all prior or contemporaneous communications, agreements, and understandings, written or oral, with respect to the subject matter hereof. If other Seller terms or conditions conflict with this Agreement, this Agreement shall prevail and control with respect to the Services, Software, and Documentation provided hereunder. In addition, any and all additional or conflicting terms provided by County or Seller, whether in a purchase order, an alternative license, or otherwise, shall be void and shall have no effect unless it has been previously agreed upon in writing by both parties.
10.3 Export Control Laws. The Services, Software, and Documentation delivered to County under this Agreement may be subject to export control laws and regulations and may also be subject to the import and export laws of the jurisdiction in which they are accessed, used, or obtained; if outside those jurisdictions, County shall abide by all export control laws, rules, and regulations applicable to the Services, Software, and Documentation. County agrees that County is not located in, is not under the control of, and is not a resident of any country, person, or entity prohibited from receiving the Services, Software, or Documentation due to export restrictions, and that County will not export, re-export, transfer, or permit the use of the Services, Software, or Documentation, in whole or in part, to or in any of such countries or to any of such persons or entities.
10.4 Modifications. Unless as otherwise set forth herein, this Agreement shall not be amended or modified by County or Seller except in writing signed by authorized representatives of each party.
10.5 Severability. If any provision of this Agreement is held to be unenforceable, illegal, or void, that shall not affect the enforceability of the remaining provisions.
10.6 Waiver. The delay or failure of either party to exercise any right provided in this Agreement shall not be deemed a waiver of that right.
10.7 Force Majeure. Seller will not be liable for any delay or failure to perform obligations under this Agreement due to any cause beyond its reasonable control, including acts of God; labor disputes; industrial disturbances; systematic electrical,
telecommunications or other utility failures; earthquakes, storms, or other elements of nature; blockages; embargoes; riots; acts or orders of government; acts of terrorism; and war.
10.8 Construction. Paragraph headings are for convenience and shall have no effect on interpretation.
10.9 Governing Law. This Agreement shall be governed by the laws of the State of Ohio and of the United States, without regard to any conflict of law provisions, except that the United Nations Convention on the International Sale of Goods and the provisions of the Uniform Computer Information Transactions Act shall not apply to this Agreement. Both parties hereby consent to jurisdiction of the state and federal courts of Ohio. If this Agreement is translated into a language other than English and there are conflicts between the translations of this Agreement, both parties agree that the English version of this Agreement shall prevail and control.
10.10 Third Party Rights. Other than as expressly provided herein, this Agreement does not create any rights for any person who is not a party to it, and no person not a party to this Agreement may enforce any of its terms or rely on an exclusion or limitation contained in it.
10.11 Relationship of the Parties. The parties are not legally related. This Agreement does not create a partnership, franchise, joint venture, agency, fiduciary, or employment relationship between the parties.
11. ACCEPTANCE OF AGREEMENT.
IN WITNESS WHEREOF, the parties hereto have executed this Agreement on the 23rd day of March, 2023.
Board of Miami County Commissioners
[Signature]
County Commissioner 3/23/2023
[Signature]
County Commissioner 3/23/2023
[Signature]
County Commissioner 3/23/2023
StopMobile, LLC
(Seller's Legally Authorized Contracting Agent)
[Signature]
Date 02-27-2023
Approved As To Form Only
By: [Signature]
Miami County Prosecutor's Office
Rev. January 24, 2023
Isospin Properties of Nuclear Pair Correlations from the Level Structure of the Self-Conjugate Nucleus $^{88}$Ru
B. Cederwall, X. Liu, Ö. Aktas, A. Ertoprak, W. Zhang, C. Qi, E. Clément, G. de France, D. Ralet, A. Gadea, A. Goadsby, G. Jaworski, I. Kuti, B. M. Nyakó, J. Nyberg, M. Palacz, R. Wadsworth, J. J. Valiente-Dobón, H. Al-Azri, A. Atac Nyberg, T. Bäck, G. de Angelis, M. Doncel, J. Dudouit, A. Gottardo, M. Jurado, J. Ljungvall, D. Mengoni, D. R. Napoli, C. M. Petrache, D. Sohler, J. Timár, D. Barrientos, P. Bednarczyk, G. Benzoni, B. Birkenbach, A. J. Boston, H. C. Boston, I. Burrows, L. Charles, M. Ciemala, F. C. L. Crespi, D. M. Cullen, P. Désesquelles, C. Domingo-Pardo, J. Eberth, N. Erdurman, S. Ertürk, V. González, J. Goupil, H. Hess, T. Huyuk, A. Jungclaus, W. Korten, A. Lemasson, S. Leoni, A. Maj, R. Menegazzo, B. Million, R. M. Perez-Vidal, Zs. Podolyák, A. Pullia, F. Recchia, P. Reiter, F. Saillant, M. D. Salsac, E. Sanchis, J. Simpson, O. Stęszewski, Ch. Theisen, and M. Zieliński
1KTH Royal Institute of Technology, 10691 Stockholm, Sweden
2Department of Physics, Faculty of Science, Istanbul University, Vezneciler/Fatih, 34134 Istanbul, Turkey
3Grand Accélérateur National d’Ions Lourds (GANIL), CEA/DSM—CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5, France
4Centre de Sciences Nucléaires et Sciences de la Matière, CNRS/IN2P3, Université Paris-Saclay, 91405 Orsay, France
5Instituto de Física Corpuscular, CSIC-Universidad de Valencia, E-46980 Valencia, Spain
6Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro, I-35020 Legnaro, Italy
7Heavy Ion Laboratory, University of Warsaw, ul. Pasteura 5A, 02-093 Warszawa, Poland
8MTA Atomki, H-4001 Debrecen, Hungary
9Department of Physics and Astronomy, Uppsala University, SE-75121 Uppsala, Sweden
10Department of Physics, University of York, Heslington, York, YO10 5DD, United Kingdom
11Rustaq College of Education, Department of Science, 529 Al-Rustaq, Sultanate of Oman
12Department of Physics, Oliver Lodge Laboratory, University of Liverpool, Liverpool L69 7ZE, United Kingdom
13Université Lyon 1, CNRS/IN2P3, IPN-Lyon, F-69622, Villeurbanne, France
14CERN, CH-1211 Geneva 23, Switzerland
15The Henryk Niewodniczański Institute of Nuclear Physics, Polish Academy of Sciences, ul. Radzikowskiego 152, 31-342 Kraków, Poland
16INFN Sezione di Milano, I-20133 Milano, Italy
17Institut für Kernphysik, Universität zu Köln, Zülpicher Str. 77, D-50937 Köln, Germany
18Oliver Lodge Laboratory, The University of Liverpool, Liverpool, L69 7ZE, United Kingdom
19STFC Daresbury Laboratory, Daresbury, Warrington, WA4 4AD, United Kingdom
20IPHC, UNISTRA, CNRS, 23 rue du Loess, 67200 Strasbourg, France
21University of Milano, Department of Physics, I-20133 Milano, Italy
22INFN Milano, I-20133 Milano, Italy
23Nuclear Physics Group, Schuster Laboratory, University of Manchester, Manchester, M13 9PL, United Kingdom
24Centre de Sciences Nucléaires et Sciences de la Matière, CNRS/IN2P3, Université Paris-Saclay, 91405 Orsay, France
25CNRS-IN2P3, Université Paris-Saclay, Bat 104, F-91405 Orsay Campus, France
26Instituto de Física Corpuscular, CSIC-Universidad de Valencia, E-46071 Valencia, Spain
27Faculty of Engineering and Natural Sciences, Istanbul Sabahattin Zaim University, 34303, Istanbul, Turkey
28Department of Physics, University of Niğde, 51240 Niğde, Turkey
29Departamento de Ingeniería Electrónica, Universidad de Valencia, 46100 Burjassot, Valencia, Spain
30Instituto de Estructura de la Materia, CSIC, Madrid, E-28006 Madrid, Spain
31Irfu, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France
32INFN Padova, I-35131 Padova, Italy
33Department of Physics, University of Surrey, Guildford, GU2 7XH, United Kingdom
34Dipartimento di Fisica e Astronomia dell’Università di Padova and INFN Padova, I-35131 Padova, Italy
35Université Lyon 1, CNRS/IN2P3, IPN-Lyon, F-69622, Villeurbanne, France
(Received 11 July 2019; revised manuscript received 27 August 2019; accepted 18 December 2019; published 12 February 2020)
The low-lying energy spectrum of the extremely neutron-deficient self-conjugate ($N = Z$) nuclide $^{88}_{44}\text{Ru}_{44}$ has been measured using the combination of the Advanced Gamma Tracking Array (AGATA) spectrometer, the NEDA and Neutron Wall neutron detector arrays, and the DIAMANT charged particle detector array. Excited states in $^{88}\text{Ru}$ were populated via the $^{54}\text{Fe}(^{36}\text{Ar}, 2n\gamma)^{88}\text{Ru}$ fusion-evaporation reaction at the Grand Accélérateur National d’Ions Lourds (GANIL) accelerator complex. The observed $\gamma$-ray cascade is assigned to $^{88}\text{Ru}$ using clean prompt $\gamma$-$\gamma$-two-neutron coincidences in anticoincidence with the detection of charged particles, confirming and extending the previously assigned sequence of low-lying excited states. It is consistent with a moderately deformed rotating system exhibiting a band crossing at a rotational frequency that is significantly higher than standard theoretical predictions with isovector pairing, as well as observations in neighboring $N > Z$ nuclides. The direct observation of such a “delayed” rotational alignment in a deformed $N = Z$ nucleus is in agreement with theoretical predictions related to the presence of strong isoscalar neutron-proton pair correlations.
DOI: 10.1103/PhysRevLett.124.062501
Introduction.—Nucleonic pair correlations play an important role for the structure of atomic nuclei as well as for their masses. Some of the most well-known manifestations of the pairing effect in nuclei, which has strong similarities with superconductivity and superfluidity in condensed matter physics [Bardeen-Cooper-Schrieffer (BCS) theory [1,2]], are the odd-even staggering of nuclear masses [3], seniority symmetry [4–6] in the low-lying spectra of spherical even-even nuclei, and the reduced moments of inertia and backbending effect [7,8] in rotating deformed nuclei. Atomic nuclei, which are formed by the unique coexistence of two distinct fermionic systems (neutrons and protons), may also exhibit additional pairing phenomena not found elsewhere in nature. In nuclei with equal neutron and proton numbers ($N = Z$) enhanced correlations arise between neutrons and protons that occupy orbitals with the same quantum numbers. Such correlations have been predicted to favor a new type of nuclear superfluidity, termed isoscalar neutron-proton ($np$) pairing [9–12]. In addition to the normal isovector ($T = 1$) pairing mode based on like-particle neutron-neutron ($nn$) and proton-proton ($pp$) Cooper pairs that have their spin vectors antialigned and occupy time-reversed orbits, neutrons and protons may here also form $np$ $T = 1$, $I = 0$ pairs. Of special interest is the long-standing question of the possible presence of a $np$ pairing condensate [9–15] predicted to be built primarily from isoscalar $T = 0$, $I > 0$ $np$ pair correlations that still eludes experimental verification. The occurrence of a significant component of $T = 0$ correlated $np$ pairs in the nuclear wave function is also likely to have other interesting implications, e.g., the proposed “isoscalar spin-aligned $np$ coupling scheme” in the heaviest, spherical, $N = Z$ nuclei [16].
Despite vigorous activity over the last decade or so, the fundamental questions concerning the basic building blocks and fingerprints of $np$ pairing are still a matter of considerable debate. Even though there has until now been no substantial evidence for the need to include isoscalar, $T = 0$, $np$ pairing to explain the known properties of low- or high-spin states in even-even $N = Z$ nuclei, the available data for the heavier $N = Z$ nuclei are very limited due to experimental difficulties: no accurate information on masses for $N = Z$ nuclei above $A \approx 80$ is currently known, shape coexistence effects have muddled the analysis of rotational patterns of deformed $N = Z$ nuclei in the mass $A \sim 70$ region, and $np$ transfer reaction studies on the lighter $N = Z$ nuclei suffer from the complexity in the interpretation of the experimental results. Furthermore, correlations of this type are enhanced in heavier nuclei, where more particles in high-$j$ shells can participate. Many theoretical calculations suggest that the best place to look for evidence of an isoscalar pairing condensate is in nuclei with $A \sim 80$; for a recent review, see Ref. [17]. Calculations using isospin-generalized BCS equations and the Hartree-Fock-Bogoliubov (HFB) equation including $pp$, $nn$, $np$ ($T = 1$), and $np$ ($T = 0$) Cooper pairs indicated that there may exist a second-order quantum phase transition in the ground states of $N = Z$ nuclei from $T = 1$ pairing below mass 80 to a predominantly $T = 0$ pairing phase above mass 90, with the intermediate mass 80–90 region showing a coexistence of $T = 0$ and $T = 1$ pairing modes [18]. There are even predictions for a dominantly $T = 0$ ground-state pairing condensate in $N \sim Z$ nuclei around mass 130 [19] (although such exotic nuclei are currently not experimentally accessible).
The interplay between rotation and the like-particle pairing interaction has been studied in great detail in deformed nuclei where, normally, the neutron and proton Fermi levels are situated in different (sub-) shells; and hence the neutrons and protons can be considered to form separate Fermi liquids dominated by $T = 1$ pair correlations. However, the isoscalar, $T = 0$, $np$ coupling has the interesting property of being less affected by the Coriolis interaction in a rotating system, which tends to break the time-reversed pairs with $T = 1$. Therefore, the presence of a $np$ pairing condensate may reveal itself in the rotational states of deformed $N = Z$ nuclei where one might expect that the $T = 0$ pairing correlations are active while
the normal isovector pairing mode is suppressed by the Coriolis antipairing effect [20]. Calculations within the isospin-generalized HFB framework indeed also suggested such a mixed $T = 1/T = 0$ pairing phase with a transition from $T = 1$ to $T = 0$ dominance as a function of increasing angular momentum [21]. Hence, medium- to high-spin states of rotating $N = Z$ nuclei appear to be among the best places to search for the presence of $T = 0$ $np$ pairing, and it is important to reach the heaviest possible $N = Z$ nuclei where, however, the experimental conditions are most challenging. One of the key signatures proposed for isoscalar pairing is a significant “delay” in band crossing frequency in deformed $N = Z$ isotopes compared with their $N > Z$ neighbors, which necessitates the study of such nuclei up to angular momentum around $I = 10\hbar$ or higher [17]. Such delays have previously been observed in the deformed $N = Z$ nuclei $^{72}_{36}\text{Kr}$, $^{76}_{38}\text{Sr}$, and $^{80}_{40}\text{Zr}$, but were not considered as conclusive evidence for isoscalar $np$-pairing effects due to the possible influence of shape coexistence on the alignment frequencies [22–24]. The nuclei $^{84}_{42}\text{Mo}$ and $^{88}_{44}\text{Ru}$ also have indications of delays in the rotational alignments; however in these cases the experimental data did not reach the required rotational frequency in order to draw firm conclusions [25,26]. The nucleus $^{88}\text{Ru}$ is here of particular interest, as it is predicted to be the last deformed self-conjugate nuclear system before the $N = Z = 50$ closed shells [27]. The structure of its intermediate-to-high-spin states constitutes one of the most promising cases for discovering effects of a BCS-type of isoscalar pairing condensate. 
However, due to the large experimental difficulties in producing and selecting such exotic nuclei in sufficient quantities, excited states in $^{88}\text{Ru}$ were previously known only up to the $I^\pi = 8^+$ state [25], just where normal (isovector) paired band crossings are expected to appear in the absence of strong isoscalar pairing. In the present work the level scheme of $^{88}\text{Ru}$ has been extended to higher angular momentum states in the ground-state band, leading to a conclusive measurement of the rotational alignment frequency. The experimental difficulties have been overcome through the use of a highly efficient, state-of-the-art detector system and a prolonged experimental running period.
**Experimental details.**—Excited states in $^{88}\text{Ru}$ were populated in fusion-evaporation reactions induced by a $^{36}\text{Ar}$ beam produced by the CIME cyclotron at the Grand Accélérateur National d’Ions Lourds (GANIL), Caen, France. The $^{36}\text{Ar}$ ions were accelerated to an energy of 115 MeV and used to bombard target foils consisting of 99.9% isotopically enriched $^{54}\text{Fe}$ with areal density of 6 mg/cm$^2$, which was sufficient to stop the fusion products of interest. The beam intensity varied between 5 and 10 pnA with an average of 7 pnA during 13 days of irradiation time. Prompt $\gamma$ rays emitted in the reactions were detected by the Advanced Gamma Tracking Array (AGATA) spectrometer [28] in its early phase 1 implementation [29], consisting of 11 triple clusters of segmented HPGe detectors. Emission of light charged particles and neutrons was detected in prompt coincidence with the $\gamma$ rays by the nearly $4\pi$ solid angle charged particle detector array DIAMANT [30,31], consisting of 64 CsI(Tl) scintillators, and the neutron wall [32] and NEDA [33,34] neutron detector arrays consisting of 42 and 54 organic liquid-scintillator detectors, respectively. The trigger condition for recording events for subsequent offline analysis was that at least two of the high-purity germanium crystal core signals from the AGATA triple-cluster detectors were registered in fast coincidence with at least one neutronlike event recorded in the liquid scintillator detectors. The condition for the neutronlike events was determined by pulse-shape discrimination (PSD) via a firmware threshold set for the so-called charge comparison (CC) ratio between the charge integrated over the tail part of each liquid scintillator pulse and its total integrated charge. Similar PSD criteria made it possible to discriminate between different types of charged particles detected in the CsI(Tl) scintillators. 
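The charge-comparison PSD described above can be summarized in a few lines. The following sketch is schematic only: the pulse shapes, decay constants, and integration window are illustrative assumptions, not the experiment's actual waveforms or firmware parameters.

```python
import numpy as np

def cc_ratio(pulse, tail_start):
    """Charge-comparison ratio: charge integrated over the tail of the pulse
    divided by the total integrated charge. In organic liquid scintillators,
    neutron-induced pulses carry a larger fraction of their light in the slow
    (tail) component, so a larger ratio indicates a neutron-like event."""
    return np.sum(pulse[tail_start:]) / np.sum(pulse)

# Toy pulses: a gamma-like fast pulse and a neutron-like pulse with a
# heavier slow component (shapes are illustrative, not measured data).
t = np.arange(100.0)
gamma_pulse = np.exp(-t / 5.0)
neutron_pulse = 0.8 * np.exp(-t / 5.0) + 0.2 * np.exp(-t / 40.0)

r_gamma = cc_ratio(gamma_pulse, tail_start=20)
r_neutron = cc_ratio(neutron_pulse, tail_start=20)
# A firmware threshold on this ratio selects neutron-like events.
assert r_neutron > r_gamma
```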
The final discrimination between neutrons and $\gamma$ rays was performed off line by setting two-dimensional gates on the neutron time of flight vs the CC ratio. The rare two-neutron evaporation events were separated from events where a neutron scattered between detectors by applying simultaneous cuts on the deposited energy and time of flight as a function of the distance between detectors that fired. For the off-line charged particle selection, individual two-dimensional gates on the particle identification and energy parameters of the DIAMANT detectors enabled the identification of $\gamma$ rays as belonging to specific charged particle evaporation channels. A 50 ns wide time gate was applied to the time-aligned Ge detector timing signals in order to select prompt $\gamma$-ray emission. The $\gamma$-ray energy measurements with AGATA rely on tracking algorithms [35–39] that reconstruct trajectories of incident $\gamma$-ray photons in order to determine their energy and direction. This is achieved by disentangling the interaction points and corresponding interaction energies in the germanium crystals that are identified using pulse shape analysis of the detector signals and thereafter establishing the proper sequences of interaction points using the characteristic features of the interaction mechanisms (primarily the photoelectric effect, Compton scattering, and pair production). The energy calibration of the germanium detectors was performed using standard radioactive sources ($^{60}\text{Co}$ and $^{152}\text{Eu}$). Figure 1 shows projected spectra from the $2n$-selected $E_p - E_\gamma$ coincidence matrix obtained requiring anticoincidence with detection of any charged particle in the DIAMANT CsI(Tl) detector array. The spectrum in Fig. 1(a) was produced for events where $\gamma$ rays coincident with the 616, 800, 964, and 1100 keV transitions assigned to $^{88}\text{Ru}$ were selected. 
The background spectrum was produced by using identical energy cuts on a selection of the data requiring coincidence with two neutrons and a charged particle summed with the background spectrum obtained by shifting the energy cuts a
FIG. 1. (a) Gamma-ray energy spectrum detected in coincidence with the 616, 800, 964, and 1100 keV $\gamma$ rays, with the additional requirement that two neutrons and no charged particles were detected in coincidence. (b) Expanded part of the unsubtracted gated spectrum around the new $\gamma$-ray transitions at 1063 keV ($10^+ \rightarrow 8^+$), 1153 keV ($12^+ \rightarrow 10^+$), and 1253 keV ($14^+ \rightarrow 12^+$), drawn in red together with the background spectrum (black) used to produce the spectrum shown in (a). Gamma-ray peaks due to contaminant reactions on oxygen leading to the population of excited states in $^{49,50}$Cr and $^{49}$Mn are indicated. (c) Level scheme of $^{88}$Ru deduced from the present work. Relative intensities are proportional to the widths of the arrows.
FIG. 2. Experimental values for the kinematical moment of inertia ($J^{(1)}$) for the low-lying yrast bands of the $N = 44$ isotones $^{88}_{44}$Ru$_{44}$ (this work), $^{86}_{42}$Mo$_{44}$ [40,41], and $^{84}_{40}$Zr$_{44}$ [42]. The black dashed vertical line indicates the approximate rotational frequency of the first isovector-paired band crossing due to $g_{9/2}$ protons as predicted by standard cranked shell model calculations [43,44]. The red dotted vertical line indicates the band crossing frequency for the ground-state band in $^{88}_{44}$Ru$_{44}$ observed in this work.
constant offset of +20 keV in the two-neutron gated data requiring anticoincidence with the detection of charged particles. These transitions were previously identified as belonging to $^{88}$Ru in a study involving a different reaction: $^{58}$Ni($^{32}$S, 2n$\gamma$)$^{88}$Ru [25]. All $\gamma$ rays observed in prompt coincidence and assigned to the ground-state band of $^{88}$Ru in this work are indicated with their energies in keV.
Discussion.—Figure 2 shows values of the kinematical moment of inertia ($J^{(1)}$) for the low-lying yrast level energy bands in the $N = 44$ isotones $^{88}_{44}$Ru$_{44}$ (this work), $^{86}_{42}$Mo$_{44}$ [40,41], and $^{84}_{40}$Zr$_{44}$ [42]. The ground-state bands in the even-$Z$, $N > Z$ isotones $^{86}_{42}$Mo$_{44}$ and $^{84}_{40}$Zr$_{44}$ exhibit a variation of $J^{(1)}$ (defined as the angular momentum, $I$, divided by the rotational frequency, $\omega = dE/dI$) as a function of rotational frequency that is characteristic of a normal paired band crossing in a rotating deformed nucleus of the isovector ($T = 1$) type. The band crossing frequency is $\hbar\omega_c \approx 0.47$ MeV in both cases (indicated by the black vertical dashed line in Fig. 2). For the $N = Z$ nucleus $^{88}_{44}$Ru$_{44}$ the increase in $J^{(1)}$ also resembles a paired band crossing, albeit at a significantly higher rotational frequency, $\hbar\omega_c \approx 0.54$ MeV, indicated by the red vertical dotted line in Fig. 2.
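The quantities plotted in Fig. 2 follow directly from the $\gamma$-ray energies in the level scheme of Fig. 1(c). As a minimal sketch, using the standard rotational-model relations $\hbar\omega \approx E_\gamma/2$ and $J^{(1)} = \hbar^2(2I - 1)/E_\gamma$ for a stretched $I \rightarrow I-2$ transition (the energies below are the transition energies reported in this work):

```python
# Ground-state-band transitions in 88Ru, I -> I-2, energies in keV (this work).
transitions = {2: 616, 4: 800, 6: 964, 8: 1100, 10: 1063, 12: 1153, 14: 1253}

for spin in sorted(transitions):
    e_gamma = transitions[spin] / 1000.0  # MeV
    hbar_omega = e_gamma / 2.0            # rotational frequency (MeV)
    j1 = (2 * spin - 1) / e_gamma         # J(1) in units of hbar^2 / MeV
    print(f"I={spin:2d}  hbar*omega={hbar_omega:.3f} MeV  J(1)={j1:.1f} hbar^2/MeV")

# The drop in transition energy from 1100 keV (8+ -> 6+) to 1063 keV
# (10+ -> 8+) produces the upbend in J(1) at hbar*omega ~ 0.53 MeV,
# consistent with the delayed band crossing at ~0.54 MeV discussed above.
```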
Theoretical predictions of the rotational response of excited states and the associated spin alignment can be provided by cranked shell model calculations [45], which predict the first proton two-quasiparticle $(\pi g_{9/2})^2$ alignment to occur at $\hbar\omega_c \approx 0.45$ MeV followed closely by a neutron $(\nu g_{9/2})^2$ alignment [43,44]. Mountford et al. have demonstrated that the first alignment in $^{84}$Zr is due to $g_{9/2}$ protons by means of a transient-field $g$-factor measurement [46]. The slopes of the $J^{(1)}$ curves around the crossing point also exhibit an expected variation, reflecting the change in interaction strength between the ground-state band and the broken-pair $S$ band as the proton Fermi level changes within the $g_{9/2}$ subshell. The large delay in band crossing frequency for $^{88}_{44}$Ru$_{44}$ compared with its closest $N = 44$ isotones cannot readily be explained using standard mean field models.
Developments of computational methods in recent years enable shell model calculations to be performed with large model spaces, providing nuclear structure predictions for medium-mass nuclei away from closed shells. Large-scale shell-model (LSSM) calculations with an isospin-conserving Hamiltonian are also the method of choice.
for theoretical investigations of the isospin dependence of nucleonic pair correlations [17]. In Ref. [26], projected shell model calculations following the approach of Ref. [47] predicted a delay in the band crossing frequency in the $N = Z$ nuclei $^{84}_{42}\text{Mo}_{42}$ and $^{88}_{44}\text{Ru}_{44}$ as an effect of enhanced neutron-proton interactions. Kaneko et al. [48] employed LSSM calculations using a “pairing-plus-multipole” Hamiltonian [49] in the $(p_{1/2}, p_{3/2}, f_{5/2}, g_{9/2}, d_{5/2})$ (often denoted as $fpgd$) model space for studying $^{88}_{44}\text{Ru}_{44}$, $^{90}_{44}\text{Ru}_{46}$, and $^{92}_{44}\text{Ru}_{48}$ and concluded that $T = 0$ $np$ pairing is responsible for the distinct difference in rotational behavior between the $N = Z$ and $N > Z$ nuclei. These calculations also predicted a significant delay in the band crossing frequency for $N = Z$ nuclei, and their prediction for the $J^{(1)}$ moment of inertia of $^{88}_{44}\text{Ru}_{44}$ revealed a sharp irregularity at a rotational frequency $\hbar\omega_c \approx 0.65$ MeV [48]. We therefore conclude that the delayed alignment of $g_{9/2}$ protons observed in the ground-state band of $^{88}\text{Ru}$ in the present work is likely not in agreement with the response of a deformed rotating nucleus in the presence of a normal isovector pairing field and that isoscalar pairing components may be active in this self-conjugate nucleus.
**Summary.**—In summary, new $\gamma$-ray transitions in the self-conjugate nuclide $^{88}_{44}\text{Ru}_{44}$ have been identified, extending the previously reported level structure. The observed ground-state band exhibits a band crossing that is significantly delayed compared with the expected behavior of a rotating deformed nucleus in the presence of a normal isovector ($T = 1$) pairing field. The observation is in agreement with theoretical predictions for the presence of isoscalar neutron-proton pairing in the low-lying structure of $^{88}$Ru.
This work was supported by the Swedish Research Council under Grant No. 621-2014-5558 and the EU 7th Framework Programme, Integrating Activities Transnational Access, Grant No. 262010 ENSAR; the United Kingdom STFC under Grants No. ST/L005727/1 and No. ST/P003885/1; the Polish National Science Centre, Grants No. 2017/25/B/ST2/01569, No. 2016/22/M/ST2/00269, and No. 2014/14/M/ST2/00738 (COPIN-INFN collaboration; COPIN-IN2P3 and COPIGAL projects); the National Research Development and Innovation Fund of Hungary (Grant No. K128947); the European Regional Development Fund (Contract No. GINOP-2.3.3-15-2016-00034); the Hungarian National Research, Development and Innovation Office, Grant No. PD124717; the Ministry of Science, Spain, under Grants No. SEV-2014-0398 and No. FPA2017-84756-C4; and by the EU FEDER funds. X. L. gratefully acknowledges support from the China Scholarship Council, Grant No. 201700260183, for his stay in Sweden. We thank the GANIL staff for excellent technical support and operation.
*Corresponding author, firstname.lastname@example.org
[1] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. **106**, 162 (1957).
[2] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. **108**, 1175 (1957).
[3] W. Heisenberg, Z. Phys. **78**, 156 (1932).
[4] A. de Shalit and I. Talmi, *Nuclear Shell Theory* (Academic Press, New York, 1963).
[5] I. Talmi, *Simple Models of Complex Nuclei* (Harwood Academic Press, Switzerland, 1993).
[6] D. J. Rowe and G. Rosensteel, Phys. Rev. Lett. **87**, 172501 (2001).
[7] A. Johnson, H. Ryde, and J. Sztarkier, Phys. Lett. **34B**, 605 (1971).
[8] F. Stephens and R. Simon, Nucl. Phys. **A183**, 257 (1972).
[9] J. Engel, K. Langanke, and P. Vogel, Phys. Lett. B **389**, 211 (1996).
[10] J. Engel, S. Pittel, M. Stoitsov, P. Vogel, and J. Dukelsky, Phys. Rev. C **55**, 1781 (1997).
[11] O. Civitarese, M. Reboiro, and P. Vogel, Phys. Rev. C **56**, 1840 (1997).
[12] A. Goodman, Adv. Nucl. Phys. **11**, 263 (1979).
[13] W. Satula and R. Wyss, Phys. Rev. Lett. **87**, 052504 (2001).
[14] G. Martinez-Pinedo, K. Langanke, and P. Vogel, Nucl. Phys. **A651**, 379 (1999).
[15] D. Warner, M. A. Bentley, and P. Van Isacker, Nat. Phys. **2**, 311 (2006).
[16] B. Cederwall et al., Nature (London) **469**, 68 (2011).
[17] S. Frauendorf and A. Macchiavelli, Prog. Part. Nucl. Phys. **78**, 24 (2014).
[18] A. L. Goodman, Phys. Rev. C **60**, 014311 (1999).
[19] A. Gezerlis, G. F. Bertsch, and Y. L. Luo, Phys. Rev. Lett. **106**, 252502 (2011).
[20] W. Satula and R. Wyss, Phys. Lett. B **393**, 1 (1997).
[21] A. L. Goodman, Phys. Rev. C **63**, 044325 (2001).
[22] G. de Angelis et al., Phys. Lett. B **415**, 217 (1997).
[23] S. Fischer et al., Phys. Rev. Lett. **87**, 132501 (2001).
[24] P. Davies et al., Phys. Rev. C **75**, 011302(R) (2007).
[25] N. Marginean et al., Phys. Rev. C **63**, 031303(R) (2001).
[26] N. Marginean et al., Phys. Rev. C **65**, 051303(R) (2002).
[27] P. Möller, J. R. Nix, W. D. Myers, and W. J. Swiatecki, At. Data Nucl. Data Tables **59**, 185 (1995).
[28] S. Akkoyun et al., Nucl. Instrum. Methods Phys. Res., Sect. A **668**, 26 (2012).
[29] E. Clément et al., Nucl. Instrum. Methods Phys. Res., Sect. A **855**, 1 (2017).
[30] J. Scheurer et al., Nucl. Instrum. Methods Phys. Res., Sect. A **385**, 501 (1997).
[31] J. Gál et al., Nucl. Instrum. Methods Phys. Res., Sect. A **516**, 502 (2004).
[32] O. Skeppstedt et al., Nucl. Instrum. Methods Phys. Res., Sect. A **421**, 531 (1999).
[33] T. Huyuk et al., Eur. Phys. J. A **52**, 55 (2016).
[34] J. J. Valiente-Dobón et al., Nucl. Instrum. Methods Phys. Res., Sect. A **927**, 81 (2018).
[35] M. A. Deleplanque, I. Y. Lee, K. Vetter, G. J. Schmid, F. S. Stephens, R. M. Clark, R. M. Diamond, P. Fallon, and A. O. Macchiavelli, Nucl. Instrum. Methods Phys. Res., Sect. A **430**, 292 (1999).
[36] J. van der Marel and B. Cederwall, Nucl. Instrum. Methods Phys. Res., Sect. A **437**, 538 (1999).
[37] I. Y. Lee, M. A. Deleplanque, and K. Vetter, Rep. Prog. Phys. **66**, 1095 (2003).
[38] A. Lopez-Martens, K. Hauschild, A. Korichi, J. Roccaz, and J.-P. Thibaud, Nucl. Instrum. Methods Phys. Res., Sect. A **533**, 454 (2004).
[39] D. Bazzacco, Nucl. Phys. **A746**, 248 (2004).
[40] D. Rudolph et al., Phys. Rev. C **54**, 117 (1996).
[41] K. Andgren et al., Phys. Rev. C **76**, 014307 (2007).
[42] H. Price, C. J. Lister, B. J. Varley, W. Gelletly, and J. W. Olness, Phys. Rev. Lett. **51**, 1842 (1983).
[43] K. Jonsson et al., Nucl. Phys. **A645**, 47 (1999).
[44] J. Dudek, W. Nazarewicz, and N. Rowley, Phys. Rev. C **35**, 1489 (1987).
[45] R. Bengtsson and S. Frauendorf, Phys. Lett. B **255**, 174 (1991).
[46] A. Mountford, J. Billowes, W. Gelletly, H. G. Price, and D. D. Warner, Phys. Lett. B **279**, 228 (1992).
[47] Y. Sun and J. Sheikh, Phys. Rev. C **64**, 031302 (2001).
[48] K. Kaneko, Y. Sun, and G. de Angelis, Nucl. Phys. **A957**, 144 (2017).
[49] K. Kaneko, T. Mizusaki, and S. Tazaki, Phys. Rev. C **89**, 011302(R) (2014).
Teaching Hospitals, Teaching Physicians and Medical Residents: CMS Flexibilities to Fight COVID-19
Since the beginning of the COVID-19 Public Health Emergency, the Trump Administration has issued an unprecedented array of temporary regulatory waivers and new rules to equip the American healthcare system with maximum flexibility to respond to the 2019 Novel Coronavirus (COVID-19) pandemic. These temporary changes will apply immediately across the entire U.S. healthcare system for the duration of the emergency declaration. The goals of these actions are to 1) expand the healthcare system workforce by removing barriers for physicians, nurses, and other clinicians to be readily hired from the community or from other states; 2) ensure that local hospitals and health systems have the capacity to handle a potential surge of COVID-19 patients through temporary expansion sites (also known as CMS Hospital Without Walls); 3) increase access to telehealth in Medicare to ensure patients have access to physicians and other clinicians while keeping patients safe at home; 4) expand in-place testing to allow for more testing at home or in community based settings; and 5) put Patients Over Paperwork to give temporary relief from many paperwork, reporting and audit requirements so providers, health care facilities, Medicare Advantage and Part D plans, and States can focus on providing needed care to Medicare and Medicaid beneficiaries affected by COVID-19.
**Workforce**
- **Application of Teaching Physician Regulations**: Under current rules, Medicare payment is made for services furnished by a teaching physician involving residents only if the physician is physically present for the key portion of the service or procedure or the entire procedure, where applicable. During the COVID-19 PHE, teaching physicians may use audio/video real time communications technology to interact with the resident through virtual means, which would meet the requirement that they be present for the key portion of the service, including when the teaching physician involves the resident in furnishing Medicare Telehealth services. Teaching physicians involving residents in providing care at primary care centers can provide the necessary direction, management and review for the resident’s services using audio/video real time communications technology. Residents furnishing services at primary care centers may furnish an expanded set of services to beneficiaries, including levels 4-5 of an office/outpatient evaluation and management (E/M) visit, telephone E/M, care management, and communication technology-based services. These flexibilities do not apply in the case of surgical, high risk, interventional, or other complex procedures, services performed through an endoscope, and anesthesia services. This allows teaching hospitals to maximize their workforce to safely take care of patients.
- **Resident Moonlighting**: Under current rules, Medicare considers the services of residents that are not related to their approved graduate medical education programs and performed in the outpatient department or the emergency department of a hospital as separately billable physicians’ services. During the COVID-19 PHE, Medicare
also considers the services of residents that are not related to their approved GME programs and furnished to inpatients of a hospital in which they have their training program as separately billable physicians’ services.
- **Counting of Resident Time at Alternate Locations**: Existing regulations have specific rules on when a hospital may count a resident for purposes of Medicare direct graduate medical education (DGME) payments or indirect medical education (IME) payments. Normally, if the resident is performing activities within the scope of his/her approved program in his/her own home, or a patient’s home, the hospital may not count the resident. During the PHE, a hospital that is paying the resident’s salary and fringe benefits for the time that the resident is at home or in a patient’s home, but performing duties within the scope of the approved residency program and meeting appropriate physician supervision requirements, can claim that resident for IME and DGME purposes. This allows medical residents to perform their duties in alternate locations, including their own home or a patient’s home, so long as such activities meet appropriate physician supervision requirements.
- **Graduate Medical Education (GME) Residents Training in Other Hospitals**: During the COVID-19 PHE, a teaching hospital that sends residents to other hospitals will be able to continue to claim those residents in the teaching hospital’s IME and DGME FTE resident counts, if certain requirements are met. Those requirements include that 1) the teaching hospital sends the resident to the other hospital in response to the COVID-19 pandemic; 2) the time spent by the resident training at the other hospital is in lieu of time that would have been spent training at the sending hospital; and 3) the time that the resident spent training immediately prior to and/or subsequent to the time frame that the COVID-19 PHE was in effect was included in the FTE count for the sending hospital. Moreover, the presence of residents in non-teaching hospitals will not trigger establishment of IME and/or DGME FTE resident caps at those non-teaching hospitals. Specifically, for DGME, the presence of residents in non-teaching hospitals will not trigger establishment of per resident amounts (PRAs) at those non-teaching hospitals.
- **IME Payments Held Harmless for Temporary Increase in Beds**: During the COVID-19 PHE, CMS will hold teaching hospitals harmless from a reduction in IME payments due to beds temporarily added during the COVID-19 PHE by not considering such beds when determining IME payments.
- **Inpatient Psychiatric Facilities (IPFs) Teaching Status Adjustment Payments**: To ensure that teaching IPFs can alleviate bed capacity issues by taking patients from the inpatient acute care hospitals without being penalized by lower teaching status adjustments, we are freezing the IPFs’ teaching status adjustment payments at their values prior to the PHE. For the duration of the COVID-19 PHE, a teaching IPF’s teaching status adjustment payments will be the same as they were on the day before the COVID-19 PHE was declared.
- **Sterile Compounding**: CMS is waiving hospital sterile compounding requirements to allow used face masks to be removed and retained in the compounding area to be re-donned and reused during the same work shift in the compounding area only. This will conserve scarce face mask supplies. CMS will not be reviewing the use and storage of facemasks under these requirements.
- **Medical Staff Requirements**: CMS is waiving the Medical Staff requirements at 42 CFR §482.22(a)(1)-(4) to allow for physicians whose privileges will expire to continue practicing at the hospital and for new physicians to be able to practice in the hospital before full medical staff/governing body review and approval to address workforce concerns related to COVID-19.
- **Physician services**: CMS is waiving 42 CFR §482.12(c)(1)-(2) and §482.12(c)(4), which require that Medicare patients be under the care of a physician, and that a physician be on call at all times. This allows hospitals to use other practitioners, such as physician assistants and nurse practitioners, to the fullest extent possible. This waiver should be implemented so long as it is not inconsistent with a State or pandemic/emergency plan.
- **Anesthesia services**. CMS is waiving the requirements at 42 CFR 482.52(a)(5), 42 CFR 485.639(c)(2), and 42 CFR 416.42(b)(2) that a certified registered nurse anesthetist (CRNA) be under the supervision of a physician. CRNA supervision will be at the discretion of the hospital or Ambulatory Surgical Center (ASC), and state law. This waiver applies to hospitals, CAHs, and ASCs. These waivers will allow CRNAs to function to the fullest extent of their licensure, and should be implemented so long as they are not inconsistent with a State or pandemic/emergency plan.
- **Respiratory care services**: We are waiving the requirement at 42 CFR 482.57(b)(1) that hospitals designate in writing the personnel qualified to perform specific respiratory care procedures and the amount of supervision required for personnel to carry out specific procedures. These flexibilities should be implemented so long as they are not inconsistent with a State or pandemic/emergency plan. Not being required to designate these professionals in writing will allow qualified professionals to operate to the fullest extent of their licensure and training in providing patient care for respiratory illnesses.
- **CAH Personnel qualifications**: CMS is waiving the minimum personnel qualifications for clinical nurse specialists, nurse practitioners, and physician assistants described at 42 CFR 485.604(a)(2), 42 CFR 485.604(b)(1)-(3), and 42 CFR 485.604(c)(1)-(3). Clinical Nurse Specialists, Nurse Practitioners, and Physician Assistants will still have to meet state requirements for licensure and scope of practice, but not additional Federal requirements that may exceed State requirements. This will give States and facilities more flexibility in using clinicians in these roles to meet increased demand. These
flexibilities should be implemented so long as they are not inconsistent with a State or pandemic/emergency plan.
- **CAH staff licensure**: CMS is deferring to staff licensure, certification, or registration to State law by waiving the requirement at 42 CFR 485.608(d) that staff of the CAH be licensed, certified, or registered in accordance with applicable Federal, State, and local laws and regulations. The CAH and its staff must still be in compliance with applicable Federal, State and Local laws and regulations, and all patient care must be furnished in compliance with State and local laws and regulations. This waiver would defer all licensure, certification, and registration requirements for CAH staff to the state, which would add flexibility where Federal requirements are more stringent. These flexibilities should be implemented so long as they are not inconsistent with a State or pandemic/emergency plan.
**CMS Hospital Without Walls (Temporary Expansion Sites)**
- **Hospitals Able to Provide Inpatient Care in Temporary Expansion Sites**: As part of the CMS Hospital Without Walls initiative, hospitals can provide hospital services in other healthcare facilities and sites not currently considered to be part of a healthcare facility, or set up temporary expansion sites, to help address the urgent need to increase capacity to care for patients. Previously, hospitals were required to provide services to patients within their hospital departments, and they have shared concerns about capacity for treating patients during the COVID-19 Public Health Emergency, especially those requiring ventilator and intensive care services. CMS is providing additional flexibilities for hospitals to create surge capacity by allowing them to provide room and board, nursing, and other hospital services at remote locations or sites not considered part of a healthcare facility, such as hotels or community facilities. This flexibility will allow hospitals to separate COVID-19 positive patients from other non-COVID-19 patients to help efforts around infection control and preservation of personal protective equipment (PPE). For example, for the duration of the Public Health Emergency, CMS is allowing hospitals to screen patients at offsite locations and furnish inpatient and outpatient services at temporary expansion sites. Hospitals would still be expected to control and oversee the services provided at an alternative location.
- **Relaxing Conditions of Participation**. Under an additional initiative, CMS is relaxing certain conditions of participation (CoPs) for hospital operations to maximize hospitals’ ability to focus on patient care. The same initiative will also allow currently enrolled ambulatory surgical centers (ASCs) to temporarily enroll as hospitals and to provide hospital services to help address the urgent need to increase hospital capacity to take care of patients. Other interested entities, such as freestanding emergency departments, could pursue enrolling as an ASC and then pursue converting their enrollment to a hospital during the PHE. ASCs that wish to enroll to receive temporary billing privileges as a hospital should call the COVID-19 Provider Enrollment Hotline to reach the contractor that serves their jurisdiction, and then will complete and sign an
attestation form specific to the COVID-19 PHE. This document will be made available shortly. See https://www.cms.gov/files/document/provider-enrollment-relief-faqs-covid-19.pdf for additional information.
- **Off Site Patient Screening**: CMS is waiving the enforcement of section 1867(a) of the Social Security Act (the Emergency Medical Treatment and Labor Act, or EMTALA). This will allow hospitals, psychiatric hospitals, and critical access hospitals (CAHs) to screen patients at a location offsite from the hospital’s campus to prevent the spread of COVID-19, so long as it is not inconsistent with the state emergency preparedness or pandemic plan.
- **Paperwork Requirements**: CMS is waiving certain specific paperwork requirements only for hospitals which are considered to be impacted by a widespread outbreak of COVID-19. This allows hospitals to establish COVID-19 specific areas. Hospitals that are located in a state that has widespread confirmed cases would not be required to meet the following requirements:
- 42 CFR §482.13(d)(2) with respect to timeframes in providing a copy of a medical record.
- 42 CFR §482.13(h) related to patient visitation, including the requirement to have written policies and procedures on visitation of patients who are in COVID-19 isolation and quarantine processes.
- 42 CFR §482.13(e)(1)(ii) regarding seclusion.
- **Physical Environment**: CMS is waiving certain requirements under the conditions at 42 CFR §482.41 and §485.623 to allow for flexibilities during hospital, psychiatric hospital, and CAH surges. CMS will permit non-hospital buildings/space to be used for patient care and quarantine sites, provided that the location is approved by the State (ensuring safety and comfort for patients and staff are sufficiently addressed). This allows for increased capacity and promotes appropriate cohorting of COVID-19 patients.
- **Temporary Expansion Sites**. For the duration of the PHE related to COVID-19, CMS is waiving certain requirements under the Medicare conditions of participation at 42 CFR §482.41 and §485.623 (as noted above) and the provider-based department requirements at 42 CFR §413.65 to allow hospitals to establish and operate as part of the hospital any location meeting the conditions of participation for hospitals in operation during the PHE. This waiver also allows hospitals to change the status of their current provider-based department locations to the extent necessary to address the needs of hospital patients as part of the State or local pandemic plan. This waiver will enable hospitals to meet the needs of Medicare beneficiaries. CMS also is offering some additional flexibilities to furnish inpatient services under arrangements.
- **Critical Access Hospital Length of Stay:** CMS is waiving the Medicare requirements that Critical Access Hospitals (CAHs) limit the number of beds to 25, and that the length of stay be limited to 96 hours, under the Medicare conditions of participation regarding number of beds and length of stay at 42 CFR §485.620.
- **CAH Status and Location:** CMS is waiving the requirement at 485.610(b) that the CAH be located in a rural area or an area being treated as rural, allowing the CAHs flexibility in the establishment of surge site locations. Waiving the requirement at 485.610(e) regarding off-campus and co-location requirements allows the CAH flexibility in establishing off-site locations. In an effort to facilitate the establishment of CAHs without walls, these waivers will remove restrictions on CAHs regarding their rural location and their location relative to other hospitals and CAHs. These flexibilities should be implemented so long as they are not inconsistent with a State or pandemic/emergency plan.
- **Housing Acute Care Patients in Excluded Distinct Part Units:** CMS is waiving requirements to allow acute care hospitals to house acute care inpatients in excluded distinct part units, where the distinct part unit’s beds are appropriate for acute care inpatients. The Inpatient Prospective Payment System (IPPS) hospital should bill for the care and annotate the patient’s medical record to indicate the patient is an acute care inpatient being housed in the excluded unit because of capacity issues related to the disaster or emergency.
- **Care for Excluded Inpatient Psychiatric Unit Patients in the Acute Care Unit of a Hospital:** CMS is waiving requirements to allow acute care hospitals with excluded distinct part inpatient psychiatric units that, as a result of a disaster or emergency, need to relocate inpatients from the excluded distinct part psychiatric unit to an acute care bed and unit. The hospital should continue to bill for inpatient psychiatric services under the Inpatient Psychiatric Facility Prospective Payment System (IPF PPS) for such patients and annotate the medical record to indicate the patient is a psychiatric inpatient being cared for in an acute care bed because of capacity or other exigent circumstances related to the COVID-19 Public Health Emergency. This waiver may be utilized where the hospital’s acute care beds are appropriate for psychiatric patients and the staff and environment are conducive to safe care. For psychiatric patients, this includes assessment of the acute care bed and unit location to ensure those patients at risk of harm to self and others are safely cared for.
- **Care for Excluded Inpatient Rehabilitation Unit Patients in the Acute Care Unit of a Hospital:** CMS is waiving requirements to allow acute care hospitals with excluded distinct part inpatient rehabilitation units that, as a result of a disaster or emergency, need to relocate inpatients from the excluded distinct part rehabilitation unit to an acute care bed and unit. The hospital should continue to bill for inpatient rehabilitation services under the Inpatient Rehabilitation Facility Prospective Payment System for such
patients and annotate the medical record to indicate the patient is a rehabilitation inpatient being cared for in an acute care bed because of capacity or other exigent circumstances related to the disaster or emergency. This waiver may be utilized where the hospital’s acute care beds are appropriate for providing care to rehabilitation patients and such patients continue to receive intensive rehabilitation services.
- **Telemedicine:** CMS is waiving the provisions related to telemedicine for hospitals and CAHs at 42 CFR 482.12(a)(8)-(9) and 42 CFR 485.616(c), making it easier for telemedicine services to be furnished to the hospital's patients through an agreement with an off-site hospital. This allows for increased access to necessary care for hospital and CAH patients, including access to specialty care.
**Patients Over Paperwork**
- **“Stark Law” Waivers:** The physician self-referral law (also known as the “Stark Law”) prohibits a physician from making referrals for certain healthcare services payable by Medicare if the physician (or an immediate family member) has a financial relationship with the entity performing the service. There are statutory and regulatory exceptions, but in short, a physician cannot refer a patient to any entity with which he or she has a financial relationship. On March 30, 2020, CMS issued blanket waivers of certain provisions of the Stark Law regulations. These blanket waivers apply to financial relationships and referrals that are related to the COVID-19 emergency. The remuneration and referrals described in the blanket waivers must be solely related to COVID-19 Purposes, as defined in the blanket waiver document. Under the waivers, CMS will permit certain referrals and the submission of related claims that would otherwise violate the Stark Law. These flexibilities include:
- Hospitals and other health care providers can pay above or below fair market value for the personal services of a physician (or an immediate family member of a physician), and parties may pay below fair market value to rent equipment or purchase items or services. For example, a physician practice may be willing to rent or sell needed equipment to a hospital at a price that is below what the practice could charge another party. Or, a hospital may provide space on hospital grounds at no charge to a physician who is willing to treat patients who seek care at the hospital but are not appropriate for emergency department or inpatient care.
- Health care providers can support each other financially to ensure continuity of health care operations. For example, a physician owner of a hospital may make a personal loan to the hospital without charging interest at a fair market rate so that the hospital can make payroll or pay its vendors.
- Hospitals can provide benefits to their medical staffs, such as multiple daily meals, laundry service to launder soiled personal clothing, or child care services while the
physicians are at the hospital and engaging in activities that benefit the hospital and its patients.
- Health care providers may offer certain items and services that are solely related to COVID-19 Purposes (as defined in the waivers), even when the provision of the items or services would exceed the annual non-monetary compensation cap. For example, a home health agency may provide continuing medical education to physicians in the community on the latest care protocols for homebound patients with COVID-19, or a hospital may provide isolation shelter or meals to the family of a physician who was exposed to the novel coronavirus while working in the hospital’s emergency department.
- Physician-owned hospitals can temporarily increase the number of their licensed beds, operating rooms, and procedure rooms, even though such expansion would otherwise be prohibited under the Stark Law. For example, a physician-owned hospital may temporarily convert observation beds to inpatient beds to accommodate patient surge during the COVID-19 pandemic in the United States.
- Some of the restrictions regarding when a group practice can furnish medically necessary designated health services (DHS) in a patient’s home are loosened. For example, any physician in the group may order medically necessary DHS that is furnished to a patient by one of the group’s technicians or nurses in the patient’s home contemporaneously with a physician service that is furnished via telehealth by the physician who ordered the DHS.
- Group practices can furnish medically necessary MRIs, CT scans or clinical laboratory services from locations like mobile vans in parking lots that the group practice rents on a part-time basis.
- **Verbal Orders:** CMS is waiving the requirements of 42 CFR §482.23, §482.24 and §485.635(d)(3) to allow for additional flexibilities related to verbal orders where read-back verification is still required but authentication may occur later than 48 hours. This will allow for more efficient treatment of patients in a surge situation.
- **Reporting Requirements:** CMS is waiving the reporting requirements at 42 CFR §482.13(g)(1)(i)-(ii). Under the waiver, the death of a patient in an intensive care unit whose death was caused by the disease process, but who required soft wrist restraints to prevent pulling tubes/IVs, may be reported later than close of business on the next business day, provided that any death where the restraint may have contributed continues to be reported within standard time limits. Due to the current hospital surge, we are waiving this requirement to ensure that hospitals are focusing on increased care demands and patient care.
• **Limit Discharge Planning for Hospital and CAHs**: To allow hospitals and CAHs more time to focus on increasing care demands, discharge planning will focus on ensuring that patients are discharged to an appropriate setting with the necessary medical information and goals of care. CMS is waiving detailed regulatory requirements to provide information regarding discharge planning, as outlined in 42 CFR §482.43(a)(8), §482.61(e), and 485.642(a)(8). The hospital, psychiatric hospital, and CAH must assist patients, their families, or the patient's representative in selecting a post-acute care provider by using and sharing data that includes, but is not limited to, home health agency (HHA), skilled nursing facility (SNF), inpatient rehabilitation facility (IRF), and long term care hospital (LTCH) data on quality measures and data on resource use measures. The hospital must ensure that the post-acute care data on quality measures and data on resource use measures is relevant and applicable to the patient's goals of care and treatment preferences. During this public health emergency, a hospital may not be able to assist patients in using quality measures and data to select a nursing home or home health agency, but must still work with families to ensure that the patient discharge is to a post-acute care provider that is able to meet the patient’s care needs.
• **Modify Discharge Planning for Hospitals**: Patients must continue to be discharged to an appropriate setting with the necessary medical information and goals of care. To address the COVID-19 pandemic, CMS is waiving certain requirements related to hospital discharge planning for post-acute care services at 42 CFR §482.43(c), so as to expedite the safe discharge and movement of patients among care settings, and to be responsive to fluid situations in various areas of the country. CMS is waiving certain requirements for those patients discharged home and referred for HHA services, or for those patients transferred to a SNF for post-hospital extended care services, or transferred to an IRF or LTCH for specialized hospital services. For example, a patient may not be able to receive a comprehensive list of nursing homes in the geographic area, but must still be discharged to a nursing home that is available to provide the care that is need by the patient.
• **Medical Records**: CMS is waiving 42 CFR §482.24(a) through (c), which cover the organization and staffing of the medical records department, requirements for the form and content of the medical record, and record retention requirements. CMS is also waiving the requirement under 42 CFR §482.24(c)(4)(viii) that hospital medical records be completed within 30 days following discharge, and the requirement under §485.638(a)(4)(iii) that CAHs promptly complete all medical records. This flexibility will allow clinicians to focus on patient care at the bedside during the pandemic.
• **Flexibility in Patient Self Determination Act Requirements (Advance Directives)**: CMS is waiving the requirements at sections 1902(a)(58) and 1902(w)(1)(A) (for Medicaid), 1852(i) (for Medicare Advantage), and 1866(f) and 42 CFR 489.102 (for Medicare), which require hospitals and CAHs to provide information about their advance directive policies to patients. We are waiving this requirement to allow staff to deliver care more efficiently to a larger number of patients.
- **Extension for Inpatient Prospective Payment System (IPPS) Wage Index Occupational Mix Survey Submission**: CMS collects data every 3 years on the occupational mix of employees for each short-term, acute care hospital participating in the Medicare program. CMS is currently granting an extension for data submission for hospitals nationwide affected by COVID-19 until August 3, 2020. If hospitals encounter difficulty meeting this extended deadline date, hospitals should communicate their concerns to CMS via their MAC, and CMS may consider an additional extension if CMS determines it is warranted.
- **Utilization review**. CMS is waiving the requirements at 42 CFR §482.1(a)(3) and 42 CFR §482.30, which require hospitals participating in Medicare and Medicaid to have a utilization review plan that meets specified requirements. CMS is waiving the entire Utilization Review CoP at §482.30, which requires that a hospital have a utilization review (UR) plan with a UR committee that provides for review of services furnished to Medicare and Medicaid beneficiaries to evaluate the medical necessity of the admission, duration of stay, and services provided. These flexibilities should be implemented so long as they are not inconsistent with a state or pandemic/emergency plan. Removing these administrative requirements will allow hospitals to focus more resources on providing direct patient care.
- **Quality assessment and performance improvement program**. CMS is waiving 482.21(a)-(d) and (f), and 485.641(a), (b), and (d), which detail the scope of the program, the incorporation and setting of priorities for the program's performance improvement activities, and integrated QAPI programs (for hospitals that are part of a hospital system). These flexibilities, which apply to both hospitals and CAHs, should be implemented so long as they are not inconsistent with a state's emergency preparedness or pandemic plan. We expect any improvements to the plan to focus on the Public Health Emergency. While this waiver decreases the burden associated with developing a hospital or CAH QAPI program, the requirement that hospitals and CAHs maintain an effective, ongoing, hospital-wide, data-driven quality assessment and performance improvement program remains in effect.
- **Nursing services**: CMS is waiving the provisions at 42 CFR 482.23(b)(4), 42 CFR 482.23(b)(7), and 485.635(d)(4), which require the nursing staff to develop and keep current a nursing care plan for each patient, and which require the hospital to have policies and procedures in place establishing which outpatient departments are not required to have a registered nurse present. These waivers give nurses more time to meet the clinical care needs of each patient and allow for the provision of nursing care to an increased number of patients. In addition, we expect that hospitals will need relief for the provision of inpatient services, so the requirement to establish nursing-related policies and procedures for outpatient departments is likely unnecessary. These flexibilities apply to both hospitals and CAHs, and should be implemented so long as they are not inconsistent with a state or pandemic/emergency plan.
- **Food and dietetic service**: CMS is waiving the requirement at 42 CFR 482.28(b)(3) to have a current therapeutic diet manual approved by the dietitian and medical staff readily available to all medical, nursing, and food service personnel. Such manuals would not need to be maintained at surge capacity sites. These flexibilities should be implemented so long as they are not inconsistent with a State or pandemic/emergency plan. Removing these administrative requirements will allow hospitals to focus more resources on providing direct patient care.
- **Written policies and procedures for appraisal of emergencies at off campus hospital departments**: CMS is waiving 482.12(f)(3) related to Emergency services, with respect to the surge facility(ies) only, such that written policies and procedures for staff to use when evaluating emergencies are not required for surge facilities. This removes the burden on facilities to develop and establish additional policies and procedures at their surge facilities or surge sites related to the assessment, initial treatment and referral of patients. These flexibilities should be implemented so long as they are not inconsistent with a state’s emergency preparedness or pandemic plan.
- **Emergency preparedness policies and procedures**: CMS is waiving 482.15(b) and 485.625(b), which require the hospital and CAH to develop and implement emergency preparedness policies and procedures, and 482.15(c)(1)-(5) and 485.625(c)(1)-(5), which require that the emergency preparedness communication plans for hospitals and CAHs contain specified elements with respect to the surge site. The communication plan requires hospitals and CAHs to have specific contact information for staff, entities providing services under arrangement, patients' physicians, other hospitals and CAHs, and volunteers. This would not be an expectation for temporary expansion sites. This waiver removes the burden on facilities to establish these policies and procedures for their surge facilities or surge sites.
- **Signature Requirements**: CMS is waiving signature and proof of delivery requirements for Part B drugs and Durable Medical Equipment when a signature cannot be obtained because of the inability to collect signatures. Suppliers should document in the medical record the appropriate date of delivery and that a signature was not able to be obtained because of COVID-19.
- **Accelerated/Advance Payments**: In order to provide additional cash flow to healthcare providers and suppliers impacted by COVID-19, CMS expanded and streamlined the Accelerated and Advance Payments Program, which provided conditional partial
payments to providers and suppliers to address disruptions in claims submission and/or claims processing subject to applicable safeguards for fraud, waste and abuse. Under this program, CMS made successful payment of over $100 billion to healthcare providers and suppliers. As of April 26, 2020, CMS is reevaluating all pending and new applications for the Accelerated Payment Program and has suspended the Advance Payment Program, in light of direct payments made available through the Department of Health & Human Services’ (HHS) Provider Relief Fund. Distributions made through the Provider Relief Fund do not need to be repaid. For providers and suppliers who have received accelerated or advance payments related to the COVID-19 Public Health Emergency, CMS will not pursue recovery of these payments until 120 days after the date of payment issuance. Providers and suppliers with questions regarding the repayment of their accelerated or advance payment(s) should contact their appropriate Medicare Administrative Contractor (MAC).
- **Cost Reporting**: CMS is delaying the filing deadline of certain cost report due dates due to the COVID-19 outbreak. We are currently authorizing delay for the following fiscal year end (FYE) dates. CMS will delay the filing deadline of FYE 10/31/2019 cost reports due by March 31, 2020 and FYE 11/30/2019 cost reports due by April 30, 2020. The extended cost report due dates for these October and November FYEs will be June 30, 2020. CMS will also delay the filing deadline of the FYE 12/31/2019 cost reports due by May 31, 2020. The extended cost report due date for FYE 12/31/2019 will be July 31, 2020.
- **Provider Enrollment**: CMS has established toll-free hotlines for all providers as well as the following flexibilities for provider enrollment:
- Waive certain screening requirements.
- Postpone all revalidation actions.
- Expedite any pending or new applications from providers.
**Medicare appeals in Fee for Service, Medicare Advantage (MA) and Part D**
- CMS is allowing Medicare Administrative Contractors (MACs) and Qualified Independent Contractors (QICs) in the FFS program (42 CFR 405.942 and 42 CFR 405.962), and MA and Part D plans, as well as the Part C and Part D Independent Review Entities (IREs) (42 CFR 422.562, 42 CFR 423.562, 42 CFR 422.582, and 42 CFR 423.582), to allow extensions to file an appeal;
- CMS is allowing MACs and QICs in the FFS program (42 CFR 405.950 and 42 CFR 405.966) and the Part C and Part D IREs to waive timeliness requirements for requests for additional information to adjudicate appeals. MA plans may extend the timeframe to adjudicate organization determinations and reconsiderations for medical items and services (but not Part B drugs) by up to 14 calendar days if: the enrollee requests the extension; the extension is justified and in the enrollee's interest due to the need for additional medical evidence from a noncontract provider that may change an MA organization's decision to deny an item or service; or the extension is justified due to extraordinary, exigent, or other non-routine circumstances and is in the enrollee's interest (42 CFR § 422.568(b)(1)(i), § 422.572(b)(1), and § 422.590(f)(1));
- CMS is allowing MACs and QICs in the FFS program (42 CFR 405.910) and MA and Part D plans, as well as the Part C and Part D IREs, to process an appeal even with incomplete Appointment of Representation forms (42 CFR § 422.561, 42 CFR § 423.560). However, any communications will only be sent to the beneficiary;
- CMS is allowing MACs and QICs in the FFS program (42 CFR 405.950 and 42 CFR 405.966) and MA and Part D plans, as well as the Part C and Part D IREs, to process requests for appeal that do not meet the required elements using information that is available (42 CFR § 422.562, 42 CFR § 423.562).
- CMS is allowing MACs and QICs in the FFS program (42 CFR 405.950 and 42 CFR 405.966) and MA and Part D plans, as well as the Part C and Part D IREs (42 CFR 422.562, 42 CFR 423.562), to utilize all flexibilities available in the appeal process as if good cause requirements are satisfied.
**Additional Guidance**
- The Interim Final Rules and waivers can be found at: [https://www.cms.gov/about-cms/emergency-preparedness-response-operations/current-emergencies/coronavirus-waivers](https://www.cms.gov/about-cms/emergency-preparedness-response-operations/current-emergencies/coronavirus-waivers).
- CMS has released guidance to describe standards of practice and flexibilities within the current regulations for hospitals (including critical access hospitals and psychiatric hospitals) at [https://www.cms.gov/files/document/qso-20-13-hospitalspdf.pdf-2](https://www.cms.gov/files/document/qso-20-13-hospitalspdf.pdf-2).
- CMS guidance also addresses hospital flexibilities under the Emergency Medical Treatment and Labor Act (EMTALA) to establish alternate testing and triage sites to address the pandemic at [https://www.cms.gov/files/document/qso-20-15-hospitalcahemtala.pdf](https://www.cms.gov/files/document/qso-20-15-hospitalcahemtala.pdf).
- CMS has released guidance to providers related to relaxed reporting requirements for quality reporting programs at [https://www.cms.gov/files/document/guidance-memo-exceptions-and-extensions-quality-reporting-and-value-based-purchasing-programs.pdf](https://www.cms.gov/files/document/guidance-memo-exceptions-and-extensions-quality-reporting-and-value-based-purchasing-programs.pdf).
No Place to Hide: Catching Fraudulent Entities in Tensors
Yikun Ban*
Peking University
Xin Liu
Tsinghua University
Yitao Duan
Fintec.ai
Xue Liu
McGill University
Wei Xu
Tsinghua University
ABSTRACT
Many approaches focus on detecting dense blocks in tensors of multimodal data to prevent fraudulent entities (e.g., accounts, links) from engaging in retweet boosting, hashtag hijacking, link advertising, etc. However, no existing method is effective at finding a dense block if it only possesses high density on a subset of all dimensions in the tensor. In this paper, we reduce dense-block detection to dense-subgraph mining by modeling a tensor as a weighted graph without losing any density information. Based on this weighted graph, which we call the information sharing graph (ISG), we propose D-Spot, an algorithm for finding multiple densest subgraphs that is faster (up to 11x faster than the state-of-the-art algorithm) and can be computed in parallel. In an N-dimensional tensor, the entity group found by ISG+D-Spot is at least 1/2 of the optimum with respect to density, compared with the 1/N guarantee ensured by competing methods. Experiments on nine datasets demonstrate that ISG+D-Spot is the new state-of-the-art dense-block detection method in terms of accuracy, specifically for fraud detection.
CCS CONCEPTS
• Information systems → Wrappers (data mining).
KEYWORDS
Dense-block Detection; Graph Algorithms; Fraud Detection
ACM Reference Format:
Yikun Ban, Xin Liu, Yitao Duan, Xue Liu, and Wei Xu. 2019. No Place to Hide: Catching Fraudulent Entities in Tensors. In Proceedings of the 2019 World Wide Web Conference (WWW ’19), May 13–17, 2019, San Francisco, CA, USA. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3308558.3313403
## 1 INTRODUCTION
Fraud represents a serious threat to the integrity of social or review networks such as Twitter and Amazon, with people introducing fraudulent entities (e.g., fake accounts, reviews, etc.) to gain more publicity/profit over a brief period. For example, on a social network or media sharing website, people may wish to enhance their account’s popularity by illegally buying more followers [27]; on e-commerce websites, fraudsters may register multiple accounts to benefit from ‘new user’ promotions.
Consider the typical log data generated from a social review site (e.g., Amazon), which contains four-dimensional features: users, products, timestamps, rating scores. These data are often formulated as a tensor, in which each dimension denotes a separate feature and an entry (tuple) of the tensor represents a review action. Based on previous studies [12, 30], fraudulent entities form dense blocks (sub-tensors) within the main tensor, such as when a mass of fraudulent user accounts create an enormous number of fake reviews for a set of products over a short period. Dense-block detection has also been applied to network intrusion detection [20, 30], retweet boosting detection [12], bot activities detection [30], and genetics applications [20, 26].
Various dense-block detection methods have been developed. One approach uses tensor decomposition, such as CP decomposition and higher-order singular value decomposition [20]. However, as observed in [32], such methods are outperformed by search-based techniques [12, 30, 32] in terms of accuracy, speed, and flexibility regarding support for different density metrics. Furthermore, [30, 32] provide an approximation guarantee for finding the densest/optimal block in a tensor.
We have examined the limitations of search-based methods for dense-block detection. First, these methods are incapable of detecting hidden-densest blocks. We define a hidden-densest block as one that does not exhibit a high-density signal on all dimensions of a tensor, but evidently has a high density on a subset of the dimensions. Moreover, existing methods neglect the data type and distribution of each dimension of the tensor. Suppose two dense blocks A and B have the same density, but A is the densest on a subset of critical features, such as IP address and device ID, whereas B is the densest on some trivial features such as age and gender. Can we simply believe that A is as suspicious as B? Unfortunately, with existing methods, the answer is 'yes.'
To address these limitations, we propose a dense-block detection framework and focus on entities that form dense blocks on tensors. The proposed framework is designed using a novel approach. Given a tensor, the formation of dense blocks is the result of value sharing (the behavior whereby two or more different entries share a distinct value (entity) in the tensor). Based on this key point, we propose a novel Information Sharing Graph (ISG) model, which accurately captures each instance of value sharing. The transformation from dense blocks in a tensor to dense subgraphs in ISG leads us to propose a fast, high-accuracy algorithm, D-Spot, for determining fraudulent entities with a provable guarantee regarding the densities of the detected subgraphs.
In summary, the main contributions of this study are as follows:
1) **[Graph Model]**. We propose the novel ISG model, which converts every value sharing in a tensor to the representation of weighted edges or nodes (entities). Furthermore, our graph model considers diverse data types and their corresponding distributions based on information theory to automatically prioritize multiple features.
2) **[Algorithm]**. We propose the *D-Spot* algorithm, which finds multiple densest subgraphs in one run. We theoretically prove that the subgraphs found by D-Spot must include some that are at least 1/2 as dense as the optimum. On real-world graphs, D-Spot is up to 11× faster than the state-of-the-art competing algorithm.
3) **[Effectiveness]**. In addition to dense blocks, ISG+D-Spot also effectively differentiates hidden-densest blocks from normal ones. In experiments using eight public real-world datasets, ISG+D-Spot detected fraudulent entities more accurately than conventional methods.
## 2 BACKGROUND
### 2.1 Economics of Fraudsters
As most fraudulent schemes are designed for financial gain, it is essential to understand the economics behind the fraud. Only when the benefits to a fraudster outweigh their costs will they perform a scam.
To maximize profits, fraudsters have to share/multiplex different resources (e.g., fake accounts, IP addresses, and device IDs) over multiple frauds. For example, [13] found that many users are associated with a particular group of followers on Twitter; [36] identified many cases of phone number reuse; [4] observed that the IP addresses of many spam proxies and scam hosts fall into a few uniform ranges; and [38] revealed that fake accounts often conduct fraudulent activities over a short time period.
Thus, fraudulent activities often form dense blocks in a tensor (as described below) because of this resource sharing.
### 2.2 Related Work
**Search-based dense-block detection in tensors.** Previous studies [12, 20, 30] have shown the benefit of incorporating features such as timestamps and IP addresses, which are often formulated as a multi-dimensional tensor. Mining dense blocks with the aim of maximizing a density metric on tensors is a successful approach. CrossSpot [12] randomly chooses a seed block and then greedily adjusts it in each dimension until the local optimum is attained. This technique usually requires enormous seed blocks and does not provide any approximation guarantee for finding the global optimum. In contrast to adding feature values to seed blocks, M-Zoom [30] removes feature values from the initial tensor one by one using a similar greedy strategy, providing a 1/N-approximation guarantee for finding the optimum (where N is the number of dimensions in the tensor). M-Biz [31] also starts from a seed block and then greedily adds or removes feature values until the block reaches a local optimum. Unlike M-Zoom, D-Cube [32] deletes a set of feature values on each step to reduce the number of iterations, and is implemented in a distributed disk-based manner. D-Cube provides the same approximation guarantee as M-Zoom.
**Tensor decomposition methods.** Tensor decomposition [17] is often applied to detect dense blocks within tensors [20]. Scalable algorithms, such as those described in [23, 33, 37], have been developed for tensor decomposition. However, as observed in [12, 32], these methods are limited regarding the detection of dense blocks, and usually detect blocks with significantly lower densities, provide less flexibility with regard to the choice of density metric, and do not provide any approximation guarantee.
**Dense-subgraph detection.** A graph can be represented by a two-dimensional tensor, where an edge corresponds to a non-zero entry in the tensor. The mining of dense subgraphs has been extensively studied [18]. Detecting the densest subgraph is often formulated as finding the subgraph with the maximum average degree, and may use exact algorithms [10, 16] or approximate algorithms [6, 16]. Fraudar [11] is an extended approximate algorithm that can be applied to fraud detection in social or review graphs. CoreScope [29] tends to find dense subgraphs in which all nodes have a degree of at least k. Implicitly, singular value decomposition (SVD) also focuses on dense regions in matrices. EigenSpoke [24] reads scatter plots of pairs of singular vectors to find patterns and chip communities, [7] extracts dense subgraphs using a spectral cluster framework, and [14, 27] use the top eigenvectors from SVD to identify abnormal users.
**Other anomaly/fraud detection methods.** The use of belief propagation [3, 22] and HITS-like ideas [8, 9, 13] is intended to catch rare behavior patterns in graphs. Belief propagation has been used to assign labels to the nodes in a network representation of a Markov random field [3]. When adequate labeled data are available, classifiers can be constructed based on multi-kernel learning [2], support vector machines [35], and k-nearest neighbor [34] approaches.
## 3 DEFINITIONS AND MOTIVATION
In this section, we introduce the notations and definitions used throughout the paper, analyze the limitations of existing approaches, and describe our key motivations.
### 3.1 Notation and Formulations
Table 2 lists the notations used in this paper. We use $[N] = \{1, ..., N\}$ for brevity. Let $R(A_1, ..., A_N, X) = \{t_0, ..., t_{|X|}\}$ be a relation with $N$ dimensional features, denoted by $\{A_1, ..., A_N\}$, and a dimension of entry identifiers, denoted by $X$. For each entry (tuple) $t \in R$, $t = (a_1, ..., a_N, x)$, where $\forall n \in [N]$, we use $t[A_n]$ to denote the value of $A_n$ in $t$, i.e., $t[A_n] = a_n$, and $t[X]$ to denote the identifier of $t$, $t[X] = x$, $x \in X$. We define the mass of $R$ as $|R|$, the total number of entries, $|R| = |X|$. For each $n \in [N]$, we use $R_n$ to
Table 2: Symbols and Definitions
| Symbol | Interpretation |
|--------|----------------|
| $N$ | number of dimensions in a tensor |
| $[N]$ | the set $\{1, ..., N\}$ |
| $R(A_1, ..., A_N, X)$ | relation representing a tensor |
| $A_n$ | $n$-th dimension of $R$ |
| $R_n$ | set of distinct values of $A_n$ in $R$ |
| $t = (a_1, ..., a_N, x)$ | an entry (tuple) of $R$ |
| $\mathcal{B}(A_1, ..., A_N, X)$ | a block in $R$ |
| $\mathcal{B}_n$ | set of distinct values of $A_n$ in $\mathcal{B}$ |
| $U$ | the target dimension |
| $V = \{u_1, ..., u_{|V|}\}$ | set of distinct values of $U$ |
| $G = (V, E)$ | Information Sharing Graph |
| $S_{i,j}$ | $S$-score between $u_i$ and $u_j$ |
| $S_i$ | $S$-score of $u_i$ |
| $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ | subgraph of $G$ |
| $\rho$ | density metric |
denote the set of distinct values of $A_n$. Thus, $R$ naturally represents an $N$-dimensional tensor of size $|R_1| \times ... \times |R_N|$.
A block $\mathcal{B}$ in $R$ is defined as $\mathcal{B}(A_1, ..., A_N, X) = \{t \in R : t[X] \in \mathcal{X}\}$, where $\mathcal{X} \subseteq X$. Additionally, the mass $|\mathcal{B}|$ is the number of entries of $\mathcal{B}$, and $\mathcal{B}_n$ is the set of distinct values of $A_n$. Let $\mathcal{B}(a, A_n) = \{t \in R : t[A_n] = a\}$ represent all entries that take the value $a$ on $A_n$. The mass $|\mathcal{B}(a, A_n)|$ is the number of such entries. A simple example is given as follows.
Example 1 (Amazon review logs). Assume a relation $R(user, product, timestamp, X)$, where $\forall t \in R$, $t = (a_1, a_2, a_3, x)$ indicates a review action where user $a_1$ reviews product $a_2$ at timestamp $a_3$, and the identification of the action is $x$. Because $a_1$ may review $a_2$ at $a_3$ (we assume that $a_3$ represents a period) multiple times, $X$ helps us distinguish each such action. The mass of $R$, denoted by $|R|$, is the number of all review actions in the dataset. The number of distinct users in $R$ is $|R_1|$. A block $\mathcal{B}(a_1, user)$ is the set of all rating entries operated by user $a_1$, and the number of overall entries of $\mathcal{B}(a_1, user)$ is $|\mathcal{B}(a_1, user)|$.
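To ground the notation, here is a minimal sketch (Python, hypothetical data) of the quantities in Example 1; the tuple layout and variable names are ours, purely for illustration:

```python
# Hypothetical Amazon-style review log R(user, product, timestamp, X);
# the last field x is the unique entry identifier that distinguishes
# repeated (user, product, timestamp) actions.
R = [
    ("u1", "p1", "t1", 0),
    ("u1", "p1", "t1", 1),  # u1 reviewed p1 twice in the same period
    ("u2", "p1", "t1", 2),
    ("u2", "p2", "t2", 3),
]

mass_R = len(R)                        # |R|: total number of entries
R_1 = {t[0] for t in R}                # distinct users; |R_1| = 2
B_u1 = [t for t in R if t[0] == "u1"]  # block B(u1, user)

assert mass_R == 4
assert len(R_1) == 2
assert len(B_u1) == 2                  # |B(u1, user)|
```

The identifier field is what makes $|R| = |X|$: without it, the two identical reviews by `u1` would collapse into one tuple.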
First, we present a density metric that is known to be useful for fraud detection [30, 32]:
Definition 1. (Arithmetic Average Mass $\rho$). Given a block $\mathcal{B}(A_1, ..., A_N, X)$, the arithmetic average mass of $\mathcal{B}$ on a set of dimensions $\mathcal{N}$ is

$$\rho(\mathcal{B}, \mathcal{N}) = \frac{|\mathcal{B}|}{\frac{1}{|\mathcal{N}|} \sum_{n \in \mathcal{N}} |\mathcal{B}_n|},$$

where $\mathcal{N}$ is a subset of $[N]$ and obviously $\rho \in [1.0, +\infty)$.
If block $\mathcal{B}$ is dense in $R$, then $\rho(\mathcal{B}, [N]) > 1.0$.
Other density metrics listed in [32] are also effective for fraud detection. It is broadly true that all density measures are functions of the cardinalities of the dimensions and masses of $\mathcal{B}$ and $R$. In $R$, previous studies [12, 30–32] have focused on detecting the top-$k$ densest blocks in terms of a density metric. In the remainder of this paper, we use the density metric $\rho$ to illustrate our key points.
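Definition 1 translates directly into code. The following is an illustrative sketch, not the authors' implementation; it assumes 0-based dimension indices into the entry tuples, with the entry identifier as the last field:

```python
def rho(block, dims):
    """Arithmetic average mass of `block` over the dimension indices `dims`.

    `block` is a list of entry tuples; dimension n is tuple index n
    (0-based here), and the last field is the entry identifier X.
    """
    avg_card = sum(len({t[n] for t in block}) for n in dims) / len(dims)
    return len(block) / avg_card

# A dense block: 2 users x 2 products, each pair reviewed twice (8 entries).
B = []
for u in ("u1", "u2"):
    for p in ("p1", "p2"):
        for _ in range(2):
            B.append((u, p, len(B)))

assert rho(B, [0, 1]) == 8 / ((2 + 2) / 2)  # = 4.0
```

Note that the denominator is the *average* cardinality over the chosen dimensions, so adding a dimension with few distinct values raises the score.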
### 3.2 Shortcomings of Existing Approaches and Motivation
In practice, the blocks formed by fraudulent entities in $R$ may be described by hidden-densest blocks. To illustrate hidden-densest blocks, we present the following definitions and examples.
Definition 2. In $R(A_1, ..., A_N, X)$, we say that a block $\mathcal{B}(A_1, ..., A_N, X)$ is the densest on a dimension $A_n$ if $\rho(\mathcal{B}, \{n\})$ is maximal over all possible blocks, i.e., $\rho(\mathcal{B}, \{n\}) \geq \rho(\mathcal{B}', \{n\})$ for any block $\mathcal{B}'$ in $R$.
Definition 3. (Hidden-Densest Block). In $R(A_1, ..., A_N, X)$, $\mathcal{B}(A_1, ..., A_N, X)$ is the hidden-densest block if $\mathcal{B}$ is the densest on a small subset of $\{A_1, ..., A_N\}$.
Example 2 (Registration logs). In a registration dataset with 19 features, fake accounts only exhibit conspicuous resource sharing with respect to the IP address feature.
Example 3 (TCP dumps). The DARPA dataset [1] has 43 features, but the block formed by malicious connections is only the densest on two features.
Thus, catching hidden-densest blocks has a significant utility in the real world. Unfortunately, the problem is intractable using existing approaches [12, 30–32].
First, assuming that the hidden-densest block $\mathcal{B}(A_1, ..., A_N, X)$ is only the densest on dimension $A_N$, we have that
$$\rho(\mathcal{B}, [N-1]) = \frac{|\mathcal{B}|}{\frac{1}{N-1} \sum_{n \in [N-1]} |\mathcal{B}_n|} \approx \rho(\mathcal{B}, [N]) = \frac{|\mathcal{B}|}{\frac{1}{N} \sum_{n \in [N]} |\mathcal{B}_n|}$$
when $N$ is sufficiently large. If $\rho(\mathcal{B}, [N - 1])$ is very low, then $\rho(\mathcal{B}, [N])$ is also low, so the methods in [12, 30–32], which try to find the block $\mathcal{B}$ that maximizes $\rho(\mathcal{B}, [N])$, have a limited ability to detect the hidden-densest block.
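A quick numeric check of this dilution effect, with hypothetical cardinalities: a block that is maximally dense on a single dimension barely moves the all-dimension density when $N$ is large:

```python
# Hypothetical block: extremely dense on one dimension (|B_N| = 1) but
# ordinary on the other N-1 dimensions (|B_n| = 100 each).
mass, N = 1000, 20
card = [100] * (N - 1) + [1]

rho_all = mass / (sum(card) / N)              # rho(B, [N])
rho_minus = mass / (sum(card[:-1]) / (N - 1))   # rho(B, [N-1])

# The single dense dimension changes the overall score by only ~5%,
# so a search maximizing rho over all N dimensions gains little from it.
assert rho_minus == 10.0
assert abs(rho_all - rho_minus) / rho_minus < 0.06
```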
Second, consider a block $\mathcal{B}$ formed by fraudulent entities, in which $\mathcal{B}$ is only the densest on $\{A_2, A_3, A_5\}$, and thus $\rho(\mathcal{B}, \{2, 3, 5\})$ is maximal. The techniques of [12, 30–32] cannot find $\{A_2, A_3, A_5\}$ because the number of possible feature combinations explodes.
Furthermore, in $R(A_1, ..., A_N, X)$, consider two blocks $\mathcal{B}_1$ and $\mathcal{B}_2$, where $\mathcal{B}_1$ is the densest on $A_i$, $\mathcal{B}_2$ is the densest on $A_j$, and $\rho(\mathcal{B}_1, [N]) = \rho(\mathcal{B}_2, [N])$. Does this indicate that $\mathcal{B}_1$ and $\mathcal{B}_2$ are equally suspicious? No, absolutely not, because $A_i$ could be the IP address feature and $A_j$ could be a trivial feature such as the user’s age, location, or gender.
**[Value Sharing]**. Based on the considerations above, we design our approach from a different angle. The key reason behind the formation of dense blocks is value sharing. Given $t_1 \in R$, a dimension $A_n$, and $t_1[A_n] = a$, we can identify value sharing when $\exists t_2 \in R$, $t_2 \neq t_1$, and $t_2[A_n] = a$.
Obviously, if a block $\mathcal{B}$ is dense, $\rho(\mathcal{B}, [N]) > 1.0$, then value sharing must be occurring, i.e., value sharing results in dense blocks.
Therefore, detecting dense blocks is equivalent to catching value sharing signals. We propose ISG based on information theory and design the D-Spot algorithm to leverage graph features, allowing us to catch fraudulent entities within dense blocks and overcome the limitations mentioned above.
## 4 ISG BUILDING
In this section, we present the Information Sharing Graph (ISG), which is constructed on the relation $R$.
### 4.1 Problem Formulation
Catching fraudulent entities is equivalent to detecting a subset of distinct values in a certain dimension. Let $U$ denote the target dimension in which a subset of distinct values form the fraudulent entities we wish to detect. In $R = (A_1, ..., A_N, X)$, we choose a dimension and set it as $U$, and for brevity denote the remaining $(N - 1)$ dimensions as the $K$ dimensions, $k \in [K]$. We build the ISG of $U$, i.e., the weighted, undirected graph $G = (V, E)$, in which $V = \{u_1, ..., u_{|V|}\}$ is the set of distinct values of $U$.
In Example 1, $R(\text{user, product, timestamp}, X)$, we set $U = \text{user}$ if we wish to detect fraudulent user accounts. In Example 2, we set $U = \text{account}$ if we would like to identify fake accounts. In Example 3, we set $U = \text{connection}$ to catch malicious connections.
To specifically describe the process of value sharing, we present the two following definitions:
**Definition 4.** (Pairwise Value Sharing). Given $u_i, u_j \in V$ and $a \in A_k$, we say that $u_i$ and $u_j$ share value $a$ on $A_k$ if $\exists t_1, t_2 \in R$ such that $t_1[U] = u_i, t_2[U] = u_j$ and $t_1[A_k] = t_2[A_k] = a$.
Pairwise value sharing occurs when a distinct value is shared by multiple individual entities. Given a value sharing process in which $a$ is shared by a set $V' \subseteq V$, we count it as $\frac{|V'|(|V'|-1)}{2}$ instances of pairwise value sharing.
**Definition 5.** (Self-Value Sharing). Given $t_1 \in R$, where $t_1[U] = u_i, u_i \in V$, and $t_1[A_k] = a$, we say that $u_i$ shares value $a$ on $A_k$ if $\exists t_2 \in R$ and $t_2 \neq t_1$ such that $t_2[U] = u_i$ and $t_2[A_k] = a$.
Another type of value sharing occurs when the distinct value $a$ is shared $n$ times by an entity $u_i$, which can be represented by $n$ instances of self-value sharing.
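Definitions 4 and 5 can be sketched as a simple counting pass over one dimension. The data and names below are hypothetical illustrations, not part of the paper:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical entries (u, a): u is a value of the target dimension U,
# a is a value of one other dimension A_k (real data has K such dimensions).
entries = [("u1", "ip1"), ("u2", "ip1"), ("u3", "ip2"),
           ("u1", "ip1")]  # u1 uses ip1 twice -> one self-value sharing

by_value = defaultdict(list)
for u, a in entries:
    by_value[a].append(u)

pairwise, self_sharing = set(), 0
for a, users in by_value.items():
    distinct = set(users)
    # a value shared by |V'| entities yields |V'|(|V'|-1)/2 pairwise sharings
    pairwise.update((min(p), max(p)) for p in combinations(distinct, 2))
    # repeats by the same entity count as self-value sharing
    self_sharing += len(users) - len(distinct)

assert pairwise == {("u1", "u2")}
assert self_sharing == 1
```

Grouping entries by shared value, rather than comparing all entry pairs, is one way to enumerate both kinds of sharing in a single pass.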
In ISG $G = (V, E)$, for an edge $(u_i, u_j) \in E$, $S_{i,j}$ represents the information between $u_i$ and $u_j$ derived from the other $K$ dimensions, and for a node $u_i \in V$, $S_i$ denotes the information of $u_i$ calculated from the other $K$ dimensions. Using the definitions and notation above, Problem 1 formally states how to build the ISG of a relation.
**Problem 1** (Building a pairwise information graph). (1) **Input**: a relation $R$, the target dimension $U$, (2) **Output**: the information sharing graph $G = (V, E)$.
4.2 Building an ISG
Given a dimension $A_k$, the target dimension $U$, any $u_i \in V$, and an entry $t_1 \in R$ for which $t_1[U] = u_i$, then for each $a \in A_k$, we assume that the probability of $t_1[A_k] = a$ is $p^k(a)$.
**Edge Construction.** Based on information theory [28], the self-information of the event that $u_i$ and $u_j$ share $a$ is:
$$I^k_{i,j}(a) = \log\left(\frac{1}{p^k(a)}\right)^2. \tag{1}$$
To compute the pairwise value sharing between $u_i$ and $u_j$ across all $K$ dimensions, we propose the metric $S$-score as the edge weight of ISG:
$$S_{i,j} = \sum_{k=1}^{K} \sum_{a \in H_k(u_i,u_j)} I^k_{i,j}(a), \tag{2}$$
where $H_k(u_i, u_j)$ is the set of all values shared by $u_i$ and $u_j$ on $A_k$. Note that $S_{i,j} = 0.0$ if $\bigcup_{k=1}^{K} H_k(u_i, u_j) = \emptyset$.
Intuitively, if $u_i$ and $u_j$ do not have any shared values, which is to be expected in normal circumstances, we have zero information. Otherwise, we obtain some information. Thus, the higher the value of $S_{i,j}$ is, the more similar $u_i$ is to $u_j$. In practice, the $S$-score has a large variance. For example, fraud user pairs sharing an IP subnet and device ID will have a high $S$-score, whereas normal users are unlikely to share these values with anyone, and will thus have an $S$-score close to zero. Additionally, the information we obtain for $u_i$ and $u_j$ sharing the value $a$ is related to the overall probability of that value. For example, it would be much less surprising if they both follow Donald Trump on Twitter than if they both follow a relatively unknown user.
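Under the uniform assumption $p^k(a) = 1/|R_k|$ introduced later in Eq. (5), the $S$-score of Eqs. (1)–(2) can be sketched as follows; the toy relation and its values are hypothetical.

```python
import math

# Toy relation R(user, ip, device); dimension index 0 is the target U = user.
R = [("u1", "ip_a", "dev_x"), ("u2", "ip_a", "dev_x"), ("u3", "ip_c", "dev_y")]
K = [1, 2]  # indices of the K feature dimensions

def s_score(ui, uj, R, K):
    """S_{i,j}: summed self-information of all values shared by ui and uj."""
    s = 0.0
    for k in K:
        r_k = {t[k] for t in R}                      # distinct values R_k
        p = 1.0 / len(r_k)                           # uniform p^k(a), Eq. (5)
        vi = {t[k] for t in R if t[0] == ui}
        vj = {t[k] for t in R if t[0] == uj}
        shared = vi & vj                             # H_k(ui, uj)
        s += len(shared) * math.log((1.0 / p) ** 2)  # sum of I^k_{i,j}(a)
    return s
```

Here `s_score("u1", "u2", R, K)` is positive (the pair shares an IP and a device), while `s_score("u1", "u3", R, K)` is exactly zero, matching the note below Eq. (2).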
**Node Setting.** For a node $u_i \in V$, let $B(a, A_k, u_i, U)$ be the set $\{t \in R : (t[A_k] = a) \land (t[U] = u_i)\}$. When $|B(a, A_k, u_i, U)| \geq 2$, the information of forming $B(a, A_k, u_i, U)$ is:
$$I^k_i(a) = \log\left(\frac{1}{p^k(a)}\right)^{|B(a, A_k, u_i, U)|}. \tag{3}$$
We now define $S_i$ to compute the self-value sharing for $u_i$ across all $K$ dimensions:
$$S_i = \sum_{k=1}^{K} \sum_{B(a, A_k, u_i, U) \in H_k(u_i)} I^k_i(a), \tag{4}$$
where $H_k(u_i)$ is the set $\{B(a, A_k, u_i, U), \forall a \in R_k\}$ and $R_k$ is the set of distinct values of $A_k$. Note that $S_i = 0.0$ if $\bigcup_{k=1}^{K} H_k(u_i) = \emptyset$.
Self-value sharing arises in real fraud cases. For instance, a fraudulent user may create several fake reviews for a product/restaurant on Amazon/Yelp [25] over a few days. In terms of network attacks [19], a malicious TCP connection tends to attack a server multiple times.
**Determining $p^k(a)$.** We can extend the $S$-score to accommodate different data types and distributions.
It is difficult to determine $p^k(a)$, as we do not always know the distribution of $A_k$. In this case, for dimensions that are attribute features, we assume a uniform distribution and simply set
$$p^k(a) = \frac{1}{|R_k|}. \tag{5}$$
This approximation works well for many fraud-related properties such as IP subnets and device IDs, which usually follow a Poisson distribution [12].
However, the uniform assumption works poorly for low-entropy distributions, such as the long-tail distributions common in dimensions like items purchased or users followed. Low entropy implies that many users behave similarly anyway, independent of fraud: there is no surprise when two users both follow a celebrity (the head of the distribution), but considerable information when they both follow someone in the tail. For example, 20% of users account for more than 80% of the "follows" in online social networks, and the dense subgraphs between celebrities and their fans are very unlikely to be fraudulent. If feature $A_k$ has a long-tail distribution, its entropy is low: the entropy of the uniform distribution over 50 values is 3.91, whereas that of a long-tail distribution with 90% of the probability mass centered on one value is only 0.71. Therefore, we set $p^k(a)$ based on the empirical distribution as
\[
p^k(a) = \frac{|\mathcal{B}(a, A_k)|}{|\mathcal{R}|}, \tag{6}
\]
when the values in \( A_k \) have low entropy, where \( \mathcal{B}(a, A_k) \) denotes the set \( \{t \in R : t[A_k] = a\} \). We also provide an interface so that users can define their own \( p^k(a) \) function.
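A sketch of such an interface might look as follows; the entropy threshold is an assumption for illustration, chosen so that the two worked examples in the text (entropies 3.91 and 0.71, in nats) fall on opposite sides of it.

```python
import math
from collections import Counter

def entropy(column):
    """Shannon entropy (natural log) of the empirical distribution."""
    counts, n = Counter(column), len(column)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def make_pk(column, threshold=1.0):
    """Return a p^k(a) function: empirical (Eq. 6) for low-entropy
    dimensions, uniform (Eq. 5) otherwise. Threshold is hypothetical."""
    counts, n = Counter(column), len(column)
    if entropy(column) < threshold:      # long-tail / low entropy
        return lambda a: counts[a] / n   # |B(a, A_k)| / |R|
    return lambda a: 1.0 / len(counts)   # 1 / |R_k|
```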
**Optimization of ISG Construction.** In theory, a graph with \( |V| \) nodes has \( O(|V|^2) \) edges. Naively, therefore, it takes \( O(|V|^2) \) time for graph initialization and traversal.
To reduce the complexity of building the ISG, we use a key-value approach. The key corresponds to a value \( a \) on \( A_k \), and the value represents the block \( \mathcal{B}(a, A_k) \). Let \( V' \subseteq V \) denote the entities that occur in \( \mathcal{B}(a, A_k) \). As each pair \( (u_i, u_j) \in V' \times V' \) shares \( a \), we increase the value of \( S_{i,j} \) by \( I^k_{i,j}(a) \). Additionally, for each \( u_i \in V' \), there exists some \( \mathcal{B}(a, A_k, u_i, U) \subseteq \mathcal{B}(a, A_k) \), so we increase the value of \( S_i \) by \( I^k_i(a) \) if \( |\mathcal{B}(a, A_k, u_i, U)| \geq 2 \).
To build the ISG, we compute all key-value pairs across \( K \) dimensions by traversing \( R \) in parallel. Thus, it takes \( O(K|R| + |E|) \) time to build the graph \( G = (V, E) \). Note that we only retain positive \( S_{i,j} \) and \( S_i \). In practice, \( G \) is usually sparse, which is discussed in Section 6.1.
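The key-value construction can be sketched as follows, assuming the uniform \( p^k(a) \) of Eq. (5); a production version would parallelize the per-dimension pass and support the other \( p^k(a) \) choices.

```python
import math
from collections import defaultdict

def build_isg(R, K):
    """One pass per feature dimension: group entries into blocks B(a, A_k),
    then update edge scores S_{i,j} and node scores S_i per block."""
    s_edge = defaultdict(float)   # (ui, uj) -> S_{i,j}
    s_node = defaultdict(float)   # ui -> S_i
    for k in K:
        blocks = defaultdict(list)            # value a -> entities in B(a, A_k)
        for t in R:
            blocks[t[k]].append(t[0])
        p = 1.0 / len(blocks)                 # uniform p^k(a) = 1/|R_k|
        pair_info = math.log((1.0 / p) ** 2)  # I^k_{i,j}(a)
        for a, members in blocks.items():
            distinct = sorted(set(members))
            for i, ui in enumerate(distinct):     # pairwise value sharing
                for uj in distinct[i + 1:]:
                    s_edge[(ui, uj)] += pair_info
            for u in distinct:                    # self-value sharing
                n = members.count(u)
                if n >= 2:
                    s_node[u] += math.log((1.0 / p) ** n)  # I^k_i(a)
    return s_edge, s_node
```

Only positive scores are materialized, matching the note that \( G \) is kept sparse.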
### 4.3 Key Observations on ISG
Given a relation \( R = (A_1, ..., A_N, X) \) in which we set \( U = A_N \), we construct the ISG of \( U \), \( G = (V, E) \). Assume there is a fraudulent block \( B = (A_1, ..., A_N, X) \) in \( R \). Then \( B \) is transformed into a subgraph \( \hat{G} = (\hat{V}, \hat{E}) \) of \( G \), where \( \hat{V} \) is the set of distinct values of \( A_N \) in \( B \) and each edge weight \( S_{i,j} \) in \( \hat{E} \) denotes the information between \( u_i \) and \( u_j \) calculated from the other \( K \) dimensions. Here, \( \hat{V} \) is the fraud group comprised of the fraudulent entities we wish to detect.
We summarize three critical observations of the fraud-group subgraph that directly lead to the algorithms presented in Section 5.2. Given a graph \( G = (V, E) \), we define its edge density as
\[
\rho_{edge}(G) = \frac{|E|}{|V|(|V| - 1)}
\]
1) **The value of \( S_{i,j} \) or \( S_i \) is unusually high.** Value sharing may happen frequently, but sharing across certain features, or even certain values, is more suspicious than sharing across others. Intuitively, it might be suspicious if two users share an IP address or follow the same random “nobody” on Twitter, but it is not so suspicious if they have a common gender or city, or follow the same celebrity. In other words, certain value sharing is likely to be fraudulent because the probability of sharing across a particular dimension, or at a particular value, is quite low; the resulting information value is high, which is accurately captured by \( S_{i,j} \) and \( S_i \).
2) **The fraud-group size \( |\hat{V}| \) is usually large.** Fraudsters perform the same actions many times to achieve economies of scale, so we expect to find multiple pairwise complicities among fraudulent accounts. A number of studies have found that large cluster sizes are a crucial indicator of fraud [5, 38]. Intuitively, while it is natural for a few family members to share an IP address, it is highly suspicious when dozens of users share one.
3) **The closer \( \rho_{edge}(\hat{G}) \) is to 1.0, the more suspicious \( \hat{G} \) is.** Fraudsters usually operate a number of accounts for the same job, so users manipulated by the same fraudster are likely to share the same set of values. Thus, the subgraph \( \hat{G} \) formed by a fraud group will be well-connected.
**Appearance of legitimate entities on the ISG.** In \( G = (V, E) \), given some \( u_i \) that we assume to be legitimate, let \( h(u_i) \) denote the set of its neighbor nodes. We make two observations. (1) For \( u_i \), \( S_i + \sum_{u_j \in V} S_{i,j} \rightarrow 0 \), because \( u_i \) is unlikely to share values with others; even if such sharing exists, the shared values should have a high probability (see observation 1) and therefore a small \( S_{i,j} \). (2) The subgraph induced by \( h(u_i) \) is typically not well-connected, as resource sharing is uncommon in the real world; even if it is well-connected, \( |h(u_i)| \) is quite small compared with a fraud-group size (see observation 2).
In summary, the techniques described in [12, 20, 30–32] work directly on the tensor, indicating that they consider value sharing on each dimension, and even certain values, as equivalent. In contrast, ISG assigns each instance of value sharing a theoretical weight based on the edges and nodes of the ISG, which is more effective for identifying the (hidden-) densest blocks (comparison in Sec.6.2).
## 5 SPOTTING FRAUD
Based on the observations in Section 4.3, we now describe our method for finding the target subgraphs in \( G \). This section first defines the density metric \( F_G \) and then presents the proposed D-Spot algorithm.
### 5.1 Density Metric and Problem Definition
To find the target subgraphs, we define a density metric \( F_G \) for a (sub)graph \( G = (V, E) \), following [6, 11]:
\[
F_G = \frac{\sum_{(u_i, u_j) \in E} S_{i,j} + \sum_{u_i \in V} S_i}{|V|}. \tag{7}
\]
The form of \( F_G \) reflects the three key observations in Section 4.3:
1) Keeping \( |V| \) fixed, we have that \( \sum_{(u_i, u_j) \in E} S_{i,j} + \sum_{u_i \in V} S_i \uparrow \Rightarrow F_G \uparrow \).
2) Keeping \( S_{i,j}, S_i, \) and \( \rho_{edge}(G) \) fixed, we have that \( |V| \uparrow \Rightarrow F_G \uparrow \).
3) Keeping \( S_{i,j}, S_i, \) and \( |V| \) fixed, we have that \( \rho_{edge}(G) \uparrow \Rightarrow F_G \uparrow \).
Thus, our subgraph-detection problem can be defined as follows:
**Problem 2** (Detecting dense subgraphs). (1) **Input:** the information sharing graph \( G = (V, E) \). (2) **Find:** multiple subgraphs of \( G \) that maximize \( F \).
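For a candidate node set, the metric can be computed directly from the ISG structures; a minimal sketch, assuming edges are stored as a dict keyed by node pairs:

```python
def f_metric(nodes, s_edge, s_node):
    """F_G: total edge and node information of the induced subgraph,
    divided by the number of nodes (Eq. 7)."""
    edge_sum = sum(w for (ui, uj), w in s_edge.items()
                   if ui in nodes and uj in nodes)
    node_sum = sum(s_node.get(u, 0.0) for u in nodes)
    return (edge_sum + node_sum) / len(nodes)
```

For instance, `f_metric({"u1", "u2"}, {("u1", "u2"): 4.0}, {"u1": 2.0})` evaluates to \((4 + 2)/2 = 3.0\).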
### 5.2 D-Spot (Algorithm 1-3)
In real-world datasets, there are usually numerous fraud groups forming multiple dense subgraphs. Based on the considerations described above, we propose D-Spot (Algorithms 1–3). Compared with other well-known algorithms for finding the densest subgraph [6, 11], D-Spot has two differences:
1) D-Spot can detect multiple densest subgraphs simultaneously. D-Spot first partitions the graph, and then detects a single densest subgraph in each partition.
2) D-Spot is faster. First, instead of removing nodes one by one, D-Spot removes a set of nodes at once, reducing the number of iterations. Second, it detects the single densest subgraph in each partition $\hat{G} = (\hat{V}, \hat{E})$, rather than in the whole graph $G = (V, E)$, where $|\hat{V}| \ll |V|$ and $|\hat{E}| \ll |E|$.
D-Spot consists of two main steps: (1) given $G$, divide it into multiple partitions (Algorithm 1); (2) in each partition $\hat{G}$, find a single dense subgraph (Algorithms 2 and 3).
**Algorithm 1: graph partitioning.** Let $\hat{G}$ denote a dense subgraph formed by a fraud group that we wish to detect. In $G$, there are usually multiple such subgraphs, each of which is either independent of the others or connected to them only by small values of $S_{i,j}$. We therefore take the connected components of $G$ as the partitions (line 6). For each partition, we run Algorithms 2 and 3 (lines 7–9) to find a dense subgraph $\hat{G}$. Finally, Algorithm 1 returns the set of detected dense subgraphs $\hat{G}$s (line 10). Note that $\hat{G}$s is guaranteed to contain a subgraph that achieves at least 1/2 of the optimum of $G$ in terms of $F$ (proof in Section 6.3).
**Information pruning (recommended).** As mentioned above, fraudulent entities usually have surprising similarities, which are quantified by $S_{i,j}$. We want to delete edges with unremarkable weights, and thus provide a threshold for removing edges:
$$\theta = \frac{\sum_{(u_i, u_j) \in E} S_{i,j}}{|V|(|V| - 1)} \tag{8}$$
Note that $\theta$ is a conservative threshold: it is the average information over all possible pairs $(u_i, u_j)$. We iterate through all edges in $G$ and remove those for which $S_{i,j} < \theta$ (lines 3–5). In all experiments in this paper, we used $\theta$, and found that pruning barely changes the output of D-Spot while significantly decreasing its running cost.
**Algorithm 1 find multiple dense subgraphs in G**
**Require:** $G = (V, E)$, $\theta$ (Eq.8), $w()$ (Eq. 9)
**Ensure:** $\hat{G}$s
1: $\hat{G}$s $\leftarrow \emptyset$
2: if needed then
3: for each $S_{i,j} \in E$ do
4: if $S_{i,j} < \theta$ then
5: remove $S_{i,j}$
6: $\hat{G}$s $\leftarrow$ connected components of $G$
7: for each $\hat{G} \in \hat{G}$s do
8: $\hat{G} \leftarrow$ find a dense subgraph ($\hat{G}$, $w()$)
9: $\hat{G}$s $\leftarrow$ $\hat{G}$s $\cup \{\hat{G}\}$
10: return $\hat{G}$s
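A Python sketch of Algorithm 1's pruning and partitioning steps (lines 2–6); the dense-subgraph search of lines 7–9 (Algorithms 2 and 3) is omitted here, and the edge dict follows the ISG sketches above.

```python
from collections import defaultdict

def partition(s_edge, num_nodes):
    """Prune edges below theta (Eq. 8), then return the connected
    components of the remaining graph as node sets."""
    theta = sum(s_edge.values()) / (num_nodes * (num_nodes - 1))
    kept = {e: w for e, w in s_edge.items() if w >= theta}
    adj = defaultdict(set)
    for ui, uj in kept:
        adj[ui].add(uj)
        adj[uj].add(ui)
    seen, components = set(), []
    for start in adj:                  # iterative DFS per component
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        components.append(comp)
    return components
```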
**Algorithm 2 find a dense subgraph**
**Require:** $G = (V, E)$, $w()$(Eq. 9)
**Ensure:** $\hat{G}$
1: $V_c \leftarrow$ copy($V$)
2: $S_{sum} \leftarrow \sum_{(u_i, u_j) \in E} S_{i,j} + \sum_{u_i \in V} S_i$
3: $\forall u \in V$, Dict1[u] $\leftarrow$ 0, Dict2[u] $\leftarrow$ $w(u, G)$
4: index $\leftarrow$ 0, $F^{max} \leftarrow \frac{S_{sum}}{|V_c|}$, top $\leftarrow$ 0
5: while $V_c \neq \emptyset$ do
6: $R \leftarrow \{u \in V_c : Dict2[u] \leq \frac{2 \sum_{(u_i, u_j) \in E} S_{i,j} + \sum_{u_i \in V_c} S_i}{|V_c|}\}$ (Eq. 10)
7: sort $R$ in increasing order of Dict2[u]
8: for each $u \in R$ do
9: $V_c \leftarrow V_c - u$, $S_{sum} \leftarrow S_{sum} - Dict2[u]$
10: index $\leftarrow$ index + 1, Dict1[u] $\leftarrow$ index
11: $F = \frac{S_{sum}}{|V_c|}$
12: if $F > F^{max}$ then
13: $F^{max} \leftarrow F$, top $\leftarrow$ index
14: Dict2 $\leftarrow$ update edges ($u, V_c, Dict2, \hat{G}$)
15: $\hat{R} \leftarrow \{u \in V : Dict1[u] > top\}$
16: return $\hat{G}$ (the subgraph induced by $\hat{R}$)
**Algorithm 3 update edges**
**Require:** $u_i$, $V_c$, Dict2, $\hat{G} = (V, E)$
**Ensure:** Dict2
1: for each $u_j \in V_c$ do
2: if $(u_i, u_j) \in E$ then
3: Dict2[u_j] $\leftarrow$ Dict2[u_j] – $S_{i,j}$
4: remove $(u_i, u_j)$ from $E$
5: return Dict2
Nodes are removed iteratively (line 9), which allows us to determine the set $\hat{R}$ that maximizes $F$. Here, $w(u, G)$ (Eq. 9) denotes the weight of node $u$, i.e., $S_u$ plus the weights of all edges incident to $u$. Line 6 determines which nodes $R$ are deleted in each iteration: $R$ is given by $\{u \in V : w(u, G) \leq \overline{w}\}$, where the average $\overline{w}$ is:
$$\overline{w} = \frac{\sum_{u \in V} w(u, \hat{G})}{|V|} = \frac{2 \sum_{(u_i, u_j) \in E} S_{i,j} + \sum_{u_i \in V} S_i}{|V|} \leq 2F_{\hat{G}}, \tag{10}$$
because each edge $S_{i,j}$ is counted twice in $\sum_{u \in V} w(u, \hat{G})$. In lines 7–14, the nodes in $R$ are removed from $V_c$ in each iteration (in contrast, [11] recomputes the weights of all nodes and removes the single node with minimal $w$ after each deletion). As removing only a subset of $R$ may result in a higher value of $F$, D-Spot records each change of $F$ as if the nodes were removed one by one (lines 8–14). Algorithm 3 describes how the edges are updated after a node is removed, requiring a total of $|E|$ updates over the whole run. Finally, Algorithm 2 returns the subgraph $\hat{G}$ induced by $\hat{R}$, the set of nodes achieving $F^{max}$, according to top and Dict1 (lines 15–16).
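The peeling loop of Algorithms 2 and 3 can be sketched as follows. This is a simplified, single-partition variant: it recomputes the average bound each round, removes the whole set of below-average nodes at once, and tracks the best $F$ as if nodes were removed one by one, as the text describes.

```python
def find_dense(nodes, s_edge, s_node):
    """Greedy peeling: repeatedly drop all nodes with w(u) at most the
    average (<= 2F), recording the node set that maximizes F."""
    nodes = set(nodes)
    edges = {e: s for e, s in s_edge.items()
             if e[0] in nodes and e[1] in nodes}
    w = {u: s_node.get(u, 0.0) for u in nodes}   # node weights w(u, G)
    for (ui, uj), s in edges.items():
        w[ui] += s
        w[uj] += s
    total = sum(edges.values()) + sum(s_node.get(u, 0.0) for u in nodes)
    best_f, best = total / len(nodes), set(nodes)
    while nodes:
        avg = (2 * sum(edges.values())
               + sum(s_node.get(u, 0.0) for u in nodes)) / len(nodes)
        drop = {u for u in nodes if w[u] <= avg}
        for u in sorted(drop, key=w.get):        # increasing weight order
            nodes.discard(u)
            total -= w[u]
            for e in [e for e in edges if u in e]:
                other = e[0] if e[1] == u else e[1]
                w[other] -= edges.pop(e)         # Algorithm 3's edge update
            if nodes and total / len(nodes) > best_f:
                best_f, best = total / len(nodes), set(nodes)
    return best, best_f
```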
**Summary.** As $R$ contains at least one node per iteration, the worst-case time complexity of Algorithm 2 is $O(|V|^2 + |E|)$. In practice, this worst case is too pessimistic: $R$ usually contains many nodes (line 6), significantly reducing the number of scans of $V_c$ (see Section 7.2).
6 ANALYSIS
6.1 Complexity
In the graph initialization stage, it takes $O(K|R| + |E|)$ time to build $G$ based on the optimization in Section 4.2. In D-Spot, the cost of partitioning $G$ is $O(|E|)$, and detecting a dense block in a partition $\hat{G} = (\hat{V}, \hat{E})$ requires $O(|\hat{E}| + |\hat{V}|^2)$ operations, where $|\hat{E}| \ll |E|$ and $|\hat{V}| \ll |V|$. Thus, the complexity of ISG+D-Spot is linear with respect to $|E|$.
In the worst case, admittedly, $|E| = |V|^2$ when there is some dimension $A_k$ with $|R_k| = 1$. However, this is too pessimistic. In the targeted fraud attacks, fraud groups typically exhibit strong value sharing while legitimate entities do not. Hence, we expect $G$ to be sparse, because each $u_i$ only has positive edges to a small subset of $V$. We constructed $G$ for several real-world datasets (see Fig. 3), and the edge densities were all less than 0.06.
6.2 Effectiveness of ISG+D-Spot
**Theorem 6.** (Spotting the Hidden-Densest Block). Given a dense block $B(\mathcal{A}_1, ..., \mathcal{A}_N, X)$ in which the target dimension is $U = A_N$ and $\hat{V}$ denotes the set of distinct values of $\mathcal{A}_N$, suppose a shared value $a$ exists on some dimension $A_k$ of $B$ such that $\forall u \in \hat{V}, \exists t \in B$ satisfying $(t[U] = u) \land (t[A_k] = a)$. Then, $B$ must form a dense subgraph $\hat{G}$ in $G$.
**Proof.** Using the optimization algorithm in Section 4.2, we build $G$ by scanning all values in $R$ once, so the block $\mathcal{B}(a, A_k)$ must be found. Let $\hat{G} = (\hat{V}, \hat{E})$ be the subgraph induced by $\hat{V}$ in $G$. Then, $\forall(u_i, u_j) \in \hat{E}$, the edge weight satisfies $S_{i,j} \geq I^k_{i,j}(a)$. Hence, $\rho_{edge}(\hat{G}) = 1.0$ and $\mathcal{F}_{\hat{G}} = \frac{\sum_{(u_i, u_j) \in \hat{E}} S_{i,j} + \sum_{u_i \in \hat{V}} S_i}{|\hat{V}|} \geq \frac{|\hat{V}|(|\hat{V}| - 1)I^k_{i,j}(a)}{|\hat{V}|} = (|\hat{V}| - 1)I^k_{i,j}(a)$. □
**Observation.** (Effectiveness of ISG+D-Spot) Consider a hidden-densest block $B(\mathcal{A}_1, ..., \mathcal{A}_{N-1}, \mathcal{A}_N, X)$ of size $|X| \times ... \times |X| \times 1$ and $|B| = |X|$, i.e., $B$ is the densest on $A_N$ by sharing the value $a$. Then, assuming the target dimension $U = A_1$ and the fraudulent entities $V$ are distinct values of $\mathcal{A}_1$, ISG+D-Spot captures $V$ more accurately than other algorithms based on tensors (denoted as Tensor+Other Algorithms).
**Proof.** Let us consider a non-dense block $\hat{B}(\hat{A}_1, ..., \hat{A}_N, X)$ of size $|X| \times ... \times |X|$, $|\hat{B}| = |X|$, and let $\hat{V}$ denote the distinct values of $\hat{A}_1$. Denoting legitimate entities as $\hat{V}$ and fraudulent entities as $V$, we now discuss the difference between ISG+D-Spot and Tensor+Other Algorithms.
[Working on the tensor]. On $R$, $\hat{B}$ is not dense, and thus $\rho(\hat{B}, [N]) = 1$. For $B$, because $\{|B_1|, ..., |B_{N-1}|, |B_N|\} = \{|X|, ..., |X|, 1\}$, we have $\rho(B, [N]) = \frac{|B|}{\frac{1}{N} \sum_{n \in [N]} |B_n|} \approx 1$ for sufficiently large $N$.
[Working on the ISG]. On the ISG, let $\hat{G}$ denote the subgraph induced by $\hat{V}$ and $G$ denote the subgraph formed by $V$. We know that $\mathcal{F}_{\hat{G}} = 0$, because $\hat{B}$ does not have any shared values. For $G$, $\mathcal{F}_G = (|V| - 1)I^k_{i,j}(a)$ according to Theorem 6.
[Other Algorithms]. M-Zoom [30] and D-Cube [32] are known to find blocks that are at least $1/N$ of the optimum in terms of $\rho$ on $R$ (a $\frac{1}{N}$-approximation guarantee).
[D-Spot]. In Section 6.3, we show that the subgraph detected by D-Spot is at least $1/2$ of the optimum in terms of $\mathcal{F}$ on the ISG (a $\frac{1}{2}$-approximation guarantee).
In summary, Tensor+Other Algorithms vs. ISG+D-Spot corresponds to:
$$\left( \rho(\hat{B}, [N]) = 1 \mid \rho(B, [N]) \approx 1 + (\frac{1}{N}\text{-Approximation}) \right)$$
vs. $$\left( \mathcal{F}_{\hat{G}} = 0 \mid \mathcal{F}_G = (|V| - 1)I^k_{i,j}(a) + (\frac{1}{2}\text{-Approximation}) \right)$$
Therefore, ISG+D-Spot catches fraudulent entities within hidden-densest blocks more accurately than Tensor+Other Algorithms. □
From the observation, ISG+D-Spot can effectively detect hidden-densest blocks. Similarly, when $B$ becomes denser, the $G$ formed by $B$ will also be much denser, and thus ISG+D-Spot will be more accurate in detecting the densest block.
6.3 Accuracy Guarantee of D-Spot
For brevity, we use $[V]$ to denote a subgraph induced by the set of nodes $V$.
**Theorem 7.** (Algorithm 1 Guarantee). Given $G = (V, E)$, let $G_s = \{G_1, ..., G_n\}$ denote the connected components of $G$. Let $\mathcal{F}_i^{opt}$ denote the optimal $\mathcal{F}$ on $G_i$, i.e., $\nexists G' \subseteq G_i$ satisfying $\mathcal{F}_{G'} > \mathcal{F}_i^{opt}$. Then, if $\mathcal{F}_n^{opt}$ is the maximal value of $\{\mathcal{F}_1^{opt}, ..., \mathcal{F}_n^{opt}\}$, $\mathcal{F}_n^{opt}$ must be the optimum in terms of $\mathcal{F}$ on $G$.
**Proof.** Given any two sets of nodes $V_1$ and $V_2$ with no edges connecting them, let $c_1$ and $c_2$ denote the total weight ($\sum S_{i,j} + \sum S_i$) of $[V_1]$ and $[V_2]$, respectively. Assume without loss of generality that $\mathcal{F}_{[V_1]} > \mathcal{F}_{[V_2]}$, i.e., $\frac{c_1}{|V_1|} > \frac{c_2}{|V_2|}$, so that $c_1|V_2| > c_2|V_1|$. Then,
$$\mathcal{F}_{[V_1]} - \mathcal{F}_{[V_1 \cup V_2]} = \frac{c_1}{|V_1|} - \frac{c_1 + c_2}{|V_1| + |V_2|} = \frac{c_1|V_2| - c_2|V_1|}{|V_1|(|V_1| + |V_2|)} > \frac{c_2|V_1| - c_2|V_1|}{|V_1|(|V_1| + |V_2|)} = 0.$$
Thus, for any $V_1$ and $V_2$ that are not connected by any edges, it follows that $\mathcal{F}_{[V_1 \cup V_2]} \leq \max(\mathcal{F}_{[V_1]}, \mathcal{F}_{[V_2]})$ (Conclusion 1).
In $G_n = (V_n, E_n)$, we use $\hat{V}$ to denote the set of nodes satisfying $\hat{V} \subseteq V_n$ and $\mathcal{F}_{[\hat{V}]} = \mathcal{F}_n^{opt}$. Let $V'$ be a set of nodes satisfying $V' \subseteq V$ and $V' \cap \hat{V} = \emptyset$. Now, let us consider two conditions.
First, if $V' \subset V_n$, then $\mathcal{F}_{[V']} \leq \mathcal{F}_{[\hat{V}]}$ and $\mathcal{F}_{[V' \cup \hat{V}]} \leq \mathcal{F}_{[\hat{V}]}$ because $\mathcal{F}_{[\hat{V}]}$ is the optimum on $G_n$.
Second, if $V' \cap V_n = \emptyset$, then $\mathcal{F}_{[V']} \leq \mathcal{F}_{[\hat{V}]}$ and $\mathcal{F}_{[V' \cup \hat{V}]} \leq \mathcal{F}_{[\hat{V}]}$ by Conclusion 1 and because $\mathcal{F}_{[\hat{V}]}$ is the maximum of $\{\mathcal{F}_1^{opt}, ..., \mathcal{F}_n^{opt}\}$.
If $V' \cap V_n \neq \emptyset$, then $V'$ can be divided into two parts conforming with the two conditions stated above.
Therefore, no $V' \subseteq V$ satisfies $\mathcal{F}_{[V']} > \mathcal{F}_{[\hat{V}]}$ or $\mathcal{F}_{[V' \cup \hat{V}]} > \mathcal{F}_{[\hat{V}]}$. We conclude that $\mathcal{F}_n^{opt} = \mathcal{F}_{[\hat{V}]}$ must be the optimum in terms of $\mathcal{F}$ on $G$. □
**Theorem 8.** (Algorithm 2 Guarantee). Given a graph $G = (V, E)$, let $Q^*$ be a subset of nodes maximizing $\mathcal{F}_{[Q^*]}$ in $G$. Let $[Q]$ be the subgraph returned by Algorithm 2 with $\mathcal{F}_{[Q]}$. Then, $\mathcal{F}_{[Q]} \geq \frac{1}{2}\mathcal{F}_{[Q^*]}$.
**Proof.** Consider the optimal set $Q^*$. We know that, $\forall u \in Q^*$, $w(u, [Q^*]) \geq \mathcal{F}_{[Q^*]}$, because if we remove a node $u$ for which
\[ w(u, [Q^*]) < \mathcal{F}_{[Q^*]}, \]
\[
\mathcal{F}' = \frac{|Q^*| \mathcal{F}_{[Q^*]} - w(u, [Q^*])}{|Q^*| - 1} > \frac{|Q^*| \mathcal{F}_{[Q^*]} - \mathcal{F}_{[Q^*]}}{|Q^*| - 1} = \mathcal{F}_{[Q^*]},
\]
which contradicts the definition of \( Q^* \).
Denote the first node that Algorithm 2 removes from \( Q^* \) as \( u_i \), \( u_i \in R \), and denote the node set before Algorithm 2 starts removing \( R \) as \( Q' \). Because \( Q^* \subseteq Q' \), we have \( w(u_i, [Q^*]) \leq w(u_i, [Q']) \). According to Algorithm 2 (line 6), \( w(u_i, [Q']) \leq 2\mathcal{F}_{[Q']} \) (Eq. 10). Additionally, Algorithm 2 returns the best solution when deleting nodes one by one, and so \( \mathcal{F}_{[Q]} \geq \mathcal{F}_{[Q']} \). We conclude that
\[
\mathcal{F}_{[Q]} \geq \mathcal{F}_{[Q']} \geq \frac{w(u_i, [Q'])}{2} \geq \frac{w(u_i, [Q^*])}{2} \geq \frac{\mathcal{F}_{[Q^*]}}{2}. \qquad \square
\]
In summary, let \( \{G_1, ..., G_n\} \) be the subgraphs returned by D-Spot, and \( \{\mathcal{F}_{G_1}, ..., \mathcal{F}_{G_n}\} \) be the corresponding scores. Then, based on Theorems 7 and 8, \( \mathcal{F}^{max} = \max(\mathcal{F}_{G_1}, ..., \mathcal{F}_{G_n}) \) is at least 1/2 of the optimum in terms of \( \mathcal{F} \) on \( G \) (1/2-Approximation guarantee).
### 7 EVALUATION
A series of evaluation experiments were conducted under the following conditions:
**Implementation.** We implemented ISG+D-Spot in Python, and conducted all experiments on a server with two 2.20 GHz Intel(R) CPUs and 64 GB memory.
**Baselines.** We selected several state-of-the-art dense-block detection methods (M-Zoom [30], M-Biz [31], and D-Cube [32]) as the baselines (using their open-source code). To obtain optimal performance, we ran three different density metrics from [32] for each baseline: \( \rho \) (ari), Geometric Average Mass (geo), and Suspiciousness (sus).
**Suspiciousness Score Setting.** For the baselines, we considered a detected block \( B(A_1, ..., A_N, X) \) and let \( \theta = \rho(B, [N]) \). For any unique value \( a \) within \( B \), we then set the suspiciousness score of \( a \) to \( \theta \). If \( a \) occurred in multiple detected dense blocks, we chose the one with the maximal value of \( \theta \). For ISG+D-Spot, given a detected subgraph \( \hat{G} = (\hat{V}, \hat{E}) \), we set the suspiciousness score of each unique value \( a \in \hat{V} \) to \( w(a, \hat{G}) \) (Eq. 9). Finally, we evaluated the ranking of the suspiciousness scores of unique values using the standard area under the ROC curve (AUC) metric.
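The AUC over a suspiciousness ranking equals the probability that a randomly chosen fraudulent entity outranks a randomly chosen legitimate one, which a short sketch can compute without external libraries (ties counted as half):

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of (fraud, legit) pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A ranking that places every fraudulent entity above every legitimate one scores 1.0, while a random ranking scores about 0.5.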
### 7.1 Datasets
Table 3 summarizes the datasets used in the experiments.
**Synthetic** is a series of datasets we synthesized using the same method as in [12]. First, we generated random seven-dimensional relations \( R(A_1, ..., A_7, X) \), in which \( |R| = 10000 \) and the size of \( R \) is \( 1000 \times 500 \times ... \times 500 \). In \( R \), we assume that \( A_1 \) corresponds to users and the other six dimensions are features. To specifically check the detection performance of each method on the hidden-densest block, we injected a dense block \( B(A_1, ..., A_7, X) \) into \( R \) five separate times, with each injection assigned a different configuration, generating five datasets. For \( B \), \( |B_1| = 50 \) and \( |B| = 500 \). We introduce the parameter \( \lambda \), which denotes the number of feature dimensions on which \( B \) is the densest. For example, when \( \lambda = 1 \), the size of \( B \) is \( 50 \times 12 \times 25 \times ... \times 25 \); when \( \lambda = 5 \), the size of \( B \) is \( 50 \times 12 \times ... \times 12 \times 25 \). Obviously, \( \rho(B, [7]) > \rho(R, [7]) \), and \( B \) is the hidden-densest block when \( \lambda \) is small. Finally, we labeled the users within \( B \) as “fraud”.
**Amazon** [15]. AmaOffice, AmaBaby, and AmaTools are three collections of Amazon reviews of office products, baby-related products, and tool products, respectively. They can be modeled by the relation \( R(user, product, timestamp, X) \), where each entry \( (u, p, ts, x) \) indicates a review \( x \) that user \( u \) gave product \( p \) at time \( ts \). According to specific cases of fraud discovered by previous studies [12, 38], fraudulent groups usually exhibit suspicious synchronized behavior in social networks; for instance, a large group of users may review the same group of products over a short period. Thus, we use a similar method to [12, 30, 32, 38]: we represent the synchronized behavior by a dense block \( B(user, product, timestamp, X) \) of size \( 200 \times 30 \times 1 \). In total, we injected four such blocks \( B \), each with a mass randomly selected from \([1000, 2000]\). The users in the injected blocks were labeled as “malicious.”
**Yelp** [25]. The YelpChi, YelpNYU, and YelpZip datasets [21, 25] contain restaurant reviews submitted to Yelp. They can be represented by the relation \( R(user, restaurant, date, X) \), where each entry \( t = (u, r, d, x) \) denotes a review \( x \) by user \( u \) of restaurant \( r \) on date \( d \). Note that all three datasets include labels indicating whether or not each review is fake. The detection of malicious reviews or users is studied in [25] using text information. In these datasets, we focus on detecting fraudulent restaurants that purchase fake reviews using the three-dimensional features. Intuitively, the more fake reviews a restaurant has, the more suspicious it is. As some legitimate users have the potential of reviewing fraudulent restaurants, we label a restaurant as “fraudulent” if it has received more than 40 fake reviews.
**DARPA** [19] was collected by the Cyber Systems and Technology Group in 1998 and records network attacks in TCP dumps. The data has the form \( R(sourceIP, targetIP, timestamp, X) \), where each entry \( (IP_1, IP_2, ts) \) represents a connection made from \( IP_1 \) to \( IP_2 \) at time \( ts \). The dataset includes labels indicating whether or not each connection is malicious. In practice, the punishment for malicious connections is to block the corresponding IP address, so we compared the detection performance on suspicious IP addresses. We labeled an IP address as suspicious if it was involved in any malicious connection.
**AirForce** [1] was used for the KDD Cup 1999 and has also been considered in [30, 32]. This dataset includes a wide variety of simulated intrusions in a military network environment, but does not contain any specific IP addresses. According to the cardinality of each dimension, we chose the top-2 features and built the relation \( R(src\_bytes, dst\_bytes, connections, X) \), where \( src\_bytes \) denotes the number of data bytes sent from source to destination and \( dst\_bytes \) the number of data bytes sent from destination to source. The target dimension \( U \) was set to connections. This dataset also includes labels indicating whether or not each connection is malicious.
### 7.2 Speed and Accuracy of D-Spot
First, we measured the speed and accuracy with which D-Spot detected dense subgraphs in real-world graphs. We compared the
Table 3: Multi-dimensional datasets used in our experiments
| | Synthetic | DARPA | AirForce | YelpChi | YelpNYU | YelpZip | AmaOffice | AmaBaby | AmaTools |
|----------------|-----------|-------|----------|---------|---------|---------|-----------|---------|----------|
| Entries (Mass) | 10K | 4.6M | 30K | 67K | 359K | 1.14M | 53K | 160K | 134K |
| Dimensions | 7 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
the performance of D-Spot with that of another dense-subgraph detection algorithm, Fraudar [11], an extension of [6] that maximizes its density metric by greedily deleting nodes one by one. We used the three Amazon datasets and applied D-Spot and Fraudar to the same bipartite graph built on the first two dimensions, users and products, where each edge in the graph represents an entry. We measured the wall-clock time (averaged over three runs) required to detect the top-4 subgraphs. Figure 2 illustrates the runtime and performance of the two algorithms.
Figure 2: Comparison of D-Spot and Fraudar using the Amazon datasets.
D-Spot provides the best trade-off between speed and accuracy. Specifically, D-Spot is up to $11 \times$ faster than Fraudar. This supports our claim in Section 5.2 that the worst-case time complexity of D-Spot ($O(|V|^2 + |E|)$) is too pessimistic.
7.3 Effectiveness of ISG+D-Spot
This section illustrates the effectiveness of ISG+D-Spot for detecting fraudulent entities in multi-dimensional tensors. ISG+D-Spot substantially outperforms the baseline methods (Fraudar is not included in the baselines, as it only works on bipartite graphs).
**Synthetic.** Table 4 presents the detection performance of each method for the hidden-densest block. We assume that the injected block $\mathcal{B}$ is the hidden-densest block when $\lambda \leq 3$. ISG+D-Spot achieves near-perfect performance even when $\lambda = 1$, because each instance of value sharing in $\mathcal{B}$ is accurately captured by the ISG, and D-Spot provides a higher accuracy guarantee than the baselines (Sections 6.2 and 6.3). When $\lambda > 3$, the performance of every method improves because the density of $\mathcal{B}$ increases with $\lambda$.
**Amazon.** Table 5 presents the results for catching suspicious users by detecting the top-4 dense blocks on the Amazon datasets. ISG+D-Spot detects synchronized behavior accurately. The typical attack scenario involves a mass of fraudulent users creating massive numbers of fake reviews for a comparatively small group of products over a short period. This behavior is represented by the injected blocks. ISG+D-Spot exhibits robust and near-perfect performance. However, the other baselines produce worse performance on the AmaOffice and AmaBaby datasets, even with the multiple supported metrics.
**Yelp.** Table 6 reports the (highest) accuracy with which collusive restaurants were detected by each method. In summary, using ISG+D-Spot results in the highest accuracy across all three datasets, because D-Spot applies a higher theoretical bound to the ISG.
**Table 4: Performance (AUC) on the Synthetic datasets**
| | $\lambda = 1$ | $\lambda = 2$ | $\lambda = 3$ | $\lambda = 4$ | $\lambda = 5$ |
|------------------|---------------|---------------|---------------|---------------|---------------|
| $M$-Zoom (ari) | 0.5005 | 0.5005 | 0.5000 | 0.5489 | 0.6567 |
| $M$-Zoom (geo) | 0.5005 | 0.5005 | 0.5000 | 0.6789 | 0.7543 |
| $M$-Zoom (sus) | 0.7404 | 0.7715 | 0.8238 | 0.9685 | 0.9767 |
| $M$-Biz (ari) | 0.5005 | 0.5005 | 0.5005 | 0.5005 | 0.6834 |
| $M$-Biz (geo) | 0.5005 | 0.5005 | 0.5005 | 0.6235 | 0.7230 |
| $M$-Biz (sus) | 0.6916 | 0.7638 | 0.8067 | 0.9844 | **0.9948** |
| $D$-Cube (ari) | 0.5005 | 0.5005 | 0.5005 | 0.5670 | 0.6432 |
| $D$-Cube (geo) | 0.5005 | 0.5005 | 0.5340 | 0.6876 | 0.6542 |
| $D$-Cube (sus) | **0.8279** | **0.8712** | **0.9148** | **0.9909** | 0.9725 |
| ISG+D-Spot | **0.9843** | **0.9957** | **0.9949** | **1.0000** | **1.0000** |
**Table 5: Performance (AUC) on the Amazon datasets**
| | AmaOffice | AmaBaby | AmaTools |
|------------------|-----------|---------|----------|
| $M$-Zoom (ari) | 0.6795 | 0.5894 | 0.8689 |
| $M$-Zoom (geo) | 0.8049 | 0.8049 | 1.0000 |
| $M$-Zoom (sus) | 0.7553 | 0.6944 | 0.6503 |
| $M$-Biz (ari) | 0.6328 | 0.5461 | 0.8384 |
| $M$-Biz (geo) | 0.8049 | **0.8339** | 1.0000 |
| $M$-Biz (sus) | 0.7478 | 0.6944 | 0.6503 |
| $D$-Cube (ari) | 0.7127 | 0.5956 | 0.6250 |
| $D$-Cube (geo) | **0.8115** | 0.7561 | 1.0000 |
| $D$-Cube (sus) | 0.7412 | 0.6190 | 0.5907 |
| ISG+D-Spot | **0.8358** | **0.9995** | **1.0000** |
**Table 6: Performance (AUC) on the Yelp datasets**
| | YelpChi | YelpNYU | YelpZip |
|------------------|---------|---------|---------|
| $M$-Zoom (ari) | 0.9174 | 0.6669 | 0.8859 |
| $M$-Zoom (geo) | 0.9752 | 0.8826 | 0.9274 |
| $M$-Zoom (sus) | 0.9831 | **0.9451** | 0.9426 |
| $M$-Biz (ari) | 0.9174 | 0.6669 | 0.8863 |
| $M$-Biz (geo) | 0.9757 | 0.8826 | 0.9271 |
| $M$-Biz (sus) | **0.9831** | 0.9345 | **0.9403** |
| $D$-Cube (ari) | 0.5000 | 0.5000 | 0.9033 |
| $D$-Cube (geo) | 0.9793 | 0.9223 | 0.9376 |
| $D$-Cube (sus) | 0.9810 | 0.9007 | 0.9365 |
| ISG+D-Spot | **0.9875** | **0.9546** | **0.9529** |
**DARPA.** Table 7 lists the accuracy of each method for detecting the source IP and the target IP. ISG+D-Spot assigns each IP address
a specific suspiciousness score. We chose a detected IP with the highest score and found that the IP participated in more than 1M malicious connections. The top ten suspicious IPs were all involved in more than 10k malicious connections. Thus, using ISG+D-Spot would enable us to crack down on these malicious IP addresses in the real world.
**Table 7: Performance (AUC) on the DARPA dataset**
| | U = source IP | U = target IP |
|---------------|---------------|---------------|
| M-Zoom (ari) | 0.5649 | 0.5584 |
| M-Zoom (geo) | 0.7086 | **0.5714** |
| M-Zoom (sus) | 0.6989 | 0.3878 |
| M-Biz (ari) | 0.5649 | 0.5584 |
| M-Biz (geo) | **0.7502** | 0.5679 |
| M-Biz (sus) | 0.6989 | 0.3878 |
| D-Cube (ari) | 0.3728 | 0.5323 |
| D-Cube (geo) | 0.4083 | 0.3926 |
| D-Cube (sus) | 0.4002 | 0.3720 |
| ISG+D-Spot | **0.7561** | **0.8181** |
**AirForce.** As this dataset does not contain IP addresses, we set the target dimension $U = \text{connections}$. We randomly sampled 30k connections from the dataset [1] three times. Table 8 lists the accuracy of each method on samples 1–3. Malicious connections form dense blocks in the two-dimensional features. The results demonstrate that ISG+D-Spot effectively detected the densest blocks.
**Table 8: Performance (AUC) on the AirForce dataset**
| | Sample 1 | Sample 2 | Sample 3 |
|---------------|----------|----------|----------|
| M-Zoom (ari) | 0.8696 | 0.8675 | 0.8644 |
| M-Zoom (geo) | 0.9693 | 0.9693 | 0.9738 |
| M-Zoom (sus) | 0.9684 | 0.9683 | 0.9726 |
| M-Biz (ari) | 0.9038 | 0.8848 | 0.8885 |
| M-Biz (geo) | 0.9694 | 0.9693 | 0.9741 |
| M-Biz (sus) | 0.9684 | 0.9683 | 0.9726 |
| D-Cube (ari) | **0.9824** | **0.9823** | 0.9851 |
| D-Cube (geo) | 0.9695 | 0.9691 | **0.9862** |
| D-Cube (sus) | 0.9697 | 0.9692 | 0.9737 |
| ISG+D-Spot | **0.9835** | **0.9824** | **0.9877** |
### 7.4 Scalability
As mentioned in Section 6.1, the ISG $G$ built from real-world tensors is typically sparse, because value sharing should only appear conspicuously among fraudulent entities. We built $G$ on the three Amazon datasets (details in Figure 3). The edge density of $G$ is quite low (below 0.06) on all datasets, which indicates that the worst case discussed in Section 6.1 rarely occurs. Figure 3 also reports the runtime of ISG+D-Spot on the three Amazon datasets, where the number of edges was varied by subsampling entries in each dataset. In practice, $|E|$ grows near-linearly with the mass of the dataset, and because the time complexity of ISG+D-Spot is linear in $|E|$, ISG+D-Spot scales near-linearly with the mass of the dataset, as Figure 3 confirms.

### 7.5 Feature Prioritization
This section demonstrates that ISG+D-Spot is more robust to noisy features than existing approaches. ISG automatically weighs each feature and continuously accumulates value sharing in one scan of the tensor, and D-Spot then finds the entities with the greatest amount of value sharing. We conducted the following experiment to demonstrate this.
**Registration** is a dataset derived from an e-commerce company, in which each record contains two crucial features, IP subnet and phone prefix, and three noisy features, IP city, phone city, and timestamp. The dataset also includes labels showing whether or not an account is a "zombie" account. Thus, it can be formulated as $R(\text{accounts}, IP, \text{phone}, IP \text{ city}, \text{phone city}, \text{timestamp}, X)$. To compare detection performance for malicious accounts, we applied each method to variants of $R$ obtained by successively appending 1–5 of these features to $R(\text{accounts}, X)$.
**Table 9: Performance (AUC) on the Registration dataset. ‘C’ represents ‘crucial feature’ and ‘N’ represents ‘noisy feature’**
| | 1C | 2C | 2C+1N | 2C+2N | 2C+3N |
|---------------|----|----|-------|-------|-------|
| M-Zoom (ari) | 0.5000 | 0.5000 | 0.5031 | 0.5000 | 0.5430 |
| M-Zoom (geo) | 0.7676 | 0.8880 | **0.8827** | **0.8744** | **0.8439** |
| M-Zoom (sus) | 0.7466 | 0.8328 | 0.4009 | 0.4878 | 0.4874 |
| M-Biz (ari) | 0.5000 | 0.5000 | 0.5000 | 0.5000 | 0.5004 |
| M-Biz (geo) | **0.7677** | 0.8842 | 0.8827 | 0.8744 | 0.8439 |
| M-Biz (sus) | 0.7466 | 0.8328 | 0.4009 | 0.4878 | 0.4874 |
| D-Cube (ari) | 0.7073 | 0.8189 | 0.8213 | 0.8295 | 0.7987 |
| D-Cube (geo) | 0.7073 | **0.9201** | 0.8586 | 0.8312 | 0.7324 |
| D-Cube (sus) | 0.7522 | 0.8956 | 0.7877 | 0.7642 | 0.7080 |
| ISG+D-Spot | **0.7699** | **0.9946** | **0.9935** | **0.9917** | **0.9859** |
Table 9 gives the variation of each method with regard to the added noisy features (3–5 dimensions). As each account only possesses one entry, R is quite sparse. We found that existing methods usually miss small-scale instances of value sharing because their density is close to the legitimate range on R. For example, a 51-member group sharing a single IP subnet was missed by the baseline methods. However, ISG amplifies each instance of value sharing through its information-theoretic and graph features, allowing D-Spot to accurately capture fraudulent entities.
### 8 CONCLUSIONS AND FUTURE WORK
In this paper, we reduced dense-block detection to dense-subgraph mining by modeling a tensor as an ISG. Additionally, we proposed D-Spot, an algorithm that detects multiple dense subgraphs, is faster than existing methods, and can be computed in parallel. In future work, ISG+D-Spot will be implemented on Apache Spark [39] to support very large tensors.
REFERENCES
[1] 1999. Kdd cup 1999 data. http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html.
[2] Shifu Hou, Yanfang Ye, Yangqiu Song, and Melih Abdulhayoglu. 2017. HinDroid: An Intelligent Android Malware Detection System Based on Structured Heterogeneous Information Network. In ACM SIGKDD. 1457–1466.
[3] Leman Akoglu, Rishi Chandy, and Christos Faloutsos. 2013. Opinion Fraud Detection in Online Reviews by Network Effects. In ICWSM. The AAAI Press.
[4] David S. Anderson, Chris Fleizach, Stefan Savage, and Geoffrey M. Voelker. 2007. Spamscatter: characterizing internet scam hosting infrastructure. In USENIX Security Symposium. 1132–1141.
[5] Qiang Cao, Xiaowei Yang, Jieqi Yu, and Christopher Palow. 2014. Uncovering Large Groups of Active Malicious Accounts in Online Social Networks. In ACM CCS. 477–488.
[6] Moses Charikar. 2000. Greedy approximation algorithms for finding dense components in a graph. In International Workshop on Approximation Algorithms for Combinatorial Optimization. Springer, 84–95.
[7] Jie Chen and Yousef Saad. 2012. Dense Subgraph Extraction with Application to Community Detection. IEEE TKDE 24, 7 (2012), 1216–1230.
[8] Zoltán Gyöngyi, Hector Garcia-Molina, and Jan Pedersen. 2004. Combating web spam with TrustRank. In VLDB. 576–587.
[9] Saptarshi Ghosh, Bimal Viswanath, Farshad Kooti, Naveen Kumar Sharma, Gautam Korlakai, Fabricio Benevenuto, Niloy Ganguly, and Krishna Phani Gummadi. 2012. Understanding and combating link farming in the Twitter social network. In WWW.
[10] A. V. Goldberg. 1984. Finding a Maximum Density Subgraph. Technical Report.
[11] Bryan Hooi, Hyun Ah Song, Alex Beutel, Neil Shah, Kijung Shin, and Christos Faloutsos. 2016. FRAUDAR: Bounding Graph Fraud in the Face of Camouflage. In ACM SIGKDD. 895–904.
[12] Meng Jiang, Alex Beutel, Peng Cui, Bryan Hooi, Shiqiang Yang, and Christos Faloutsos. 2016. Spotting Suspicious Behaviors in Multimodal Data: A General Metric and Algorithms. IEEE TKDE 28, 8 (2016), 2187–2200.
[13] Meng Jiang, Peng Cui, Alex Beutel, Christos Faloutsos, and Shiqiang Yang. 2014. CatchSync: catching synchronized behavior in large directed graphs. In ACM SIGKDD. 941–950.
[14] Meng Jiang, Peng Cui, Alex Beutel, Christos Faloutsos, and Shiqiang Yang. 2016. Inferring lockstep behavior from connectivity pattern in large graphs. Knowledge & Information Systems 48, 2 (2016), 399–428.
[15] Julian McAuley. [n. d.]. Amazon product data. http://jmcauley.ucsd.edu/data/amazon.
[16] Samir Khuller and Barna Saha. 2009. On Finding Dense Subgraphs. In Automata, Languages and Programming. International Colloquium, ICALP 2009, Rhodes, Greece, July 5-12, 2009, Proceedings. 597–608.
[17] Tamara G. Kolda and Brett W. Bader. 2009. Tensor Decompositions and Applications. Siam Review 51, 3 (2009), 455–500.
[18] Victor E. Lee, Ruan Ning, Ruoming Jin, and Charu Aggarwal. 2010. A Survey of Algorithms for Dense Subgraph Discovery. 303–336 pages.
[19] R. P. Lippmann, D. J. Fried, I. Graf, J. W. Haines, K. R. Kendall, D. McClung, D. Weber, S. E. Webster, D. Wyschogrod, R. K. Cunningham, and M. A. Zissman. 2000. Evaluating intrusion detection systems: the 1998 DARPA off-line intrusion detection evaluation. In Proceedings DARPA Information Survivability Conference and Exposition, DISCEX'00, Vol. 2. 12–26. https://doi.org/10.1109/DISCEX.2000.821506
[20] Koji Maruhashi, Fan Guo, and Christos Faloutsos. 2011. MultiAspectForensics: Pattern Mining on Large-Scale Heterogeneous Networks with Tensor Analysis. In ASONAM. 203–210.
[21] Arjun Mukherjee, Vivek Venkataraman, Bing Liu, and Natalie Glance. 2013. What yelp fake review filter might be doing?. In ICWSM 2013. AAAI Press, 409–418.
[22] Shashank Pandit, Duen Horng Chau, Samuel Wang, and Christos Faloutsos. 2007. Netprobe: a fast and scalable system for fraud detection in online auction networks. In WWW. 201–210.
[23] Evangelos E. Papalexakis, Christos Faloutsos, and Nicholas D. Sidiropoulos. 2012. ParCube: sparse parallelizable tensor decompositions. In PKDD. 521–536.
[24] B. Aditya Prakash, Ashwin Sridharan, Mukund Seshadri, Sridhar Machiraju, and Christos Faloutsos. 2010. EigenSpokes: Surprising Patterns and Scalable Community Chipping in Large Graphs. In PAKDD.
[25] Shebuti Rayana and Leman Akoglu. 2015. Collective Opinion Spam Detection: Bridging Review Networks and Metadata. In ACM SIGKDD.
[26] Barna Saha, Allison Hoch, Samir Khuller, Louisa Raschid, and Xiao Ning Zhang. 2010. Dense Subgraphs with Restrictions and Applications to Gene Annotation Graphs. Springer Berlin Heidelberg. 456–472 pages.
[27] Neil Shah, Alex Beutel, Brian Gallagher, and Christos Faloutsos. 2014. Spotting Suspicious Link Behavior with fBox: An Adversarial Perspective. In IEEE ICDM. 959–964.
[28] C. E. Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal 27, 4 (1948), 379–423.
[29] K. Shin, T. Eliassi-Rad, and C. Faloutsos. 2017. CoreScope: Graph Mining Using k-Core Analysis - Patterns, Anomalies and Algorithms. In ICDM. Vol. 00. 469–478.
[30] Kijung Shin, Bryan Hooi, and Christos Faloutsos. 2016. M-Zoom: Fast Dense-Block Detection in Tensors with Quality Guarantees. In ECML PKDD. 264–280.
[31] Kijung Shin, Bryan Hooi, and Christos Faloutsos. 2018. Fast, Accurate, and Flexible Algorithms for Dense Subtensor Mining. ACM Transactions on Knowledge Discovery from Data 12, 3 (2018), 1–30.
[32] Kijung Shin, Bryan Hooi, Jisu Kim, and Christos Faloutsos. 2017. D-Cube: Dense-Block Detection in Terabyte-Scale Tensors. In WSDM. 681–689.
[33] Kijung Shin and U. Kang. 2015. Distributed Methods for High-Dimensional and Large-Scale Tensor Factorization. In IEEE ICDM. 989–994.
[34] Ming-Yang Su. 2011. Real-time anomaly detection systems for Denial-of-Service attacks by weighted k-nearest-neighbor classifiers. Expert Systems with Applications 38, 4 (2011), 3492–3498.
[35] Hua Tang and Zhuolin Cao. 2009. Machine Learning-based Intrusion Detection Algorithms. Journal of Computational Information Systems (2009), 1825–1831.
[36] Kurt Thomas, Dmytro Iatskiv, Elie Bursztein, Tadek Pietraszek, Chris Grier, and Damon McCoy. 2014. Dialing back abuse on phone verified accounts. In Proceedings of the 2014 ACM SIGSAC. ACM, 465–476.
[37] Yining Wang, Hsiao-Yu Tung, Alexander Smola, and Animashree Anandkumar. 2015. Fast and Guaranteed Tensor Decomposition via Sketching. In NIPS.
[38] Alex Beutel, Wanhong Xu, Venkatesan Guruswami, Christopher Palow, and Christos Faloutsos. 2013. CopyCatch: stopping group attacks by spotting lockstep behavior in social networks. In WWW. 119–130.
[39] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy Mccauley, Michael J. Franklin, Scott Shenker, and Ion Stoica. 2012. Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing. In Usenix NSDI. 2–2.
Kernel Design using Boosting
Koby Crammer Joseph Keshet Yoram Singer
School of Computer Science & Engineering
The Hebrew University, Jerusalem 91904, Israel
{kobics,jkeshet,singer}@cs.huji.ac.il
Abstract
The focus of the paper is the problem of learning kernel operators from empirical data. We cast the kernel design problem as the construction of an accurate kernel from simple (and less accurate) base kernels. We use the boosting paradigm to perform the kernel construction process. To do so, we modify the booster so as to accommodate kernel operators. We also devise an efficient weak-learner for simple kernels that is based on generalized eigenvector decomposition. We demonstrate the effectiveness of our approach on synthetic data and on the USPS dataset. On the USPS dataset, the performance of the Perceptron algorithm with learned kernels is systematically better than with a fixed RBF kernel.
1 Introduction and Problem Setting
The last decade brought a voluminous amount of work on the design, analysis and experimentation of kernel machines. Algorithms based on kernels can be used for various machine learning tasks such as classification, regression, ranking, and principal component analysis. The most prominent learning algorithm that employs kernels is the Support Vector Machine (SVM) [1, 2], designed for classification and regression. A key component in a kernel machine is a kernel operator, which computes for any pair of instances their inner-product in some abstract vector space. Intuitively and informally, a kernel operator is a means for measuring similarity between instances. Almost all of the work that employed kernel operators concentrated on various machine learning problems that involve a predefined kernel. A typical approach when using kernels is to choose a kernel before learning starts. Examples of popular predefined kernels are the Radial Basis Function and the polynomial kernels (see for instance [1]). Despite the simplicity of modifying a learning algorithm to a "kernelized" version, the success of such algorithms is not well understood yet. More recently, special efforts have been devoted to crafting kernels for specific tasks such as text categorization [3] and protein classification problems [4].
Our work attempts to give a computational alternative to predefined kernels by learning kernel operators from data. We start with a few definitions. Let $\mathcal{X}$ be an instance space. A kernel is an inner-product operator $K : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$. An explicit way to describe $K$ is via a mapping $\phi : \mathcal{X} \rightarrow \mathcal{H}$ from $\mathcal{X}$ to an inner-product space $\mathcal{H}$ such that $K(x, x') = \phi(x) \cdot \phi(x')$. Given a kernel operator and a finite set of instances $S = \{x_i, y_i\}_{i=1}^m$, the kernel matrix (a.k.a the Gram matrix) is the matrix of all possible inner-products of pairs from $S$, $K_{i,j} = K(x_i, x_j)$. We therefore refer to the general form of $K$ as the kernel operator and to the application of the kernel operator to a set of pairs of instances as the kernel matrix.
The specific setting of kernel design we consider assumes that we have access to a base kernel learner and that we are given a target kernel $K^*$ manifested as a kernel matrix on a set of examples. Upon calling the base kernel learner it returns a kernel operator denoted $K_j$. The goal thereafter is to find a weighted combination of kernels $\hat{K}(x, x') = \sum_j \alpha_j K_j(x, x')$ that is similar, in a sense that will be defined shortly, to the target kernel, $\hat{K} \sim K^*$. Cristianini et al. [5] in their pioneering work on kernel target alignment employed as the notion of similarity the inner-product between the kernel matrices, $\langle K, K' \rangle_F = \sum_{i,j=1}^{m} K(x_i, x_j) K'(x_i, x_j)$. Given this definition, they defined the kernel-similarity, or alignment, to be the above inner-product normalized by the norm of each kernel, $\hat{A}(S, \hat{K}, K^*) = \langle \hat{K}, K^* \rangle_F / \sqrt{\langle \hat{K}, \hat{K} \rangle_F \langle K^*, K^* \rangle_F}$, where $S$ is, as above, a finite sample of $m$ instances. Put another way, the kernel alignment Cristianini et al. employed is the cosine of the angle between the kernel matrices, where each matrix is "flattened" into a vector of dimension $m^2$. Therefore, this definition implies that the alignment is bounded above by 1 and can attain this value iff the two kernel matrices are identical. Given a (column) vector of $m$ labels $y$ where $y_i \in \{-1, +1\}$ is the label of the instance $x_i$, Cristianini et al. used the outer-product of $y$ as the target kernel, $K^* = yy^T$. Therefore, an optimal alignment is achieved if $\hat{K}(x_i, x_j) = y_i y_j$. Clearly, if such a kernel is used for classifying instances from $\mathcal{X}$, then the kernel itself suffices to construct an excellent classifier $f : \mathcal{X} \rightarrow \{-1, +1\}$ by setting $f(x) = \text{sign}(y_i K(x_i, x))$, where $(x_i, y_i)$ is any instance-label pair. Cristianini et al.
then devised a procedure that works with both labelled and unlabelled examples to find a Gram matrix which attains a good alignment with $K^*$ on the labelled part of the matrix. While this approach can clearly construct powerful kernels, a few problems arise from the notion of kernel alignment they employed. For instance, a kernel operator such that $\text{sign}(K(x_i, x_j))$ is equal to $y_i y_j$ but whose magnitude, $|K(x_i, x_j)|$, is not necessarily 1 might achieve a poor alignment score, even though it constitutes a classifier whose empirical loss is zero. Furthermore, the task of finding a good kernel becomes rather tricky when it is not always possible to find a kernel whose sign on each pair of instances equals the product of the labels (termed the soft-margin case in [5, 6]). We thus propose a different approach which attempts to overcome some of the difficulties above.
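In code, the alignment of Cristianini et al. is a direct translation of the formula above (a short NumPy sketch; the function name is ours):

```python
import numpy as np

def alignment(K1, K2):
    """Cosine of the angle between two kernel matrices, each
    flattened into a vector of dimension m^2."""
    inner = np.sum(K1 * K2)  # Frobenius inner-product <K1, K2>_F
    return inner / np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))

# A kernel matrix identical to the target K* = y y^T attains alignment 1.
y = np.array([1.0, 1.0, -1.0])
K_star = np.outer(y, y)
print(alignment(K_star, K_star))
```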
Like Cristianini et al. we assume that we are given a set of labelled instances $S = \{(x_i, y_i) | x_i \in \mathcal{X}, y_i \in \{-1, +1\}, i = 1, \ldots, m\}$. We are also given a set of unlabelled examples $\tilde{S} = \{\tilde{x}_i\}_{i=1}^{\tilde{m}}$. If such a set is not provided we can simply use the labelled instances (without the labels themselves) as the set $\tilde{S}$. The set $\tilde{S}$ is used for constructing the primitive kernels that are combined to constitute the learned kernel $\hat{K}$. The labelled set is used to form the target kernel matrix and its instances are used for evaluating the learned kernel $\hat{K}$. This approach, known as transductive learning, was suggested in [5, 6] for kernel alignment tasks when the distribution of the instances in the test data is different from that of the training data. This setting is particularly handy in datasets where the test data was collected in a different scheme than the training data. We next discuss the notion of kernel goodness employed in this paper. This notion builds on the objective function that several variants of boosting algorithms maintain [7, 8]. We therefore first discuss in brief the form of boosting algorithms for kernels.
2 Using Boosting to Combine Kernels
Numerous interpretations of AdaBoost and its variants cast the boosting process as a procedure that attempts to minimize, or make small, a continuous bound on the classification error (see for instance [9, 7] and the references therein). A recent work by Collins et al. [8] unifies the boosting process for two popular loss functions, the exponential-loss (denoted henceforth as ExpLoss) and logarithmic-loss (denoted as LogLoss), that bound the empirical classification error.

Input: Labelled and unlabelled sets of examples: $S = \{(x_i, y_i)\}_{i=1}^m$ ; $\tilde{S} = \{\tilde{x}_i\}_{i=1}^{\tilde{m}}$
Initialize: $K \leftarrow 0$ (all zeros matrix)
For $t = 1, 2, \ldots, T$:
- Calculate distribution over pairs $1 \leq i, j \leq m$:
$$D_t(i, j) = \begin{cases}
\exp(-y_i y_j K(x_i, x_j)) & \text{ExpLoss} \\
1/(1 + \exp(y_i y_j K(x_i, x_j))) & \text{LogLoss}
\end{cases}$$
- Call base-kernel-learner with $(D_t, S, \tilde{S})$ and receive $K_t$
- Calculate:
$$S_t^+ = \{(i, j) \mid y_i y_j K_t(x_i, x_j) > 0\} \quad ; \quad S_t^- = \{(i, j) \mid y_i y_j K_t(x_i, x_j) < 0\}$$
$$W_t^+ = \sum_{(i, j) \in S_t^+} D_t(i, j) |K_t(x_i, x_j)| \quad ; \quad W_t^- = \sum_{(i, j) \in S_t^-} D_t(i, j) |K_t(x_i, x_j)|$$
- Set: $\alpha_t = \frac{1}{2} \ln \left( \frac{W_t^+}{W_t^-} \right)$ ; $K \leftarrow K + \alpha_t K_t$.
Return: kernel operator $K : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$
Figure 1: The skeleton of the boosting algorithm for kernels.
Given the prediction of a classifier $f$ on an instance $x$ and a label $y \in \{-1, +1\}$, the ExpLoss and the LogLoss are defined as,
$$\text{ExpLoss}(f(x), y) = \exp(-y f(x))$$
$$\text{LogLoss}(f(x), y) = \log(1 + \exp(-y f(x))) .$$
Collins et al. described a single algorithm for the two losses above that can be used within the boosting framework to construct a strong-hypothesis, which is a classifier $f(x)$. This classifier is a weighted combination of (possibly very simple) base classifiers. (In the boosting framework, the base classifiers are referred to as weak-hypotheses.) The strong-hypothesis is of the form $f(x) = \sum_{t=1}^{T} \alpha_t h_t(x)$. Collins et al. discussed a few ways to select the weak-hypotheses $h_t$ and to find a good set of weights $\alpha_t$. Our starting point in this paper is the first sequential algorithm from [8], which enables the construction of weak-hypotheses on-the-fly. We would like to note, however, that it is possible to use other variants of boosting to design kernels.
In order to use boosting to design kernels we extend the algorithm to operate over pairs of instances. Building on the notion of alignment from [5, 6], we say that the inner-product of $x_1$ and $x_2$ is aligned with the labels $y_1$ and $y_2$ if $\text{sign}(K(x_1, x_2)) = y_1 y_2$. Furthermore, we would like to make the magnitude of $K(x, x')$ to be as large as possible. We therefore use one of the following two alignment losses for a pair of examples $(x_1, y_1)$ and $(x_2, y_2)$,
$$\text{ExpLoss}(K(x_1, x_2), y_1 y_2) = \exp(-y_1 y_2 K(x_1, x_2))$$
$$\text{LogLoss}(K(x_1, x_2), y_1 y_2) = \log(1 + \exp(-y_1 y_2 K(x_1, x_2))) .$$
Put another way, we view a pair of instances as a single example and cast the pairs of instances that attain the same label as positively labelled examples, while pairs of opposite labels are cast as negatively labelled examples. Clearly, this approach can be applied to both losses. In the boosting process we therefore maintain a distribution over pairs of instances. The weight of each pair reflects how difficult it is to predict whether the labels of the two instances are the same or different. The core boosting algorithm follows similar lines to boosting algorithms for classification. The pseudo-code of the booster is given in Fig. 1. It is an adaptation to the problem of kernel design of the sequential-update algorithm from [8]. As with other boosting algorithms, the base-learner, which in our case is in charge of returning a good kernel with respect to the current distribution, is left unspecified. We therefore turn our attention to the algorithmic implementation of the base-learning algorithm for kernels.
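For concreteness, the booster of Fig. 1 can be sketched in a few lines of NumPy. This is a minimal sketch operating directly on the $m \times m$ kernel matrix; the base-kernel learner is passed in as a callback, and the function name and the fallback $\alpha_t = 1$ when $W_t^- = 0$ are our own choices, not part of the paper:

```python
import numpy as np

def boost_kernel(y, base_learner, T=10, loss="ExpLoss"):
    """Maintain a weight D_t(i, j) over pairs of labelled instances,
    call the base-kernel learner, and accumulate K <- K + alpha_t * K_t
    (K here is the m x m kernel matrix on the labelled sample)."""
    m = len(y)
    yy = np.outer(y, y)                      # y_i * y_j for every pair
    K = np.zeros((m, m))
    alphas = []
    for _ in range(T):
        if loss == "ExpLoss":
            D = np.exp(-yy * K)              # weight of each pair (i, j)
        else:                                # LogLoss weight: 1 / (1 + e^{margin})
            D = 1.0 / (1.0 + np.exp(yy * K))
        K_t = base_learner(D, y)             # base learner returns an m x m matrix
        margins = yy * K_t
        W_plus = np.sum((D * np.abs(K_t))[margins > 0])
        W_minus = np.sum((D * np.abs(K_t))[margins < 0])
        alpha = 0.5 * np.log(W_plus / W_minus) if W_minus > 0 else 1.0
        K += alpha * K_t
        alphas.append(alpha)
    return K, alphas
```

As a sanity check, a base learner that returns the target $yy^T$ makes every pair margin positive, so the booster simply accumulates copies of the target kernel.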
3 Learning Base Kernels
The base kernel learner is provided with a training set $S$ and a distribution $D_t$ over pairs of instances from the training set. It is also provided with a set of unlabelled examples $\tilde{S}$. Without any knowledge of the topology of the space of instances a learning algorithm is likely to fail. Therefore, we assume the existence of an initial inner-product over the input space. We assume for now that this initial inner-product is the standard scalar product over vectors in $\mathbb{R}^n$. We later discuss a way to relax the assumption on the form of the inner-product. Equipped with an inner-product, we define the family of base kernels to be the possible outer-products $K_w = ww^T$ between a vector $w \in \mathbb{R}^n$ and itself.
Using this definition we get,
$$K_w(x_i, x_j) = (x_i \cdot w)(x_j \cdot w).$$
Therefore, the similarity between two instances $x_i$ and $x_j$ is high iff both $x_i$ and $x_j$ are similar (w.r.t the standard inner-product) to a third vector $w$. Analogously, if both $x_i$ and $x_j$ seem to be dissimilar to the vector $w$ then they are similar to each other. Despite the restrictive form of the inner-products, this family is still too rich for our setting and we further impose two restrictions on the inner products. First, we assume that $w$ is restricted to a linear combination of vectors from $\tilde{S}$. Second, since scaling of the base kernels is performed by the booster, we constrain the norm of $w$ to be 1. The resulting class of kernels is therefore, $\mathcal{C} = \{K_w = ww^T \mid w = \sum_{r=1}^{\tilde{m}} \beta_r \tilde{x}_r, \|w\| = 1\}$.
In the boosting process we need to choose a specific base-kernel $K_w$ from $\mathcal{C}$. We therefore need to devise a notion of how good a candidate base kernel is given a labelled set $S$ and a distribution function $D_t$. In this work we use the simplest version suggested by Collins et al. This version can be viewed as a linear approximation of the loss function. We define the score of a kernel $K_w$ w.r.t the current distribution $D_t$ to be,
$$\text{Score}(K_w) = \sum_{i,j} D_t(i, j)y_iy_j K_w(x_i, x_j). \tag{1}$$
The higher the value of the score is, the better $K_w$ fits the training data. Note that if $D_t(i, j) = 1/m^2$ (as is $D_0$) then Score($K_w$) is proportional to the alignment since $\|w\| = 1$. Under mild assumptions the score can also provide a lower bound on the loss function. To see this, let $c$ be the derivative of the loss function at margin zero, $c = |\text{Loss}'(0)|$. If all the training examples $x_i \in S$ lie in a ball of radius $\sqrt{c}$, we get that Loss($K_w(x_i, x_j), y_iy_j$) $\geq 1 - cK_w(x_i, x_j)y_iy_j \geq 0$, and therefore,
$$\sum_{i,j} D_t(i, j)\text{Loss}(K_w(x_i, x_j), y_iy_j) \geq 1 - c \sum_{i,j} D_t(i, j)K_w(x_i, x_j)y_iy_j.$$
Using the explicit form of $K_w$ in the Score function (Eq. (1)) we get, Score($K_w$) = $\sum_{i,j} D(i, j)y_iy_j(w \cdot x_i)(w \cdot x_j)$. Further developing the above equation using the constraint that $w = \sum_{r=1}^{\tilde{m}} \beta_r \tilde{x}_r$ we get,
$$\text{Score}(K_w) = \sum_{r,s} \beta_s \beta_r \sum_{i,j} D(i, j)y_iy_j (x_i \cdot \tilde{x}_r)(x_j \cdot \tilde{x}_s).$$
**Figure 2:** The base kernel learning algorithm.
To compute the base kernel score efficiently *without* an explicit enumeration we exploit the fact that if the initial distribution $D_0$ is symmetric ($D_0(i, j) = D_0(j, i)$) then all the distributions generated along the run of the boosting process, $D_t$, are also symmetric. We now define a matrix $A \in \mathbb{R}^{m \times \tilde{m}}$ where $A_{i,r} = x_i \cdot \tilde{x}_r$ and a symmetric matrix $B \in \mathbb{R}^{m \times m}$ with $B_{i,j} = D_t(i, j)y_i y_j$. Simple algebraic manipulations yield that the score function can be written as the following quadratic form, $\text{Score}(\beta) = \beta^T (A^T B A) \beta$, where $\beta$ is an $\tilde{m}$ dimensional column vector. Note that since $B$ is symmetric so is $A^T B A$. Finding a good base kernel is equivalent to finding a vector $\beta$ which *maximizes* this quadratic form under the norm equality constraint $\|w\|^2 = \|\sum_{r=1}^{\tilde{m}} \beta_r \tilde{x}_r\|^2 = \beta^T K \beta = 1$ where $K_{r,s} = \tilde{x}_r \cdot \tilde{x}_s$. Finding the maximum of $\text{Score}(\beta)$ subject to the norm constraint is a well known maximization problem known as the generalized eigenvector problem (cf. [10]). Applying simple algebraic manipulations it is easy to show that the matrix $A^T B A$ is positive semi-definite. Assuming that the matrix $K$ is invertible, the vector $\beta$ which maximizes the quadratic form is proportional to the eigenvector of $K^{-1} A^T B A$ associated with the largest generalized eigenvalue. Denoting this vector by $v$ we get that $w \propto \sum_{r=1}^{\tilde{m}} v_r \tilde{x}_r$. Adding the norm constraint we get that $w = (\sum_{r=1}^{\tilde{m}} v_r \tilde{x}_r)/\|\sum_{r=1}^{\tilde{m}} v_r \tilde{x}_r\|$. The skeleton of the algorithm for finding a base kernel is given in Fig. 2. To conclude the description of the kernel learning algorithm we describe how to extend the algorithm to be employed with general kernel functions.
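The maximization just described reduces to a single generalized eigenproblem, which a few lines of SciPy can solve. This is a sketch under the assumptions made in the text ($D_t$ symmetric, $K$ invertible); the function and variable names are ours:

```python
import numpy as np
from scipy.linalg import eigh

def learn_base_kernel(D, X, y, X_tilde):
    """Find w maximizing Score(beta) = beta^T (A^T B A) beta
    subject to beta^T K beta = 1, via a generalized eigenproblem."""
    A = X @ X_tilde.T                 # A[i, r] = x_i . x~_r
    B = D * np.outer(y, y)            # B[i, j] = D(i, j) y_i y_j (symmetric)
    K_tilde = X_tilde @ X_tilde.T     # K[r, s] = x~_r . x~_s
    M = A.T @ B @ A                   # symmetric positive semi-definite
    _, vecs = eigh(M, K_tilde)        # solves M beta = lambda K_tilde beta
    beta = vecs[:, -1]                # eigenvector of the largest eigenvalue
    w = X_tilde.T @ beta              # w = sum_r beta_r x~_r
    return w / np.linalg.norm(w)      # enforce ||w|| = 1
```

`scipy.linalg.eigh` handles the symmetric-definite generalized problem directly, which is equivalent to taking the top eigenvector of $K^{-1} A^T B A$ when $K$ is invertible.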
**Kernelizing the Kernel:** As described above, we assumed that the standard scalar-product constitutes the template for the class of base-kernels $\mathcal{C}$. However, since the procedure for choosing a base kernel depends on $S$ and $\tilde{S}$ only through the inner-products matrix $A$, we can replace the scalar-product itself with a general kernel operator $\kappa : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, where $\kappa(x_i, x_j) = \phi(x_i) \cdot \phi(x_j)$. Using a general kernel function $\kappa$ we cannot, however, compute the vector $w$ explicitly. We therefore need to show that the norm of $w$ and the evaluation of $K_w$ on any two examples can still be computed efficiently.
First note that given the vector $v$ we can compute the norm of $w$ as follows,
$$\|w\|^2 = \left( \sum_r v_r \tilde{x}_r \right)^T \left( \sum_s v_s \tilde{x}_s \right) = \sum_{r,s} v_r v_s \kappa(\tilde{x}_r, \tilde{x}_s).$$
Next, given two vectors $x_i$ and $x_j$ the value of their inner-product is,
$$K_w(x_i, x_j) = \sum_{r,s} v_r v_s \kappa(x_i, \tilde{x}_r) \kappa(x_j, \tilde{x}_s).$$
Therefore, although we cannot compute the vector $w$ explicitly we can still compute its norm and evaluate any of the kernels from the class $\mathcal{C}$.
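As a concrete check of the two identities above, the sketch below evaluates $\|w\|^2$ and $K_w$ using only calls to a base kernel `kappa` (function and argument names are ours, not the paper's):

```python
def w_norm_sq(kappa, X_tilde, v):
    # ||w||^2 = sum_{r,s} v_r v_s kappa(x~_r, x~_s)
    m = len(v)
    return sum(v[r] * v[s] * kappa(X_tilde[r], X_tilde[s])
               for r in range(m) for s in range(m))

def K_w(kappa, X_tilde, v, x1, x2):
    # K_w(x1, x2) = sum_{r,s} v_r v_s kappa(x1, x~_r) kappa(x2, x~_s),
    # which factorizes into a product of two single sums.
    a = sum(v[r] * kappa(x1, X_tilde[r]) for r in range(len(v)))
    b = sum(v[s] * kappa(x2, X_tilde[s]) for s in range(len(v)))
    return a * b
```

With the linear kernel $\kappa(a, b) = a \cdot b$ these reduce to the explicit formulas $\|w\|^2 = w^T w$ and $K_w(x_1, x_2) = (w \cdot x_1)(w \cdot x_2)$, which makes the double-sum identities easy to verify numerically.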
### 4 Experiments
**Synthetic data:** We generated binary-labelled data using $\mathbb{R}^{100}$ as the input space. The labels, in $\{-1, +1\}$, were picked uniformly at random. Let $y$ designate the label of a particular example. Then, the first two components of each instance were drawn from a two-dimensional normal distribution, $\mathcal{N}(\mu, \Delta \Sigma \Delta^{-1})$, with the following parameters,
$$\mu = y \begin{pmatrix} 0.03 \\ 0.03 \end{pmatrix}, \quad \Delta = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \quad \Sigma = \begin{pmatrix} 0.1 & 0 \\ 0 & 0.01 \end{pmatrix}.$$
That is, the label of each example determined the mean of the distribution from which its first two components were generated. The remaining 98 components were generated independently from a normal distribution with zero mean and a standard deviation of 0.05. We generated 100 training and test sets of size 300 and 200 respectively. We used the standard dot-product as the initial kernel operator.
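The generation procedure above can be reproduced with the following sketch (the names and the interpretation of $\Sigma$ as a covariance matrix are our assumptions; note that since $\Delta$ is orthogonal, $\Delta^{-1} = \Delta^T$):

```python
import numpy as np

def sample_synthetic(n, rng):
    """Draw n labelled examples: the label y is uniform over {-1, +1},
    the first two coordinates follow N(y*mu, Delta Sigma Delta^T),
    and the remaining 98 follow N(0, 0.05^2) independently."""
    mu = np.array([0.03, 0.03])
    Delta = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2.0)  # 45-degree rotation
    Sigma = np.diag([0.1, 0.01])
    cov = Delta @ Sigma @ Delta.T
    y = rng.choice([-1.0, 1.0], size=n)
    informative = rng.multivariate_normal(np.zeros(2), cov, size=n) + y[:, None] * mu
    noise = rng.normal(0.0, 0.05, size=(n, 98))
    return np.hstack([informative, noise]), y
```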
In each experiment we first learned a linear classifier that separates the classes using the Perceptron [11] algorithm. We ran the algorithm for 10 epochs on the training set. After each epoch we evaluated the performance of the current classifier on the test set. We then used the boosting algorithm for kernels with the LogLoss for 30 rounds to build a kernel for each random training set. After learning the kernel we re-trained a classifier with the Perceptron algorithm and recorded the results. A summary of the online performance is given in Fig. 4. The plot on the left-hand side of the figure shows the instantaneous error (achieved during the run of the algorithm). Clearly, the Perceptron algorithm converges much faster with the learned kernel than with the original kernel. The middle plot shows the test error after each epoch. The plot on the right shows the test error on a noisy test set in which we added Gaussian noise of zero mean and a standard deviation of 0.03 to the first two features. In all plots, each bar indicates a 95% confidence interval. It is clear from the figure that the original kernel is much slower to converge than the learned kernel. Furthermore, though the kernel learning algorithm was not exposed to the test-set noise, the learned kernel better reflects the structure of the feature space, which makes it more robust to noise.
Fig. 3 further illustrates the benefits of using a boutique kernel. The first and third plots from the left correspond to results obtained using the original kernel and the second and fourth plots show results using the learned kernel. The left plots show the empirical distribution of the two informative components on the test data. For the learned kernel we took each input vector and projected it onto the two eigenvectors of the learned kernel operator matrix that correspond to the two largest eigenvalues. Note that the distribution after the projection is bimodal and well separated along the first eigen direction ($x$-axis) and shows rather little deviation along the second eigen direction ($y$-axis). This indicates that the kernel learning algorithm indeed found the most informative projection for separating the labelled data with large margin. It is worth noting that, in this particular setting, any algorithm which chooses a single feature at a time is prone to failure since both the first and second features are mandatory for correctly classifying the data.
The two plots on the right-hand side of Fig. 3 use a gray-level color-map to designate the value of the inner-product between each pair of instances, one from the training set ($y$-axis) and the other from the test set. The examples were ordered such that the first group consists of the positively labelled instances while the second group consists of the negatively labelled instances. Since most of the features are irrelevant the original inner-products are noisy and do not exhibit any structure. In contrast, the inner-products using the learned kernel yield a $2 \times 2$ block matrix indicating that the inner-products between instances sharing the same label obtain large positive values. Similarly, for instances of opposite
Figure 4: The online training error (left), test error (middle) on clean synthetic data using a standard kernel and a learned kernel. Right: the online test error for the two kernels on a noisy test set.
labels the inner products are large and negative. The form of the inner-products matrix of the learned kernel indicates that the learning problem itself becomes much easier. Indeed, the Perceptron algorithm with the standard kernel required around 94 training examples on average before converging to a hyperplane which perfectly separates the training data, while the Perceptron algorithm with the learned kernel required a single example to reach a perfect separation on all 100 random training sets.
**USPS dataset:** The USPS (US Postal Service) dataset is known as a challenging classification problem in which the training set and the test set were collected in a different manner. The USPS contains 7,291 training examples and 2,007 test examples. Each example is represented as a $16 \times 16$ matrix where each entry in the matrix is a pixel that can take values in $\{0, \ldots, 255\}$. Each example is associated with a label in $\{0, \ldots, 9\}$ which is the digit content of the image. Since the kernel learning algorithm is designed for binary problems, we broke the 10-class problem into 45 binary problems by comparing all pairs of classes. The interesting question of how to learn kernels for multiclass problems is beyond the scope of this short paper. We thus concentrate on the binary error results for the 45 binary problems described above. For the original kernel we chose an RBF kernel with $\sigma = 1$, which is the value employed in the experiments reported in [12]. We used the kernelized version of the kernel design algorithm to learn a different kernel operator for each of the binary problems. We then used a variant of the Perceptron [11] both with the original RBF kernel and with the learned kernels. One of the motivations for using the Perceptron is its simplicity, which can underscore differences between the kernels. We ran the kernel learning algorithm with LogLoss and ExpLoss, using both the training set and the test set as $S$. Thus, we obtained four different sets of kernels where each set consists of 45 kernels. By examining the training loss, we set the number of rounds of boosting to be 30 for the LogLoss and 50 for the ExpLoss when using the training set. When using the test set, the number of rounds of boosting was set to 100 for both losses. Since the algorithm exhibits a slower rate of convergence with the test data, we chose a higher value without attempting to optimize the actual value. The left plot of Fig. 5 is a scatter plot comparing the test error of each of the binary classifiers when trained with the original RBF kernel versus the performance achieved on the same binary problem with a learned kernel. The kernels were built using boosting with the LogLoss and $S$ was the training data. In almost all of the 45 binary classification problems, the learned kernels yielded lower error rates when combined with the Perceptron algorithm. The right plot of Fig. 5 compares two learned kernels: the first was built using the training instances as the templates constituting $S$ while the second used the test instances. Although the difference between the two versions is not as significant as the difference in the left plot, we still achieve an overall improvement in about 25% of the binary problems by using the test instances.
Figure 5: Left: a scatter plot comparing the error rate of 45 binary classifiers trained using an RBF kernel ($x$-axis) and a learned kernel built from training instances. Right: a similar scatter plot comparing a learned kernel constructed from training instances ($x$-axis) and one constructed from test instances.
### 5 Discussion
In this paper we showed how to use the boosting framework to design kernels. Our approach is especially appealing in transductive learning tasks where the test data distribution differs from the distribution of the training data. For example, in speech recognition tasks the training data is often clean and well recorded while the test data often passes through a noisy channel that distorts the signal. An interesting and challenging question that stems from this research is how to extend the framework to accommodate more complex decision tasks such as multiclass and regression problems. Finally, we would like to note that alternative approaches to the kernel design problem have been devised in parallel and independently. See [13, 14] for further details.
**Acknowledgements:** Special thanks to Cyril Goutte and to John Shawe-Taylor for pointing out the connection to the generalized eigenvector problem. Thanks also to the anonymous reviewers for constructive comments.
### References
[1] V. N. Vapnik. *Statistical Learning Theory*. Wiley, 1998.
[2] N. Cristianini and J. Shawe-Taylor. *An Introduction to Support Vector Machines*. Cambridge University Press, 2000.
[3] Huma Lodhi, John Shawe-Taylor, Nello Cristianini, and Christopher J. C. H. Watkins. Text classification using string kernels. *Journal of Machine Learning Research*, 2:419–444, 2002.
[4] C. Leslie, E. Eskin, and W. Stafford Noble. The spectrum kernel: A string kernel for svm protein classification. In *Proceedings of the Pacific Symposium on Biocomputing*, 2002.
[5] Nello Cristianini, Andre Elisseeff, John Shawe-Taylor, and Jaz Kandla. On kernel target alignment. In *Advances in Neural Information Processing Systems 14*, 2001.
[6] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the kernel matrix with semi-definite programming. In *Proc. of the 19th Intl. Conf. on Machine Learning*, 2002.
[7] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: a statistical view of boosting. *Annals of Statistics*, 28(2):337–374, April 2000.
[8] Michael Collins, Robert E. Schapire, and Yoram Singer. Logistic regression, adaboost and bregman distances. *Machine Learning*, 47(2/3):253–285, 2002.
[9] Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean. Functional gradient techniques for combining hypotheses. In *Advances in Large Margin Classifiers*. MIT Press, 1999.
[10] Roger A. Horn and Charles R. Johnson. *Matrix Analysis*. Cambridge University Press, 1985.
[11] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. *Psychological Review*, 65:386–407, 1958.
[12] B. Schölkopf, S. Mika, C.J.C. Burges, P. Knirsch, K. Müller, G. Rätsch, and A.J. Smola. Input space vs. feature space in kernel-based methods. *IEEE Trans. on NN*, 10(5):1000–1017, 1999.
[13] O. Bousquet and D.J.L. Herrmann. On the complexity of learning the kernel matrix. NIPS, 2002.
[14] C.S. Ong, A.J. Smola, and R.C. Williamson. Superkernels. NIPS, 2002.
A multiquark description of the $D_{sJ}(2860)$ and $D_{sJ}(2700)$
J. Vijande$^1$, A. Valcarce$^2$, F. Fernández$^3$.
$^1$Dpto. de Física Atómica, Molecular y Nuclear,
Universidad de Valencia - CSIC, E-46100 Burjassot, Valencia, Spain.
$^2$Departamento de Física Fundamental,
Universidad de Salamanca, E-37008 Salamanca, Spain.
$^3$Grupo de Física Nuclear and IUFFyM,
Universidad de Salamanca, E-37008 Salamanca, Spain
Abstract
Within a theoretical framework that accounts for all open-charm mesons, including the $D_0^*(2308)$, the $D_{sJ}^*(2317)$ and the $D_{sJ}(2460)$, we analyze the structure and explore possible quantum number assignments for the $D_{sJ}(2860)$ and the $D_{sJ}(2700)$ mesons reported by the BABAR and Belle Collaborations. The open-charm sector is properly described if considered as a combination of conventional quark-antiquark states and four–quark components. All negative parity and $2^+$ states can be understood in terms of $q\bar{q}$ components alone; however, the description of the $0^+$ and $1^+$ mesons is improved whenever the mixing between two– and four–quark configurations is included. We analyze all possible quantum number assignments for the $D_{sJ}(2860)$ in terms of both $c\bar{s}$ and $cn\bar{s}\bar{n}$ configurations. We discuss the role played by the electromagnetic and strong decay widths as basic tools to distinguish among them. The broad structure reported by BABAR near 2.7 GeV is also analyzed.
Keywords: Charm mesons, Charm-strange mesons, quark models.
Pacs: 14.40.Lb, 14.40.Ev, 12.39.Pn.
The open-charm sector does not cease to amaze both theorists and experimentalists with new states that defy our understanding of the heavy-meson spectra. Two new mesons have recently joined the open-charm zoo. The BABAR Collaboration reported the observation of a new $D_s$ state, the $D_{sJ}(2860)$, with a mass of $2856.6 \pm 1.5 \pm 5.0$ MeV and a width of $48 \pm 7 \pm 10$ MeV in the analysis of the $DK$ spectra. No structures seem to appear in the $D^*K$ invariant mass distribution in the same range of masses. This state was reported together with a broad enhancement near 2.7 GeV with a tentative mass of $2688 \pm 4 \pm 2$ MeV and a width of $112 \pm 7 \pm 36$ MeV [1]. Since the only decay channels observed by BABAR correspond to two pseudoscalar mesons, the assignment of natural parity quantum numbers, $J^P = 0^+, 1^-, 2^+, 3^-, \ldots$, is strongly favored. The second state, the $D_{sJ}(2700)$, was observed by the Belle Collaboration in the decay $B^+ \to \overline{D}^0 D^0 K^+$ with a mass of $2708 \pm 9^{+11}_{-10}$ MeV, a width of $108 \pm 23^{+30}_{-31}$ MeV, and quantum numbers $J^P = 1^-$ [2]. The $D_{sJ}(2860)$ is not observed in the Belle data.
The $D_{sJ}(2860)$ and $D_{sJ}(2700)$ mesons are new members of a long list of charm resonances reported during the last few years. In 2003 BABAR reported the observation of a charm-strange state, the $D^*_{sJ}(2317)$, with a mass of $2316.8 \pm 0.4 \pm 3$ MeV and a width of less than 4.6 MeV [3]. This state was soon after confirmed by the CLEO [4] and Belle [5] Collaborations. Besides, BABAR also pointed out the existence of another charm-strange meson, the $D_{sJ}(2460)$ [3]. This resonance was measured by CLEO [4] and confirmed by Belle [5] with a mass of $2457.2 \pm 1.6 \pm 1.3$ MeV and a width of less than 5.5 MeV. The Belle results are consistent with the spin-parity assignments of $J^P = 0^+$ for the $D^*_{sJ}(2317)$ and $J^P = 1^+$ for the $D_{sJ}(2460)$. Thus, these two states are definitively well established and confirmed independently by different experiments. In the nonstrange sector Belle reported the observation of a nonstrange broad scalar resonance named $D^*_0$ with a mass of $2308 \pm 17 \pm 15 \pm 28$ MeV and a width of $276 \pm 21 \pm 18 \pm 60$ MeV [6]. A state with similar properties has been suggested by the FOCUS Collaboration [7] during the measurement of excited charm mesons $D^*_S$. The SELEX Collaboration at Fermilab [8] has reported a state with a mass of $2632.5 \pm 1.7$ MeV and a width smaller than 17 MeV. However, up to now no other experiment has been able to confirm the existence of this resonance [1, 9].
The positive parity open-charm mesons present unexpected properties, quite different from those predicted by quark potential models if a pure $c\bar{q}$ configuration is considered. If they corresponded to standard $P$-wave mesons made of a charm quark and a light antiquark, their masses would be larger: around 2.48 GeV for the $D^*_{sJ}(2317)$, 2.55 GeV for the $D_{sJ}(2460)$, and 2.46 GeV for the $D^*_0(2308)$. They would therefore lie above the $DK$, $D^*K$, and $D\pi$ thresholds respectively, and would be broad resonances. However, as stated above, the $D^*_{sJ}(2317)$ and $D_{sJ}(2460)$ are very narrow. In the case of the $D^*_0(2308)$ the large width observed would be expected but not its low mass. Although there are several theoretical interpretations for the masses and widths of some of the positive parity states $D^*_0(2308)$, $D^*_{sJ}(2317)$, and $D_{sJ}(2460)$ [10], in Ref. [11] it was shown that the difficulties in identifying the three of them with conventional $c\bar{q}$ mesons are rather similar to those appearing in the light-scalar meson sector and may be indicating that other configurations, as for example four-quark components, may be playing a role. $q\bar{q}$ states are more easily identified with physical hadrons when virtual quark loops are not important. This is the case for the pseudoscalar and vector mesons, mainly due to the $P$-wave nature of this hadronic dressing. On the contrary, in the scalar sector it is the $q\bar{q}$ pair that is in a $P$-wave state, whereas the quark loops may be in an $S$-wave. In this case the intermediate hadronic states that are created may play a crucial role in the composition of the resonance; in other words, unquenching may be
important. The vicinity of these components to the lightest $q\bar{q}$ state implies that they have to be considered. This has been shown to be a possible interpretation of the low-lying light-scalar mesons, where the coupling of the scalar $q\bar{q}$ nonet to the lightest $qq\bar{q}\bar{q}$ configurations allows for an almost one-to-one correspondence between theoretical states and experiment [12]. The possible role played by non–$q\bar{q}$ components in the description of the $D_{sJ}(2860)$ was illustrated in Ref. [13]. In this work it was proposed that this state could be understood within a unitarized meson model as a quasi-bound $c\bar{s}$ state coupled to the nearby $S$-wave $DK$ threshold, therefore being the first radial excitation of the $D^*_{sJ}(2317)$.
In non-relativistic quark models gluon degrees of freedom are frozen and therefore the wave function of a zero baryon number (B=0) hadron may be written as
$$|B = 0\rangle = \Omega_1 |q\bar{q}\rangle + \Omega_2 |qq\bar{q}\bar{q}\rangle + \cdots \qquad (1)$$
where $q$ stands for quark degrees of freedom and the coefficients $\Omega_i$ take into account the mixing of two– and four–quark states. $|B = 0\rangle$ systems could then be described in terms of a hamiltonian
$$H = H_0 + H_1, \quad \text{with} \quad H_0 = \begin{pmatrix} H_{q\bar{q}} & 0 \\ 0 & H_{qq\bar{q}\bar{q}} \end{pmatrix} \quad \text{and} \quad H_1 = \begin{pmatrix} 0 & V_{q\bar{q} \rightarrow qq\bar{q}\bar{q}} \\ V_{q\bar{q} \rightarrow qq\bar{q}\bar{q}} & 0 \end{pmatrix}, \qquad (2)$$
where $H_0$ is a constituent quark model hamiltonian described below and $H_1$, which takes into account the mixing between $q\bar{q}$ and $qq\bar{q}\bar{q}$ configurations, includes the annihilation operator of a quark-antiquark pair into the vacuum. This operator could be described using the $^3P_0$ model; however, since this model depends on the vertex parameter, we prefer in a first approximation to parametrize this coefficient by looking at the quark pair that is annihilated and not at the spectator quarks that will form the final $q\bar{q}$ state. Therefore we have taken $V_{q\bar{q} \rightarrow qq\bar{q}\bar{q}} = \gamma$. If this coupling is weak enough one can solve independently the eigenproblem for the hamiltonians $H_{q\bar{q}}$ and $H_{qq\bar{q}\bar{q}}$, treating $H_1$ perturbatively. The two–body problem has been solved exactly by means of the Numerov algorithm [14]. The four–body problem has been solved by means of a variational method using the most general combination of Gaussians as trial wave functions [15]. In particular, the so-called mixed terms (mixing the various Jacobi coordinates), which are known to have a great influence in the light-quark case, have been considered.
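To make the mixing mechanism of Eq. (2) concrete, the toy sketch below diagonalizes a two-channel truncation: one $c\bar{s}$ level coupled to one four-quark level through the constant $\gamma$. The input masses (the $1P$ $0^+$ $c\bar{s}$ level at 2489 MeV and the $I=0$ $0^+$ four-quark level at 2731 MeV, with $\gamma = 240$ MeV) are taken from Tables I and II for illustration only; the full calculation of Ref. [11] mixes more channels, so this two-level toy only illustrates why the physical $0^+$ state is pushed well below its bare $c\bar{s}$ mass:

```python
import numpy as np

def mix(m_2q, m_4q, gamma):
    """Diagonalize the hamiltonian of Eq. (2) truncated to a single
    q-qbar channel and a single four-quark channel coupled by gamma.
    Returns the physical masses (ascending) and the configuration
    probabilities of the lighter eigenstate."""
    H = np.array([[m_2q, gamma],
                  [gamma, m_4q]])
    vals, vecs = np.linalg.eigh(H)   # eigenvalues in ascending order
    probs = vecs[:, 0] ** 2          # composition of the lighter state
    return vals, probs

# Illustrative inputs: 1P 0+ c-sbar at 2489 MeV, I=0 0+ four-quark at 2731 MeV.
masses, probs = mix(2489.0, 2731.0, 240.0)
```

The lighter eigenvalue comes out near 2341 MeV, well below the bare 2489 MeV: this downward shift of the scalar state is the qualitative effect invoked in the text.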
It is our purpose in this work to use a standard constituent quark model that provides a good description of the meson and baryon spectra, and also of the baryon-baryon phenomenology, for the description of the open-charm mesons. For this purpose, we will address the study of hadrons with zero baryon number described as clusters of quarks confined by a realistic interaction. The model is based on the assumption that the constituent quark mass appears because of the spontaneous breaking of the original $SU(3)_L \otimes SU(3)_R$ chiral symmetry at some momentum scale. As a consequence of such a symmetry breaking, quarks acquire a constituent mass and Goldstone bosons are exchanged between the quarks. Beyond the chiral symmetry breaking scale, one expects the dynamics to be governed by QCD perturbative effects, which are taken into account through the one-gluon-exchange potential. Finally, any model imitating QCD should incorporate another nonperturbative effect, confinement. It remains an unsolved problem to derive confinement from QCD in an analytic manner. The only indication we have on the nature of confinement is through lattice studies, showing that $q\bar{q}$ systems are well reproduced at short distances by a linear potential. Such
TABLE I: $c\bar{s}$ masses (QM), in MeV, below 3 GeV. Experimental data (Exp.) are taken from Ref. [17].
| $nL$ $J^P$ | State | QM | Exp. |
|------------|---------|--------|---------------|
| $1S \ 0^-$ | $D_s$ | 1981 | 1968.49±0.34 |
| $2S \ 0^-$ | — | 2699 | — |
| $1S \ 1^-$ | $D_s^*$ | 2112 | 2112.3±0.5 |
| $2S \ 1^-$ | — | 2764 | — |
| $1P \ 0^+$ | $D_{sJ}^*(2317)$ | 2489 | 2317.8±0.6 |
| $2P \ 0^+$ | — | 2966 | — |
| $1P \ 1^+$ | $D_{sJ}(2460)$ | 2578 | 2459.6±0.6 |
| $1P \ 1^+$ | $D_{s1}(2536)$ | 2543 | 2535.35 ± 0.34 ± 0.5 |
| $1P \ 2^+$ | $D_{s2}(2573)$ | 2582 | 2572.6±0.9 |
| $1D \ 1^-$ | — | 2873 | — |
| $1D \ 2^-$ | — | 2883 | — |
| $1D \ 3^-$ | — | 2882 | — |
potential can be physically interpreted in a picture in which the quark and the antiquark are linked by a one-dimensional color flux tube. The spontaneous creation of light-quark pairs may give rise to a breakup of the color flux tube, which has been proposed to translate into a screened potential [16], in such a way that the potential saturates at some interquark distance. Explicit expressions of the interacting $q\bar{q}$ and $qq$ potentials and a more detailed description of the model can be found in Refs. [12, 14], where the various parameters are given.
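The saturation just described is commonly parametrized by replacing the linear confining term with a screened form; the expression below is a generic sketch of such a parametrization, not a formula quoted from the text (the string tension $\sigma$ and screening parameter $\mu_c$ are illustrative symbols):

```latex
V_{\mathrm{conf}}(r) \;=\; \frac{\sigma}{\mu_c}\left(1 - e^{-\mu_c r}\right)
\;\longrightarrow\;
\begin{cases}
\sigma\, r, & \mu_c r \ll 1 \quad \text{(linear at short distances)}\\[2pt]
\sigma/\mu_c, & \mu_c r \gg 1 \quad \text{(saturation beyond the screening length)}
\end{cases}
```

Expanding the exponential for small $\mu_c r$ recovers the linear potential, while for large separations the potential flattens, mimicking the breakup of the flux tube.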
A thorough study of the full meson spectra has been presented in Ref. [14], with special attention in Ref. [11] to the open-charm sector. Using this model we have calculated the $c\bar{s}$ masses up to 3 GeV listed in Table I. It can be seen that the open-charm states are easily identified with standard $c\bar{q}$ mesons except for the cases of the $D_{sJ}^*(2317)$ and the $D_{sJ}(2460)$. This behavior is shared by almost all quark potential model calculations [10]. Although the situation from lattice QCD is far from being definitively established, similar difficulties have been observed in both quenched and unquenched approaches [18]. The same conclusion may also be drawn from heavy quark symmetry arguments. Within this approach the scalar $c\bar{s}$ state belongs to the $j = 1/2$ doublet, but since the $j = 3/2$ doublet is identified with the narrow $D_{s2}(2573)$ and $D_{s1}(2536)$ (with total widths of $15^{+5}_{-4}$ MeV and < 2.3 MeV, respectively) the scalar state is expected to have a much larger width than the one measured for the $D_{sJ}^*(2317)$ [19]. Thus, one possibility for these states beyond the naive $q\bar{q}$ assignment is to interpret them as four–quark resonances within the quark model. The results obtained for the $cn\bar{s}\bar{n}$ and $cn\bar{n}\bar{n}$ configurations in Ref. [11] using the constituent quark model outlined above are shown in Table II. It can be seen that the $I = 1$ and $I = 0$ states obtained are far above the corresponding strong decay thresholds and therefore should be broad, which rules out a pure four–quark interpretation of the positive–parity open-charm mesons.
As discussed above, for $P-$wave mesons the hadronic dressing is in a $S-$wave, thus physical states may correspond to a mixing of two– and four–body configurations, Eq. (1).
TABLE II: $cn\bar{s}\bar{n}$ and $cn\bar{n}\bar{n}$ masses, in MeV.

| $J^P$ | $cn\bar{s}\bar{n}$, $I = 0$ | $cn\bar{s}\bar{n}$, $I = 1$ | $cn\bar{n}\bar{n}$, $I = 1/2$ |
|-------|------|------|------|
| $0^+$ | 2731 | 2699 | 2505 |
| $1^+$ | 2841 | 2793 | — |
TABLE III: Probabilities (P), in %, of the wave function components and masses (QM), in MeV, of the open-charm mesons with $I = 0$ (left) and $I = 1/2$ (right) once the mixing between $q\bar{q}$ and $qq\bar{q}\bar{q}$ configurations is considered. Experimental data (Exp.) are taken from Ref. [17] for $I = 0$ and from Ref. [6] for $I = 1/2$.
| $I = 0$, $J^P = 0^+$ | | |
|---|---|---|
| QM | 2339 | 2847 |
| Exp. | 2317.8 ± 0.6 | – |
| P($cn\bar{s}\bar{n}$) | 28 | 55 |
| P($c\bar{s}\,1^3P$) | 71 | 25 |
| P($c\bar{s}\,2^3P$) | ~ 1 | 20 |

| $I = 0$, $J^P = 1^+$ | | |
|---|---|---|
| QM | 2421 | 2555 |
| Exp. | 2459.6 ± 0.6 | 2535.35 ± 0.34 ± 0.5 |
| P($cn\bar{s}\bar{n}$) | 25 | ~ 1 |
| P($c\bar{s}\,1^1P$) | 74 | ~ 1 |
| P($c\bar{s}\,1^3P$) | ~ 1 | 98 |

| $I = 1/2$, $J^P = 0^+$ | | |
|---|---|---|
| QM | 2241 | 2713 |
| Exp. | 2308 ± 17 ± 32 | – |
| P($cn\bar{n}\bar{n}$) | 46 | 49 |
| P($c\bar{n}\,1P$) | 53 | 46 |
| P($c\bar{n}\,2P$) | ~ 1 | 5 |
In the isoscalar sector, the $cn\bar{s}\bar{n}$ and $c\bar{s}$ states get mixed, as happens with $cn\bar{n}\bar{n}$ and $c\bar{n}$ in the $I = 1/2$ case. The parameter $\gamma$ was fixed in Ref. [11] to reproduce the mass of the $D_{sJ}(2317)$ meson, giving $\gamma = 240$ MeV. The results obtained are shown in Table III. From these results one can appreciate that the description of the positive parity open-charm mesons improves when four–quark components are considered.
With respect to the new resonance reported by BABAR, it can be seen from Tables I and III that among all possibilities only three states are close to its experimental mass, 2856.6 ± 1.5 ± 5.0 MeV. They correspond to the $0^+$ $c\bar{s} + cn\bar{s}\bar{n}$ excitation (45% and 55% probability, respectively) and the $1^-$ and $3^-$ $c\bar{s}$ $D$-waves, their energies being 2847, 2873, and 2882 MeV. All other possibilities, like for instance the $2S$ $1^-$ or the $2P$ $2^+$, are more than 100 MeV above or below the experimental energy. The $2S$ $1^-$ excitation obtained within our model, with an energy of 2764 MeV, is a good candidate to be identified with the $D_{sJ}(2700)$ reported by Belle. Concerning the broad bump reported by BABAR at 2.7 GeV, if different from the $D_{sJ}(2700)$, two states appear as possible candidates: the $2S$ $0^-$ radial $c\bar{s}$ excitation and the isovector $0^+$ $cn\bar{s}\bar{n}$ ground state, both of them with a mass of 2699 MeV.
From the analysis of the masses alone it is not possible to distinguish among all candidates for the new resonances. However, the structure of the $D_{sJ}^*$ mesons can be scrutinized, beyond their masses, through the study of their decay widths. The strong decay of a hypothetical $J^P = 0^+$ $D_{sJ}(2860)$, either $c\bar{s}$ or $cn\bar{s}\bar{n} + c\bar{s}$, into $D^* K$ or $DK^*$ is forbidden by quantum number conservation. This is consistent with the absence of $DK^*$ or $D^* K$
signals in the experimental data [1]. The ratio $\Gamma[D_{sJ}(2860) \to D^*K]/\Gamma[D_{sJ}(2860) \to DK]$ has been studied for several different quantum numbers using the $^3P_0$ model [20] and arguments based on heavy quark expansions [21]. Some of their results are quoted in Table IV. A possible assignment $L = 2$, $J^P = 1^-$ would result in a too large $\Gamma[D_{sJ}(2860) \to DK]$ decay width, although very different values are obtained depending on the approach considered, while the $L = 2$, $J^P = 3^-$ ratio $\Gamma[D_{sJ}(2860) \to D^*K]/\Gamma[D_{sJ}(2860) \to DK]$ seems to indicate a sizable $\Gamma[D_{sJ}(2860) \to D^*K]$ decay width that has not yet been observed. Concerning the electromagnetic decay widths of the $J^P = 0^+$ candidate, the formalism necessary to study the $\Gamma[c\bar{s} \to c\bar{s} + \gamma]$, $\Gamma[cn\bar{s}\bar{n} \to cn\bar{s}\bar{n} + \gamma]$, and $\Gamma[cn\bar{s}\bar{n} \to c\bar{s} + \gamma]$ processes has been described in detail in Ref. [11], where it was proposed that the ratio $R = \Gamma[D_{sJ}(2460) \to D_s^+\gamma]/\Gamma[D_{sJ}(2460) \to D_s^{*+}\gamma]$ could be an important tool to distinguish between a $q\bar{q}$ structure ($R \approx 1$) and a $qq\bar{q}\bar{q} + q\bar{q}$ one ($R \approx 100$) for the open-charm mesons. Following this formalism one obtains for the electromagnetic decay $\Gamma[D_{sJ}(2860)\,[0^+] \to D_s^*\gamma] = 13.67$ keV. At the same time, if a pure $1P$ $c\bar{s}$ structure is assumed for the $D_{sJ}(2317)$ then the $0^+$ $D_{sJ}(2860)$ should correspond to a $2P$ excitation. One can also evaluate this decay, obtaining a much smaller value, $\Gamma[D_{sJ}(2860)\,[0^+] \to D_s^*\gamma] = 1.8$ eV, due to the presence of a node in the $2P$ wave function. Therefore, the mixed scenario would produce a sizable value for the electromagnetic decay width $\Gamma[D_{sJ}(2860) \to D_s^*\gamma]$ only if a scalar state with a dominant four–quark component is present.
If the $D_{sJ}(2860)$ is definitively confirmed as a scalar meson, this will point to the existence of a non-strange partner with an energy of 2713 MeV and an important four–quark component (49%), this result being in line with the one reported in Ref. [13]. Furthermore, an isovector $1^+$ $cn\bar{s}\bar{n}$ state with a mass of 2793 MeV is also predicted.
The interpretation we have just presented for the new resonances measured by BABAR and Belle, within a formalism that includes the mixing of two– and four–quark states, has also been used to account for the other experimentally observed open-charm states and for the light-scalar mesons within the same constituent quark model [11, 12]. It is therefore the first time that a coherent analysis of all known states of the meson spectra in terms of two– and four–quark states is performed, which gives us confidence in the proposed mechanism. Nonetheless, one should not forget that in the literature there is a wide variety of interpretations for the open-charm mesons. Therefore, the final answer can only be obtained from precise experimental data that would allow one to discriminate between the predictions of different theoretical models [22].
As a summary, we have obtained a rather satisfactory description of the open-charm mesons in terms of two– and four–quark configurations, including the new states recently reported by BABAR and Belle. The mixing between these two components is responsible for
the unexpected low masses and widths of the $D_{sJ}(2317)$, $D_{sJ}(2460)$, and $D^*_0(2308)$ and also offers a possible interpretation of the $D_{sJ}(2860)$ as a scalar meson. The electromagnetic and strong decay widths give hints that would help in distinguishing the nature of these states. In particular, the study of the decays $\Gamma[D_{sJ}(2860) \to D^*_s \gamma]$ and $\Gamma[D_{sJ}(2860) \to D^* K]$ is ideally suited for this task. A clear signal for the electromagnetic decay mode together with the absence of the strong one would point to a scalar state with an involved structure in terms of two– and four–quark components. The $1^-$ state at 2708 MeV observed by Belle can be interpreted as a $c\bar{s}$ $2S$ excitation, whereas for the broad bump around 2.7 GeV reported by BABAR two candidates can be found, although more experimental data are needed before drawing any conclusion. We encourage experimentalists to confirm the results reported by BABAR and Belle and to measure the electromagnetic and strong decay widths of the open-charm positive parity states. Such a study would help not only to distinguish among the possible quantum numbers allowed for the new BABAR resonance, but also to clarify the exciting situation of the open-charm mesons and the role played by multiquark configurations in the meson spectra.
This work has been partially funded by Ministerio de Ciencia y Tecnología under Contract No. FPA2007-65748, and by Junta de Castilla y León under Contract No. SA016A17.
---
[1] BABAR Collaboration, B. Aubert et al., Phys. Rev. Lett. 97, 222001 (2006).
[2] Belle Collaboration, J. Brodzicka et al., Phys. Rev. Lett. 100, 092001 (2008).
[3] BABAR Collaboration, B. Aubert et al., Phys. Rev. Lett. 90, 242001 (2003).
[4] CLEO Collaboration, D. Besson et al., Phys. Rev. D 68, 032002 (2003).
[5] Belle Collaboration, Y. Mikami et al., Phys. Rev. Lett. 92, 012002 (2004).
[6] Belle Collaboration, K. Abe et al., Phys. Rev. D 69, 112002 (2004).
[7] FOCUS Collaboration, J.M. Link et al., Phys. Lett. B 586, 11 (2004).
[8] SELEX Collaboration, A.V. Evdokimov et al., Phys. Rev. Lett. 93, 242001 (2004).
[9] BABAR Collaboration, B. Aubert et al., hep-ex/0408087. Belle Collaboration, B. Yabsley, AIP Conf. Proc. 792, 875 (2005). FOCUS Collaboration, R. Kutschke, E831-doc-701-v2.
[10] E.S. Swanson, Phys. Rep. 429, 243 (2006) and references therein.
[11] J. Vijande, F. Fernández, and A. Valcarce, Phys. Rev. D 73, 034002 (2006).
[12] J. Vijande, A. Valcarce, F. Fernández, and B. Silvestre-Brac, Phys. Rev. D 72, 034025 (2005).
[13] E. van Beveren and G. Rupp, Phys. Rev. Lett. 97, 202001 (2006).
[14] J. Vijande, F. Fernández, and A. Valcarce, J. Phys. G 31, 481 (2005).
[15] Y. Suzuki and K. Varga, Lecture Notes in Physics M 54, 1 (1998); J. Vijande, F. Fernández, A. Valcarce, and B. Silvestre-Brac, Eur. Phys. J. A 19, 383 (2004).
[16] G.S. Bali, Phys. Rep. 343, 1 (2001).
[17] C. Amsler et al., Phys. Lett. B 667, 1 (2008).
[18] J. Hein et al., Phys. Rev. D 62, 074503 (2000); G.S. Bali, Phys. Rev. D 68, 071501(R) (2003); UKQCD Collaboration, P. Boyle, Nucl. Phys. B (Proc. Suppl.) 63, 314 (1998); et al., Nucl. Phys. B (Proc. Suppl.) 53, 398 (1997); A. Dougall et al., Phys. Lett. B 569, 41 (2003).
[19] N. Isgur and M.B. Wise, Phys. Lett. B 237, 527 (1990); ibid., Phys. Lett. B 232, 113 (1989).
[20] B. Zhang, X. Liu, W.-Z. Deng, and S.-L. Zhu, Eur. Phys. J. C 50, 617 (2007).
[21] P. Colangelo, F. De Fazio, R. Ferrandes, and S. Nicotri, Prog. Theor. Phys. Suppl. 168, 202 (2007); P. Colangelo, F. De Fazio, and S. Nicotri, Phys. Lett. B 642, 48 (2006).
[22] C. Amsler and N.A. Tornqvist, Phys. Rep. 389, 61 (2004) and references therein.
The Astounding Effectiveness of Dual Language Education for All
Virginia P. Collier and Wayne P. Thomas
George Mason University
Abstract
Our longitudinal research findings from one-way and two-way dual language enrichment models of schooling demonstrate the substantial power of this program for enhancing student outcomes and fully closing the achievement gap in students' second language (L2). Effect sizes for dual language are very large compared to those of other programs for English language learners (ELLs). Dual language schooling can also transform the experience of teachers, administrators, and parents into an inclusive and supportive school community for all. Our research findings of the past 18 years are summarized here, with a focus on ELLs' outcomes in one-way and two-way, 50:50 and 90:10, dual language models, including heritage language programs for students of bilingual and bicultural ancestry who are more proficient in English than in their heritage language.
Key Concepts
This is not just a research report, this is a wakeup call to the field of bilingual education, written for both researchers and practitioners. We use the word astounding in the title because we have been truly amazed at the elevated student outcomes resulting from participation in dual language programs. Each data set is like a mystery because you never know how it’s all going to turn out when you start organizing a school district’s data files for analyses. But, after almost two decades of program evaluation research that we have conducted in 23 large and small school districts from 15 different states, representing all regions of the U.S. in urban, suburban, and rural contexts, we continue to be astonished at the power of this school reform model.
The Pertinent Distinction: Enrichment vs. Remediation
Enrichment dual language schooling closes the academic achievement gap in L2 and in first language (L1) students initially below grade level, and for all categories of students participating in this program. This is the only program for English learners that fully closes the gap; in contrast, remedial models only partially close the gap. Once students leave a special remedial program and join the curricular mainstream, we find that, at best, they make one year’s
progress each school year (just as typical native English speakers do), thus maintaining but not further closing the gap. Often, the gap widens again as students move into the cognitive challenge of the secondary years where former ELLs begin to make less than one year’s progress per year. We classify all of the following as remedial programs: intensive English classes (such as those proposed in the English-only referenda in California, Arizona, and Massachusetts), English as a second language (ESL) pullout, ESL content/sheltered instruction (when taught as a program with no primary language support), structured English immersion, and transitional bilingual education. These remedial programs may provide ELLs with very important support for one to four years. But, we have found that even four years is not enough time to fully close the gap. Furthermore, if students are isolated from the curricular mainstream for many years, they are likely to lose ground to those in the instructional mainstream, who are constantly pushing ahead. To catch up to their peers, students below grade level must make more than one year’s progress every year to eventually close the gap.
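The arithmetic behind this requirement can be sketched in a few lines. The rates below are illustrative assumptions, not figures from our data: a group starting two grade-equivalent years behind that gains 1.25 years per school year, while grade-level peers gain 1.0, closes the gap at 0.25 years per year.

```python
def years_to_close_gap(initial_gap, student_rate, peer_rate=1.0):
    """School years needed to close an achievement gap, given annual
    progress rates measured in grade-equivalent years.
    Illustrative model only; the rates passed in are assumptions."""
    closure_per_year = student_rate - peer_rate
    if closure_per_year <= 0:
        return float("inf")  # gap is maintained, or widens
    return initial_gap / closure_per_year

# A 2-year gap, closed at 1.25 vs. 1.0 years of progress per year:
print(years_to_close_gap(2.0, 1.25))   # 8.0 school years
# One year's progress per year (the typical remedial outcome) never closes it:
print(years_to_close_gap(2.0, 1.0))    # inf
```

The second call is the point of the paragraph above: matching native speakers' rate of progress maintains the gap indefinitely; only a faster rate closes it.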
In contrast to remedial programs that offer "watered down" instruction in a "special" curriculum focused on one small step at a time, dual language enrichment models are the curricular mainstream taught through two languages. Teachers in these bilingual classes create cognitive challenge through thematic units of the core academic curriculum, focused on real-world problem solving, that stimulate students to make more than one year's progress every year, in both languages. With no translation and no repeated lessons in the other language, separation of the two languages is a key component of this model. Peer teaching, with teachers using cooperative learning strategies to capitalize on it, serves as an important stimulus for meeting the cognitive challenge. Both one-way and two-way enrichment bilingual programs have this power.
**Differences in One-way and Two-way Dual Language Education**
**One-way**
We define one-way programs as demographic contexts where only one language group is being schooled through their two languages. For example, along the U.S.-Mexican border, many school districts enroll students mainly of Hispanic-American heritage. Some students are proficient in English, having lost their heritage language. Others are very proficient in Spanish and just beginning to learn English. Whatever mix of English and Spanish proficiency is present among the student population, an enrichment dual language program brings these students together to teach each other the curriculum through their two heritage languages. Similar sociolinguistic situations are present along the U.S.-Canadian border for students of Franco-American heritage. Other examples of demographic contexts for one-way dual language programs can be found among American-Indian schools working on native language revitalization, as well as in urban linguistic enclaves where very few native English speakers enroll in inner city schools.
Implementers of one-way programs must make their curricular decisions to meet the needs of their student population, so the resulting program design can be quite different from that of a two-way program. But, the basic principles are the same—a minimum of six years of bilingual instruction (with eight years preferable for full gap closure in L2 when there are no English-speaking peers enrolled in the bilingual classes), separation of the two languages of
instruction, focus on the core academic curriculum rather than a watered-down version, high cognitive demand of grade-level lessons, and collaborative learning in engaging and challenging academic content across the curriculum.
**Two-way**
Two-way programs have the demographics to invite native-English-speaking students to join their bilingual and ELL peers in an integrated bilingual classroom. Two-way classes can and should include all students who wish to enroll, including those who have lost their heritage language and speak only English. These bilingual classes do not need to enroll exactly 50% of each linguistic group to be classified as two-way, but it helps the process of L2 acquisition to have an approximate balance of students of each language background. For our data analyses, we have chosen a ratio of 70:30 as the minimum balance required to have enough L2 peers in a class to stimulate the natural second language acquisition process.
In addition to enhanced second language acquisition, two-way bilingual classes resolve some of the persistent sociocultural concerns that have resulted from segregated transitional bilingual classes. Often, negative perceptions have developed with classmates assuming that those students assigned to the transitional bilingual classes were those with “problems,” resulting in social distance or discrimination and prejudice expressed toward linguistically and culturally diverse students enrolled in bilingual classes. Two-way bilingual classes taught by sensitive teachers can lead to a context where students from each language group learn to respect their fellow students as valued partners in the learning process with much knowledge to teach each other.
**Our Research Methodology**
For researchers to replicate our work, we have written two publications available on the Internet that define our research methodology. Because of the limitations of space in this short journal article, we refer readers to these two publications to study the details of our approaches to research design. The first publication (Thomas & Collier, 1997a) provides an overview of some major issues for researchers conducting school program effectiveness studies, including common misconceptions. Section III and Appendix A of this publication are especially pertinent. This first publication was written mainly for school policy makers and provides an overview of our findings to date from many program evaluations that we conducted with individual school districts in several regions of the U.S.
The second publication (Thomas & Collier, 2002) gives a more detailed picture of the complete process that we go through in designing each study with each school district. This includes an overview of the research design in Section II, details of data collection and analyses for each individual study in the findings sections for each district, and appendices that provide sample data structure and a data collection instrument that we developed. We have also made considerable efforts to disseminate our five-stage research design through papers presented at several annual meetings of the American Educational Research Association (AERA), in sessions sponsored by Division H (School Evaluation and Program Development) that were designed to create an interactive dialogue with other researchers focused on evaluation of school programs and interested in replicating our research methodology.
Overall, the methodology of the field of program evaluation provides us with the foundation for our choices in research design. For us, appropriate program evaluation methods
provide all of the rigor of traditional quantitative research, plus the necessary qualitative sensitivity to program nuances, implementation, and evolution that traditional research typically lacks. Our large sample sizes also allow us to better assess true program effect sizes than small sample, focused studies. Since 1985, we have been analyzing many long-term databases collected by school districts in all regions of the U.S. To date, we have collected the largest set of quantitative databases gathered for research in the field of bilingual/ESL education, with over 2 million student records analyzed (one student record includes all the school district records for one student collected during one school year). Quantitative data collected from each school system includes data stored on magnetic media in machine-readable files from their registration centers, student information system databases, and testing databases, as well as data from other specialized offices that work with linguistically and culturally diverse students. In each school district site we also collect qualitative data, including source documents across many years; detailed interviews with central office administrators, school board members, principals, teachers, and community members; and, school visits and classroom observations.
The goal of our research is to analyze the great variety of education services provided for linguistically and culturally diverse students in U.S. public schools and the resulting academic achievement of these students as measured by all the tests given to them by the school district in both L1 (when available) and in English (which is for most of these students their L2). Our participating school districts work with us as collaborative research partners, and the results of the data analyses inform and influence their practices. Overall, this research provides guidance for school districts to make policy decisions that are data-driven regarding the design, implementation, evaluation, and reform of the education of linguistically and culturally diverse students. This article is focused on our research findings from many of these program evaluations, illustrating the patterns of the data findings in one-way and two-way dual language programs. We focus on these two enrichment program types in this article because we have found that they result in the highest student outcomes in the long term when following students throughout their elementary school years and continuing to follow them throughout their secondary years when possible. When we report on student outcomes, our longitudinal research is focused on gap closure rather than primarily on pre-post gains without a context. The following section explains the difference between our type of analyses and the requirements of the current federal legislation.
**Gap Closure Research and the No Child Left Behind Act of 2001**
In the current environment of high-stakes testing with consequences for schools that fail to meet the expressed goals, gap-closure research can help to clarify how students are doing as a measure of school program effectiveness. Two aspects of the 2001 federal legislation connect closely to the research that we have conducted for the past 18 years. First, we applaud the focus on achievement gap closure rather than group gains as the measure of success. Second, we have, for many years, encouraged the school districts with whom we work to collect data that can be disaggregated into meaningful student groups with adequate yearly progress goals for all groups. To illustrate these two concepts, if achievement gap closure for ELLs were taken seriously as a more appropriate measure of program effects, the English-only press releases stating that ELLs in California have made great gains would not be published in the popular media since they do not provide the contextual information of gains made by native-English speakers during the same period. The real picture is that ELL gains have been insufficient to lessen the gap. In fact, gap closure analyses of ELLs in California receiving English-only
instruction reveal that when their gains are compared to native-English speakers’ gains, the gap has actually remained the same or widened since Proposition 227 was approved by voters in 1998 (Parrish, Linquanti, Merickel, Quick, Laird, & Esra, 2002; Thompson, DiCerbo, Mahoney, & MacSwan, 2002). Later in this article, we will illustrate this finding with our own data analyses comparing ELLs’ achievement in Houston, Texas, with ELLs’ achievement in California.
So, the federal legislation appropriately focuses on two meaningful concepts—gap closure and disaggregation. But the focus in the current legislation on cross-sectional, rather than longitudinal, analyses of student outcomes is misguided and inappropriate. We firmly believe that the best way to conduct methodologically appropriate research on gap closure, with disaggregated groups, is to conduct longitudinal research on the same students across time, rather than cross-sectional high-stakes comparisons of schools that compare one group of students in a given grade to a completely different group in the same grade the following year. Following the same students over a long period of time (longitudinal research) leads to clear findings on gap closure and program effectiveness. This is especially true for high-stakes decisions (e.g., school sanctions) that may be made inaccurately when two different groups of students are compared over time.
Another serious problem with the current federal legislation is the assumption (based not on research, but on political expediency) that ELLs should be on grade level in English in three years. In every study we have conducted, we have consistently found that it takes six to eight years for ELLs to reach grade level in L2, and only one-way and two-way enrichment dual language programs have closed the gap in this length of time. No other program has closed more than half of the achievement gap in the long term. This means that while ELLs are working on closing the gap by making more than one year's progress in their L2 with every year of school, they should be tested on grade level in their L1. Requiring grade-level curricular testing in students' L1 provides an important measure that students are keeping up with cognitively challenging grade-level work while closing the gap in English. Once ELLs have reached full parity with native-English speakers, a curricular test in English should yield just as valid and reliable a score as it does for native-English speakers. But while ELLs are still closing the gap, a test score in English will underestimate their true achievement.
For the U.S., L1 testing in languages other than Spanish is probably not feasible, but excellent tests are available in Spanish, the language of 75% of the language-minority students in the U.S. Since Spanish speakers are the majority among ELLs and one of the groups least well served by U.S. schools (as measured by high school completion), quality teaching and testing in Spanish can be a crucial step towards closing the achievement gap in English. The results of data analyses of student outcomes in dual language programs demonstrate this very powerfully.
**Student Outcomes**
**Houston Independent School District, Texas**
Our largest school district research site is Houston Independent School District, with over 210,000 students, 54% of whom are Hispanic, 33% African-American, 10% Euro-American, and 75% of the total student enrollment on free or reduced lunch. More details about this urban school district are provided in our national research report, in the second findings chapter (Thomas & Collier, 2002). In 1996, the Houston ISD Multilingual Programs Department chose to gradually implement the 90:10 dual language program that they had developed as a
model for all Houston ISD schools that were teaching the curriculum through Spanish and English. Since all elementary schools in Houston are required by Texas state law to offer a bilingual program for ELLs whose home language is spoken by 20 or more students in a single grade, transitional classes with certified bilingual teachers were already in place for Spanish speakers across all schools. With the initial success of two-way bilingual classes implemented in two elementary schools, then Superintendent Rod Paige and the Houston school board approved the expansion of one-way and two-way dual language schools throughout the school district.
As of 2002, 56 one-way (labeled developmental bilingual in Houston) and two-way (labeled bilingual immersion) dual language programs have been implemented, for grades K-8. Because some schools were not yet ready to implement dual language, the Houston multilingual staff approved 90:10 as the model for transitional classes as well as dual language classes, for consistency as students move from school to school. This was an unusual and creative decision. The 90:10 model provides intensive instruction in the minority language, in this case Spanish, for pre-kindergarten, kindergarten, and 1st grade, gradually increasing academic time in English to 50% of the instructional time by 5th grade. Reading is taught first in Spanish, with formal English language arts introduced in second grade. Student outcomes in this program have been very high, in both Spanish and English, on the difficult national norm-referenced tests—the Stanford 9 and Aprenda 2 (see Thomas & Collier, 2002). Examples from some of our analyses are provided in Figures 1 and 2, illustrating cross-sectional Spanish and English reading outcomes for 1st-5th grades. For our next research report, we are working on longitudinal analyses of student achievement data from Houston ISD for grades 1-8.
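The 90:10 allocation described above can be sketched as a simple schedule. Only the endpoints come from the program description (90% Spanish through 1st grade, a 50:50 split by 5th grade); the linear ramp through the intermediate grades is an assumption for illustration, not Houston's documented schedule.

```python
def english_share(grade):
    """Assumed percent of instructional time in English in a 90:10 model:
    10% through grade 1, rising (linearly -- an assumption) to 50% by
    grade 5. Kindergarten is grade 0; pre-K could be passed as -1."""
    if grade <= 1:
        return 10
    return min(10 + (grade - 1) * 10, 50)

for grade in range(6):  # K through 5th grade
    print(f"grade {grade}: {english_share(grade)}% English, "
          f"{100 - english_share(grade)}% Spanish")
```

However the intermediate steps are chosen, the defining feature is the gradual shift from 90:10 to 50:50 while both languages remain languages of instruction.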
Figure 1
Houston ISD ELL Achievement by Program on the 2000 Aprenda 2 in Spanish Reading (mean NCEs)
| Grade | 90:10 Transitional Bilingual Ed | 90:10 Developmental Bilingual Ed | 90:10 Two-way Bilingual Immersion |
|-------|---------------------------------|----------------------------------|----------------------------------|
| 1 | 58 | 60 | 60 |
| 2 | 60 | 63 | 65 |
| 3 | 58 | 60 | 60 |
| 4 | 55 | 55 | 58 |
| 5 | 51 | 55 | 62 |
Total ELLs in 90:10 Transitional Bilingual Education N=6240
Total ELLs in 90:10 Developmental Bilingual Education N=5642
Total ELLs in 90:10 Two-way Bilingual Immersion N=1574
© Copyright Wayne P. Thomas and Virginia P. Collier, 2003
In these analyses, comparison schools were carefully matched to be similar in terms of neighborhood and percentage of students of low socioeconomic background served. As can be seen in the figures, native-Spanish speakers (initially classified as beginning ESL students) in the two-way dual language (bilingual immersion) schools were at or above grade level in both English and Spanish in 1st-5th grades. In English achievement, at all grade levels, ELLs in the two-way classes outscored ELLs in the other two bilingual program types by 7 Normal Curve Equivalents (NCEs) or more, a statistically significant difference. In this 90:10 model, ELLs across all three programs performed astoundingly high in Spanish achievement, well above grade level at the 55th to 65th NCE (60th to 76th percentile) for Grades 1-5, with only the transitional students down to the 51st NCE by fifth grade (when their Spanish instruction was being phased out). This high achievement in Spanish significantly influenced their high achievement in English, in comparison to what we have seen in other school districts implementing little or no primary language support.
English learners attending one-way dual language (developmental bilingual) classes were, by fifth grade, achieving at the 55th NCE (60th percentile) in Spanish, higher achievement than in transitional 90:10 bilingual classes, and at the 41st NCE (34th percentile) in English, about the same as their counterparts in transitional 90:10 classes. But we predict that their higher performance in Spanish and their continuing academic work in both languages in the middle school years will lead to grade-level achievement in English by eighth grade, as we have seen in findings from other school districts. Houston's transitional bilingual classes are phased out in the secondary years, so that students from transitional elementary feeder schools move into all-English instruction at middle school level, before they have completely closed the gap in English. We would predict from our analyses of other school districts that this will lead to somewhat lower achievement in English than that of graduates of the one-way and two-way dual language programs. The long-term goal of the school district is to gradually transform all transitional bilingual classes into enrichment dual language.
**Heritage Language Programs in Maine**
Another example of student outcomes in dual language programs from our recent research (Thomas & Collier, 2002) is the experience of two rural school districts in northern Maine, located on the border with Canada, very close to both French-speaking and English-speaking Canadian provinces. Over 90 percent of the students in these two school districts are of Franco-American/Acadian heritage. Their grandparents still speak French, but their parents were reprimanded for using French in school and they came to view their regional variety of French as a street language not worthy of academic use. Given the economic downturn of the region with few jobs opening for young adults, some of the school board members proposed that they try a bilingual immersion program to develop the students’ lost heritage language. Their ultimate goal was to keep some of their young people in the region, for economic revitalization, by developing businesses operated in both French and English.
Approximately half of the parents chose for their children to be schooled in this 50:50 dual language program, with equal instructional time for the two languages, for Grades K-12. The parents of the other half chose to keep their children in all-English instruction. Both groups were of similar background, socioeconomically and ethnolinguistically. As can be seen in our longitudinal findings in Figure 3, the bilingually schooled students benefited enormously from their schooling in two languages. After four years of the dual language program, former English learners who were achieving at the 40th NCE (31st percentile) before the program started had reached the 62nd NCE (72nd percentile) in English reading on the Terra Nova, well above grade level.
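The NCE-percentile pairings quoted throughout these findings (e.g., the 62nd NCE as roughly the 72nd percentile) follow from the definition of Normal Curve Equivalents: an equal-interval normal scale with mean 50 and standard deviation of about 21.06, chosen so that NCEs of 1, 50, and 99 coincide with those percentiles. A quick converter, sketched with Python's standard library (rounding can differ by a point from published score tables):

```python
from statistics import NormalDist

def nce_to_percentile(nce):
    """Convert a Normal Curve Equivalent (mean 50, SD ~21.06)
    to the corresponding national percentile rank."""
    z = (nce - 50) / 21.06
    return NormalDist().cdf(z) * 100

print(round(nce_to_percentile(62)))  # 72, as in the Maine findings
print(round(nce_to_percentile(40)))  # ~32 (tabled above as the 31st percentile)
```

Because the scale is equal-interval, NCE gains can be averaged and compared across groups, which is why we report outcomes in NCEs rather than percentiles.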
The heritage language, French, has been in strong decline in this region over the past half-century. Yet those families who have chosen for their children to be schooled in both French and English are experiencing dramatic renewal of their heritage language at no cost to their children’s English achievement. The high academic achievement of the bilingually schooled children is an added benefit that has amazed the parents. The community goal with this bilingual program is to produce more student graduates who are academically proficient in both languages of the community, for economic revitalization of the region. There are many parallels between this situation and that of school districts serving Spanish speakers in the
**Longitudinal Comparisons of Program Effectiveness for English Learners**
These two examples from Texas and Maine are among the many fascinating results that continue to astound us in our ongoing analyses. Our six-line figure illustrating our longitudinal findings when comparing the effectiveness of six program types for English learners (Figure 6 in Thomas & Collier, 1997a, available on the Internet) continues to be confirmed as we place the results from each succeeding data set, from each program evaluation that we conduct, into the overall picture of program effectiveness. This six-line figure examines the longitudinal K-12 picture of student achievement on norm-referenced tests of English reading across the curriculum. All lines in the figure represent English learners who started their schooling in the U.S. with no proficiency in English, who were enrolled in a special program for English learners during their elementary school years, and who stayed in the same school district throughout their schooling, allowing us to follow their progress over time.
Both one-way and two-way bilingual programs lead to grade-level and above-grade-level achievement in second language, the only programs that fully close the gap. Groups of English learners attending one-way bilingual classes typically reach grade level achievement in second language by 7th or 8th grade, scoring slightly above grade level through the remainder of their schooling. With the stimulus of native-English-speaking peers in two-way bilingual classes, groups of English learners typically reach grade level achievement in second language by 5th or 6th grade, reaching an average of the 61st NCE or the 70th percentile by the eleventh grade.
This is truly astounding achievement when you consider that it is higher than that of native-English speakers being schooled through their own language, who have all the advantages of nonstop cognitive and academic development and sociocultural support. Native-English speakers' language and identity are not threatened: English is the power and status language and they know it, so from a sociocultural perspective they have a huge advantage in confidence that they can make it in school. Yet English learners can outpace native-English speakers year after year until they reach grade level in their second language, when they are schooled in a high-quality enrichment program that teaches the curriculum through their primary language and through English.
**Outcomes of Dual Language for Teachers, Administrators, and Parents**
The astounding effectiveness of dual language education extends beyond student outcomes, influencing the school experience of all participants. As the program develops and matures, teachers, administrators, and parents in formal and informal interviews all express an awareness that they are part of something very special. Most adults connected to the program begin to view it as a school reform, where school is perceived positively by the whole school community. The respect and nurturing of the multiple cultural heritages and the two main languages present in the school lead to friendships that cross social class and language boundaries. Teachers express excitement, once they have made it through the initial years of planning and implementing an enrichment dual language model, that they love teaching now and would never leave their jobs. They feel they have lots of support, once the staff development and teacher planning time is in place for this innovation. Teachers can see the difference in their students' responsiveness and engagement in lessons. Behavior problems lessen because students feel valued and respected as equal partners in the learning process.
Administrators of dual language schools talk about the enormous amount of planning time needed and the complications of what they are doing. But they add that they absolutely love their jobs and are fully committed to making dual language work for the whole community. Those who serve as principals of whole-school models of dual language tend to stay in their positions for many years, stating that it has changed their life and makes work a great joy. Principals agree that the first years of implementation are not easy, but the end results are worth the hard work. A principal’s commitment to and vision of this reform requires great sensitivity to culturally and linguistically diverse communities and the willingness to stick with the decision to implement a full enrichment model that enhances the achievement of all student groups.
Parents of both language groups tend to participate much more actively in the school, because they feel welcomed, valued, and respected, and included in school decision-making. Often teachers and administrators of dual language schools create after-school activities that welcome family members into lifelong learning partnerships for all ages. Examples of flourishing parent-school partnerships in dual language schools are provided in our federal research report, especially in the findings from Maine and Oregon (Thomas & Collier, 2002).
**Factors Affecting Gap Closure in Dual Language Programs**
While dual language programs are astoundingly successful in comparison to other bilingual/ESL programs developed for English learners, variations in program design and in the tests chosen to measure gap closure can produce different measures of program effectiveness. Here are some issues that program designers and researchers/evaluators might consider during the planning stages of implementing a new program. These issues also apply to existing dual language programs that want to improve their particular model. All programs, including dual language schools in existence for a long time, are a work in progress, as educators respond to the varying needs of their students.
**Test Difficulty**
The average and range of item difficulty on a test vary from one measure to another. Easier tests measure an unrealistically small gap. If your state test has set levels of mastery for each grade level that are lower than average, the lower standards are easier to attain, and the test will indicate an artificially small gap between those who have mastered the curriculum and those who have not. But when students reach the end of high school and their expectation is to continue in a four-year university, they must reach a cutoff score on the admissions test, which is usually a more difficult nationally normed test such as the SAT. Students who have only been tested on the easier tests will feel that they have been misled by their schools when the gates to higher education are closed for them. For this reason, we recommend that school districts use a norm-referenced measure at least once in the secondary school years, testing across the curriculum. This gives students an indicator of how they are performing in relation to students across the country, as they move toward graduation and eventual competition with students from other school districts and states in their adult roles of work and in higher education.
English learners just beginning acquisition of the English language should be tested in their primary language and not in English on a norm-referenced curricular test, while they are acquiring basic academic English. (In a dual language program, the primary language testing
continues throughout the program.) After two years of English acquisition, we find that groups of English learners generally test at around the 8th to 12th percentile (20th to 25th NCE) on a norm-referenced test in English reading across the curriculum. This can be considered their baseline score. Then we follow their progress across time, to see that they are closing the gap in their second language, making more than one year’s progress with each additional year of school, until they reach grade level (50th NCE or percentile).
**Program Implementation**
How the program is implemented can influence the rate at which English learners close the gap. Important principles of dual language include a minimum of six years of bilingual instruction with English learners not segregated, a focus on the core academic curriculum rather than a watered-down version, high-quality language arts instruction in both languages and integrated into thematic units, separation of the two languages with no translation or repeated lessons in the other language, use of the non-English language at least 50 percent of the instructional time and as much as 90 percent in the early grades, and use of collaborative and interactive teaching strategies. How faithful teachers are to these principles can strongly influence the success of the program, and the principal is a key player in making the model happen as planned. Thus a crucial component of this school reform is an active and committed principal who hires qualified teachers and plans collaboratively with staff, providing for ongoing staff development and planning time. The principal also helps to create community partnerships and oversees program implementation and the ongoing evaluation of the program, including student performance on tests.
The quality of and fidelity to these implementation characteristics can lead to significant differences in student achievement. For example, we charted the progress of three dual language programs from first through sixth grade, measuring student performance in English reading across the curriculum each year, as shown in Figure 4. Two programs closed the gap at the rate of 6 NCEs per year, while one program closed the gap at the rate of 3.5 NCEs per year. While all of this is outstanding progress, it will take the students making 3.5 NCE gains an extra 2-3 years to reach grade level achievement in second language. *The difference in the lower-achieving dual language program is that the English learners were separated from the native-English speakers for a two-hour English language arts block.* Year after year, in this program, the English learners went down the hall to their ESL teacher for two hours during the English language arts time, rather than the two groups being instructed together. While the difference between these two conditions is small in a given year, its cumulative effect is quite significant over several years.
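To make the cumulative effect concrete, here is a small back-of-the-envelope sketch (our illustration, not the authors' computation) projecting how many school years each rate of gain takes to reach grade level, assuming a baseline of 22.5 NCEs (the midpoint of the 20th-25th NCE range cited earlier):

```python
# Hypothetical illustration: years of schooling needed to reach grade level
# (the 50th NCE) from an assumed baseline, at the annual gains shown in Figure 4.

GRADE_LEVEL_NCE = 50.0

def years_to_grade_level(baseline_nce: float, annual_gain_nce: float) -> float:
    """Years needed to close the remaining NCE gap at a constant annual gain."""
    return (GRADE_LEVEL_NCE - baseline_nce) / annual_gain_nce

# Assumed baseline of 22.5 NCEs after two years of English acquisition.
fast = years_to_grade_level(22.5, 6.0)   # programs gaining 6 NCEs per year
slow = years_to_grade_level(22.5, 3.5)   # program gaining 3.5 NCEs per year

print(f"6 NCE/yr:   {fast:.1f} years")   # about 4.6 years
print(f"3.5 NCE/yr: {slow:.1f} years")   # about 7.9 years
print(f"extra time: {slow - fast:.1f} years")  # roughly the extra years noted above
```

The arithmetic is linear only as a rough guide; actual trajectories vary year to year, but it shows why a seemingly small difference in annual gain compounds into several extra years of schooling.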
Figure 4
Variations in Rate of Annual Gain Among Selected Dual Language Schools (English reading across the curriculum, in NCEs, Grades 1-6)
- Program A: 5.75 NCEs per year
- Program B: 6 NCEs per year
- Program C: 3.5 NCEs per year

© Copyright Wayne P. Thomas and Virginia P. Collier, 2003
**Type of Dual Language Program**
We have now analyzed enough data from four major variations of dual language to illustrate the annual expected gain for each. These four variations are one-way 90:10, one-way
50:50, two-way 90:10, and two-way 50:50. In Figure 5, we have included the annual gain expected in NCEs for each dual language variation on the norm-referenced test in English, the annual effect size, and the percentage of the academic achievement gap in second language that has been closed by the end of fifth grade, for English learners who had no proficiency in English when they began the dual language program in kindergarten. As can be seen in the Figure, two-way 90:10 programs reach the highest levels of achievement in the shortest amount of time, and one-way 50:50 programs need continuation of the program throughout the middle school years to completely close the achievement gap in English. All four dual language program variations reach much higher achievement levels than transitional bilingual programs, because primary language grade-level schooling is continued for more years in dual language programs, and this is the key to accelerated growth in English, in the long term.
Figure 5
Achievement Gap Closure For English Learners in Dual Language Programs—What Can We Expect?
| Program Type | Annual Gap Closure | Annual Effect Size | % of Gap Closed by Grade 5 |
|--------------------|--------------------|--------------------|-----------------------------|
| One-way 90:10 | 3 - 5 NCEs | 0.14 - 0.24* | 70% - 100% + |
| One-way 50:50 | 3 NCEs | 0.14 | 70% |
| Two-way 90:10      | 4 - 6 NCEs         | 0.19 - 0.29*       | 95% - 100% +                |
| Two-way 50:50 | 3.5 - 5 NCEs | 0.14 - 0.24* | 70% - 100% + |
*= meaningful and significant annual effect
Notes:
(1) Using norm-referenced tests – a difficult test measures the true gap size; an easier test underestimates the gap
(2) ELLs started at grade K with no exposure to English
(3) Achievement gap = 1.2 national standard deviations
© Copyright Wayne P. Thomas and Virginia P. Collier, 2003
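As a reading aid (our reconstruction, not the authors' computation), the relationships among the columns of Figure 5 can be reproduced with simple arithmetic, assuming the conventional NCE standard deviation of about 21.06 and roughly six years of annual gains from kindergarten through grade 5:

```python
# Sketch of the arithmetic behind Figure 5: an annual effect size is the annual
# NCE gain divided by the national standard deviation, and the share of the gap
# closed compares cumulative gains against the 1.2-SD gap in note (3).
# NCE_SD = 21.06 and YEARS = 6 are our assumptions, not stated in the table.

NCE_SD = 21.06   # standard deviation of the NCE scale (assumed)
GAP_SD = 1.2     # achievement gap in national standard deviations (note 3)
YEARS = 6.0      # assumed years of gains, kindergarten through grade 5

def annual_effect_size(nce_gain_per_year: float) -> float:
    return nce_gain_per_year / NCE_SD

def pct_gap_closed(nce_gain_per_year: float, years: float = YEARS) -> float:
    """Percent of the 1.2-SD gap closed after `years` of constant annual gains."""
    return min(nce_gain_per_year * years / (GAP_SD * NCE_SD), 1.0) * 100

print(round(annual_effect_size(3.0), 2))  # ~0.14, matching the one-way 50:50 row
print(round(pct_gap_closed(3.0)))         # ~70%, matching that row
print(pct_gap_closed(6.0))                # capped at 100% (gap fully closed)
```

Under these assumptions the published ranges fall out directly: 3 NCEs/year gives an effect size near 0.14 and about 70% of the gap closed, while 6 NCEs/year more than closes the 1.2-SD gap.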
**Is the English-only Mainstream More Effective than Dual Language Mainstream Classes?**
We have the best answer to this question from our data analyses from Houston ISD. Since Houston is a huge school district, there are many students in every program type. The research division of Houston ISD was able to identify 1,599 students who entered Houston
schools as beginning ESL students but whose parents refused special services for their children. Against the counsel of the Houston educators, these parents preferred to place their children in the English mainstream with no bilingual or ESL support. In our federal report (Thomas & Collier, 2002) the results of this decision are graphically illustrated in Figure C-1. These students were on grade level in second grade, when they first took the Stanford 9, but with each succeeding grade, as the curriculum became cognitively more complex, the group did less and less well. By 11th grade, those remaining in school were scoring at the 25th NCE (12th percentile), and the majority of this group did not complete high school.
We then compared these Houston Stanford 9 results to California's Stanford 9 results, as seen here in Figure 6. After more than 12 months of intensive ESL classes under Proposition 227, we found that ESL students' achievement was remarkably similar to that of the Houston "refusers." *In other words, in its effect on English learners' achievement, California's Proposition 227 is virtually the same as no special program at all.* Other English learners in both Texas and California who received some type of special services, whether transitional bilingual education, content ESL, and/or dual language, are coming closer to closing, or have closed, the achievement gap, given enough years of schooling. We strongly recommend that parents who refuse bilingual/ESL services for their children be informed that their children's long-term academic achievement will probably be much lower as a result. While the curricular mainstream may appear to speed their children's acquisition of basic English, it does not lead to long-term academic success in English.
Figure 6
*California 2001 and Texas 1999 Stanford 9 Total Reading*
**The Next Steps**
Dual language models of schooling are spreading rapidly as more and more principals hear about this school reform. In many states—especially in Texas, New Mexico, New York, California, Washington, Illinois, and the Washington, D.C. metropolitan area—dual language is expanding to many new schools. Websites that provide locations of dual language schools and their characteristics and contact information include: www.cal.org/twi; www.texastwoway.org; www.duallanguagenm.org; and www.cde.ca.gov/el/twoway/directory.html. Other websites that provide extensive publications and research reports on dual language include www.crede.ucsc.edu and www.ncela.gwu.edu. For lack of space in this article, we have not provided a literature synthesis, but we have written many research syntheses that report on our research findings and those of the many other researchers who have contributed to the foundation
and knowledge base for dual language schooling (Collier, 1989, 1992a, 1992b, 1995a, 1995b, 1995c; Collier & Thomas, 1989, 1999a, 1999b, 1999c, 2002, in press; Thomas, 1992; Thomas & Collier, 1997a, 1997b, 1999, 2002, 2003, in press; Thomas, Collier & Abbott, 1993). The following sources are also full of citations on both research findings and implementation strategies in dual language education (Bilingual Research Journal, Spring, 2002; Calderon & Minaya-Rowe, 2003; Cloud, Genesee & Hamayan, 2000; Freeman, 1998; Howard & Christian, 2002; Lindholm-Leary, 2000, 2001; Montone & Loeb, 2000; NABE News, July/August, 2003).
A next major step for researchers is to produce the next generation of bilingual education researchers who will conduct program evaluation research, to determine which particular forms of dual language programs are most effective. As more and more dual language schools develop, many variations in implementation are evolving. Evolution of the model may lead to even higher achievement, but researchers may also identify less effective forms of implementation. This is an exciting time for researchers to join with educators in collaborative efforts for meaningful school reform.
For example, Professor Leo Gomez at the University of Texas-Pan American has over the past decade forged a collaborative research relationship with dual language schools in South Texas that are implementing a promising form of dual language education in one-way demographic contexts along the U.S.-Mexican border. Professor Kathryn Lindholm-Leary at San Jose State University in California has conducted the largest number of longitudinal studies on student achievement in two-way dual language schools in California. Pauline Dow in Canutillo ISD, Texas, has initiated a whole-school-district model of one-way dual language schooling with a comprehensive system for data collection and long-term evaluation of the program as it evolves.
Annual conferences focusing on dual language education are spreading to many states. Two-way CABE was the first in 1993, with others following California's example in New Mexico, Texas (with several annual regional two-way dual language conferences), New York, Illinois, Connecticut, and the state of Washington. These conferences give bilingual educators a forum focused on this enrichment model: planning implementation strategies, staff development, networking, parent advocacy, and reports on the research.
Clearly dual language education is a school reform whose time has come. It is a school model that even the English-only advocates endorse, because it is an inclusive model for all students, and all student groups benefit from participating. The research results are promising, but our work as researchers has just begun. Let's get the next generation of researchers working on longitudinal analyses and analyzing all the details of this school reform. We may be astounded at dual language's impact on our own lives as educators and researchers, since we are all, together, lifelong learners.
**References**
Calderon, M.E., & Minaya-Rowe, L. (2003). Designing and implementing two-way bilingual programs: A step-by-step guide for administrators, teachers, and parents. Thousand Oaks, CA: Corwin Press.
Cloud, N., Genesee, F., & Hamayan, E. (2000). *Dual language instruction: A handbook for enriched education*. Boston: Heinle & Heinle.
Collier, V.P. (1989). How long? A synthesis of research on academic achievement in second language. *TESOL Quarterly, 23*, 509-531.
Collier, V.P. (1992a). The Canadian bilingual immersion debate: A synthesis of research findings. *Studies in Second Language Acquisition, 14*, 87-97.
Collier, V.P. (1992b). A synthesis of studies examining long-term language minority student data on academic achievement. *Bilingual Research Journal, 16*(1-2), 187-212.
Collier, V.P. (1995a). *Acquiring a second language for school*. Washington, DC: National Clearinghouse for English Language Acquisition. [www.ncela.gwu.edu](http://www.ncela.gwu.edu)
Collier, V.P. (1995b). *Promoting academic success for ESL students: Understanding second language acquisition for school*. Woodside, NY: New Jersey Teachers of English to Speakers of Other Languages-Bilingual Educators. (Available from Bastos Educational Publications at 1-800-662-0301.)
Collier, V.P. (1995c). Second language acquisition for school: Academic, cognitive, sociocultural, and linguistic processes. In J.E. Alatis et al. (Eds.), *Georgetown University Round Table on Languages and Linguistics 1995* (pp. 311-327). Washington, DC: Georgetown University Press.
Collier, V.P., & Thomas, W.P. (1989). How quickly can immigrants become proficient in school English? *Journal of Educational Issues of Language Minority Students, 5*, 26-38.
Collier, V.P., & Thomas, W.P. (1999a, August/September). Making U.S. schools effective for English language learners, Part 1. *TESOL Matters, 9*(4), 1, 6. [www.tesol.org/pubs/articles/1999/tm9908-01.html](http://www.tesol.org/pubs/articles/1999/tm9908-01.html)
Collier, V.P., & Thomas, W.P. (1999b, October/November). Making U.S. schools effective for English language learners, Part 2. *TESOL Matters, 9*(5), 1, 6. [www.tesol.org/pubs/articles/1999/tm9910-01.html](http://www.tesol.org/pubs/articles/1999/tm9910-01.html)
Collier, V.P., & Thomas, W.P. (1999c, December/January). Making U.S. schools effective for English language learners, Part 3. *TESOL Matters, 9*(6), 1, 10. [www.tesol.org/pubs/articles/1999/tm9912-01.html](http://www.tesol.org/pubs/articles/1999/tm9912-01.html)
Collier, V.P., & Thomas, W.P. (2002). Reforming education policies for English learners means better schools for all. *The State Education Standard, 3*(1), 30-36. (The quarterly journal of the National Association of State Boards of Education, Alexandria, Virginia)
Collier, V.P., & Thomas, W.P. (in press). Predicting second language academic success in English using the Prism Model. In J. Cummins & C. Davison (Eds.), *Kluwer International Handbook of English language teaching*. Dordrecht, The Netherlands: Kluwer.
Freeman, R.D. (1998). *Bilingual education and social change*. Clevedon, UK: Multilingual Matters.
Howard, E.R., & Christian, D. (2002). *Two-way immersion 101: Designing and implementing a two-way immersion education program at the elementary level*. Santa Cruz, CA and Washington, DC: Center for Research on Education, Diversity & Excellence. [www.cal.org/crede/pubs/edpractice/EPR9.html](http://www.cal.org/crede/pubs/edpractice/EPR9.html)
Lindholm-Leary, K.J. (2000). *Biliteracy for a global society: An idea book on dual language education*. Washington, DC: National Clearinghouse for English Language Acquisition. www.ncela.gwu.edu
Lindholm-Leary, K.J. (2001). *Dual language education*. Clevedon, UK: Multilingual Matters.
Montone, C., & Loeb, M. (2000). *Implementing two-way immersion programs in secondary schools*. Santa Cruz, CA, and Washington, DC: Center for Research on Education, Diversity & Excellence. www.cal.org/crede/pubs/edpractice/EPR5.html
Parrish, T.B., Linquanti, R., Merickel, A., Quick, H.E., Laird, J., & Esra, P. (2002). *Effects of the implementation of Proposition 227 on the education of English learners, K-12: Year 2 report*. Palo Alto, CA: American Institutes for Research, and San Francisco: WestEd.
Thomas, W.P. (1992). An analysis of the research methodology of the Ramirez study. *Bilingual Research Journal, 16*(1-2), 213-245.
Thomas, W.P., & Collier, V.P. (1997a). *School effectiveness for language minority students*. Washington, DC: National Clearinghouse for English Language Acquisition. www.ncela.gwu.edu/ncbepubs/resource/effectiveness/index.html
Thomas, W.P., & Collier, V.P. (1997b). Two languages are better than one. *Educational Leadership, 55*(4), 23-26.
Thomas, W.P., & Collier, V.P. (1999). Accelerated schooling for English language learners. *Educational Leadership, 56*(7), 46-49.
Thomas, W.P., & Collier, V.P. (2002). *A national study of school effectiveness for language minority students' long-term academic achievement*. Santa Cruz, CA, and Washington, DC: Center for Research on Education, Diversity & Excellence. www.crede.ucsc.edu/research/liaa/I.I_final.html
Thomas, W.P., & Collier, V.P. (2003). The multiple benefits of dual language. *Educational Leadership, 61*(2), 61-64.
Thomas, W.P., & Collier, V.P. (in press). *What we know about: Effective instructional approaches for language minority learners*. Arlington, VA: Educational Research Service.
Thomas, W.P., Collier, V.P., & Abbott, M. (1993). Academic achievement through Japanese, Spanish, or French: The first two years of partial immersion. *Modern Language Journal, 77*, 170-179.
Thompson, M.S., DiCerbo, K.E., Mahoney, K., & MacSwan, J. (2002, January 25). Exito en California? A validity critique of language program evaluations and analysis of English learner test scores. *Education Policy Analysis Archives, 10*(7). www.epaa.asu.edu/epaa/v10n7
Analyses of turbulence in a wind tunnel by a multifractal theory for probability density functions
| Field | Value |
|----------|----------|
| Author (alternate name) | 有光 敏彦 |
| Journal or publication title | Fluid dynamics research |
| Volume | 44 |
| Number | 3 |
| Page range | 031402 |
| Year | 2012-05 |
| Rights | (C) IOP Publishing 2012 |
| URL | http://hdl.handle.net/2241/117422 |
doi: 10.1088/0169-5983/44/3/031402
Analyses of turbulence in a wind tunnel by a multifractal theory for probability density functions
Toshihico Arimitsu\textsuperscript{1}\textsuperscript{†}, Naoko Arimitsu\textsuperscript{2} and Hideaki Mouri\textsuperscript{3}
\textsuperscript{1}Faculty of Pure and Applied Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8571, JAPAN
\textsuperscript{2}Faculty of Environment and Information Sciences, Yokohama National University, Yokohama, Kanagawa 240-8501, JAPAN
\textsuperscript{3}Meteorological Research Institute, Tsukuba, Ibaraki 305-0052, JAPAN
E-mail: email@example.com
Abstract. The probability density functions (PDFs) for energy dissipation rates, created from time-series data of grid turbulence in a wind tunnel, are analyzed with high precision by the theoretical formulae for PDFs within the multifractal PDF theory, which is constructed under the assumption that there are two main elements constituting fully developed turbulence, i.e., coherent and incoherent elements. The tail part of the PDF, representing intermittent coherent motion, is determined by a Tsallis-type PDF for singularity exponents with essentially one parameter, with the help of a new scaling relation whose validity is checked for the case of the grid turbulence. For the central part of the PDF, representing contributions from both the coherent motion and the fluctuating incoherent motion surrounding it, we introduce a trial function specified by three adjustable parameters which, remarkably, represent scaling behaviors in a much wider region than the inertial range. From the investigation of the difference between two finite-difference formulae approximating the velocity time derivative, it is revealed that the connection point between the central and tail parts of the PDF, extracted by theoretical analyses of the PDFs, is actually the boundary between the two kinds of instabilities associated, respectively, with the coherent and incoherent elements.
Keywords: Multifractal, Fat tail, Intermittency, Turbulence, Energy dissipation rates
\textsuperscript{†} Corresponding author: firstname.lastname@example.org
1. Introduction
There are several keystone works (Mandelbrot 1974, Parisi and Frisch 1985, Benzi et al 1984, Halsey et al 1986, Meneveau and Sreenivasan 1987, Nelkin 1990, Hosokawa 1991, Benzi et al 1991, She and Leveque 1994, Dubrulle 1994, She Z-S and Waymire 1995, Arimitsu T and N 2000a, 2000b, 2001, 2002, Arimitsu N and T 2002, Biferale et al 2004, Chevillard et al 2006) providing multifractal aspects of fully developed turbulence. Only a few works (Benzi et al 1991, Arimitsu T and N 2001, 2002, Arimitsu N and T 2002, Biferale et al 2004, Chevillard et al 2006) analyze the probability density functions (PDFs) for physical quantities representing the intermittent character. The other works deal only with the scaling properties of the system, e.g., comparison of the scaling exponents of the velocity structure functions. Among the studies analyzing PDFs, the multifractal probability density function theory (MPDFT) (Arimitsu T and N 2001, 2002, 2011, Arimitsu N and T 2002, 2011) provides the most precise analysis of fat-tail PDFs. MPDFT is a statistical mechanical ensemble theory constructed by two of the authors (T.A. and N.A.) in order to analyze intermittent phenomena exhibiting fat-tail PDFs.
To extract the intermittent character of fully developed turbulence, it is necessary to have information on the self-similar hierarchical structure of the system. This is realized by producing a series of PDFs for the responsible singular quantities at different lengths
\[
\ell_n = \ell_0 \delta^{-n}, \quad \delta > 1 \quad (n = 0, 1, 2, \cdots)
\]
that characterize the regions in which the physical quantities are coarse-grained. The value of $\delta$ can be chosen freely by observers. Let us assume that the self-similar structure of fully developed turbulence is such that the choice of $\delta$ should not affect the theoretical estimates of the fundamental quantities characterizing the turbulent system under consideration. The A&A model within the framework of MPDFT tells us that this requirement is satisfied if the scaling relation has the form (Arimitsu T and N 2011, Arimitsu N and T 2011)
\[
\frac{\ln 2}{(1 - q) \ln \delta} = \frac{1}{\alpha_-} - \frac{1}{\alpha_+}.
\]
Here, $q$ is the index associated with the Rényi entropy (Rényi 1961) or with the Havrdá-Charvat and Tsallis (HCT) entropy (Havrdá and Charvat 1967, Tsallis 1988); $\alpha_\pm$ are zeros of the multifractal spectrum $f(\alpha)$ (see below in section 2). The multifractal spectrum is uniquely related to the PDF for $\alpha$ (see (4) below). The PDF of $\alpha$ is related to the tail part of PDFs within MPDFT for those quantities revealing intermittent behavior whose singularity exponents can have values $\alpha < 1$, e.g., the energy dissipation rates, through the variable transformation between $\alpha$ and the physical quantities (see (3) below for the case of the energy dissipation rates $\varepsilon_n$). With the new scaling relation (2), observables have come to depend on the parameter $\delta$ only in the combination $(1-q)\ln \delta$. The difference in $\delta$ is absorbed in the entropy index $q$. §
§ Since almost all the PDFs that had been provided previously were for the case where $\delta = 2$, it has been possible to analyze PDFs (Arimitsu T and N 2001, 2002, Arimitsu N and T 2002) with the scaling relation $1/(1-q) = 1/\alpha_- - 1/\alpha_+$ proposed by Costa et al (1998) and Lyra and Tsallis (1998) in connection with the $2^\infty$ periodic orbit. The orbit having the marginal instability of zero Liapunov exponent appears at the threshold to chaos via a period-doubling bifurcation in one-dimensional dissipative maps.
In the preceding papers, we analyzed PDFs for energy transfer rates (Arimitsu T and N 2011) and PDFs for energy dissipation rates (Arimitsu N and T 2011), which are given in figure 11 of Aoyama et al (2005), with the help of the new scaling relation, and checked the independence of the PDFs from $\delta$. It was found that the adjustable parameters for the central part of the PDF provide $\delta$-independent scaling behaviors as functions of $r/\eta$, and that the scaling properties are satisfied in a much wider region, not restricted to the inertial range. However, the number of data points used in drawing the PDFs in figure 11 of Aoyama et al (2005) is not sufficient, especially for the precise analyses of the central part of the PDFs performed in Arimitsu T and N (2011) and Arimitsu N and T (2011). Therefore, as one of the aims of the present paper, we will perform the same analyses, previously done for DNS data, on PDFs created from wind tunnel turbulence with sufficiently high resolution, in order to check whether the characteristics discovered previously with rather poor resolution at the central part are correct. Since we have the raw time-series data taken from wind tunnel turbulence, we can create PDFs for energy dissipation rates with resolution fit to our needs.
In this paper, we analyze the PDFs for energy dissipation rates extracted from the time series of the velocity field of fully developed turbulence, observed by one of the authors (H.M.) in an experiment conducted in a wind tunnel (Mouri et al 2008). In section 2, we present the formulae of the theoretical PDFs within the A&A model which are necessary in the following sections for the analyses of the PDFs obtained from the experimental turbulence. In section 3, we analyze the observed PDFs for energy dissipation rates with high precision using the theoretical PDF within the A&A model of MPDFT, and verify the proposed assumption related to the magnification $\delta$. In section 4, in order to see what information we can extract from the time-series data, we compare two different PDFs for energy dissipation rates created from the time-series data with different approximations for the temporal derivative. We may learn from this how to treat the central part of the PDFs to derive information on the incoherent fluctuating motion around the coherent turbulent motion. Summary and discussion are provided in section 5.
2. Singularity exponent and PDFs for energy dissipation rates
MPDFT is constructed under the assumption, following Parisi and Frisch (1985), that for high Reynolds number the singularities distribute themselves in a multifractal way in real physical space. The singularities stem from the invariance of the Navier-Stokes (N-S) equation for an incompressible fluid under the scale transformation $\vec{x} \rightarrow \vec{x}' = \lambda \vec{x}$,
accompanied by the scale changes $\vec{u} \to \vec{u}' = \lambda^{\alpha/3} \vec{u}$ in the velocity field, $t \to t' = \lambda^{1-\alpha/3} t$ in time and $p \to p' = \lambda^{2\alpha/3} p$ in pressure, with an arbitrary real number $\alpha$, in the limit of large Reynolds number, i.e., when the contribution from the dissipation term in the N-S equation, which is proportional to the kinematic viscosity $\nu$, is negligibly small compared with the convection term. In treating an actual turbulent system, however, $\nu$ is fixed to a finite value unique to the fluid prepared for the experiment. We should keep in mind that the dissipation term can become effective depending on the region under consideration, since the invariance-breaking term does exist, i.e., $\nu$ is non-zero (see the discussion in the following).
The invariance under the scale transformation leads to the scaling property
$$\varepsilon_n / \epsilon = (\ell_n / \ell_0)^{\alpha - 1}$$
for the energy dissipation rate $\varepsilon_n$ averaged in the regions with diameter $\ell_n$. Here, we put $\varepsilon_0 = \epsilon$ whose value is assumed to be constant. The energy dissipation rate becomes singular for $\alpha < 1$, i.e., $\lim_{n \to \infty} \varepsilon_n = \lim_{n \to \infty} \ell_n^{\alpha - 1} \to \infty$. The degree of singularity is specified by the singularity exponent $\alpha$ (Parisi and Frisch 1985).
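The singular behavior for $\alpha < 1$ can be seen with a few lines of arithmetic (an illustration only; the values of $\delta$, $n$ and $\alpha$ here are arbitrary choices, not values from the paper):

```python
# Illustration of the scaling ε_n/ε = (ℓ_n/ℓ_0)^(α-1) with ℓ_n/ℓ_0 = δ^(-n).
# For α < 1 the coarse-grained dissipation rate grows without bound as the
# region size shrinks (the intermittent, singular case); for α > 1 it vanishes.

def eps_ratio(alpha: float, n: int, delta: float = 2.0) -> float:
    """Coarse-grained dissipation ratio ε_n/ε at coarse-graining level n."""
    ell_ratio = delta ** (-n)           # ℓ_n/ℓ_0
    return ell_ratio ** (alpha - 1.0)

singular = [eps_ratio(0.7, n) for n in (1, 5, 10)]   # grows with n (α < 1)
regular = [eps_ratio(1.3, n) for n in (1, 5, 10)]    # shrinks with n (α > 1)

assert singular[0] < singular[1] < singular[2]
assert regular[0] > regular[1] > regular[2]
print(singular, regular)
```

At $\alpha = 1$ the ratio is identically 1, reproducing the non-intermittent Kolmogorov picture of a space-filling dissipation field.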
Let us consider $\alpha$ to be a stochastic variable whose PDF $P^{(n)}(\alpha)$ is given by the Rényi or HCT type function (Arimitsu T and N 2000a, 2000b, 2001, 2002, 2011, Arimitsu N and T 2002, 2011):
$$P^{(n)}(\alpha) \propto \left[1 - (\alpha - \alpha_0)^2 / (\Delta \alpha)^2\right]^{n/(1-q)}$$
with $\Delta \alpha = [2X/((1-q) \ln \delta)]^{1/2}$. The domain of $\alpha$ is $\alpha_{\text{min}} \leq \alpha \leq \alpha_{\text{max}}$ with $\alpha_{\text{min}}$ and $\alpha_{\text{max}}$ being given by $\alpha_{\text{min/max}} = \alpha_0 \mp \Delta \alpha$. $q$ is the entropy index.\footnote{The function (4) is the MaxEnt PDF derived from the Rényi entropy or from the HCT entropy with two constraints, one is the normalization condition and the other is a fixed $q$-variance (Tsallis 1988). This choice of PDF is also quite natural since the Rényi entropy and the HCT entropy are directly related to the generalized dimension (Hentschel and Procaccia 1983) describing those systems containing multifractal structures (Grassberger 1983). Note that for the HCT entropy the relation is given with the help of the $q$-exponential (Tsallis 2001) which is a function satisfying a scaling invariance (Suyari and Wada 2006) and reduces to the ordinary exponential for $q \to 1$.}
From (4), we have for $n \gg 1$ the expression of the multifractal spectrum
$$f(\alpha) = 1 + \frac{\ln \left[1 - (\alpha - \alpha_0)^2 / (\Delta \alpha)^2\right]}{(1-q) \ln \delta}.$$
The independence of $f(\alpha)$ from $n$ is interpreted as a manifestation of the existence of self-similar hierarchical structure responsible for the intermittent fluid motion of turbulence.
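To make the shape of the multifractal spectrum concrete, here is a small numerical sketch with assumed illustrative parameters ($\alpha_0$, $\Delta\alpha$, $q$, $\delta$ are NOT the fitted values of the paper). Setting $f(\alpha_\pm) = 0$ gives $\alpha_\pm = \alpha_0 \pm \Delta\alpha\,\sqrt{1 - \delta^{q-1}}$, which the code checks:

```python
import math

# f(α) = 1 + ln[1 - (α-α0)²/Δα²] / [(1-q) ln δ], the multifractal spectrum.
# Its maximum value is 1 at α = α0, and its zeros α± satisfy
# 1 - (α± - α0)²/Δα² = δ^(q-1).

def f_spectrum(alpha: float, alpha0: float, d_alpha: float,
               q: float, delta: float) -> float:
    inside = 1.0 - (alpha - alpha0) ** 2 / d_alpha ** 2
    return 1.0 + math.log(inside) / ((1.0 - q) * math.log(delta))

# Assumed illustrative parameter values for the demonstration.
alpha0, d_alpha, q, delta = 1.0, 0.5, 0.4, 2.0

half_width = d_alpha * math.sqrt(1.0 - delta ** (q - 1.0))
a_plus = alpha0 + half_width   # right zero of f(α)
a_minus = alpha0 - half_width  # left zero of f(α)

assert abs(f_spectrum(alpha0, alpha0, d_alpha, q, delta) - 1.0) < 1e-12
assert abs(f_spectrum(a_plus, alpha0, d_alpha, q, delta)) < 1e-9
assert abs(f_spectrum(a_minus, alpha0, d_alpha, q, delta)) < 1e-9
print(a_minus, a_plus)
```

In an actual analysis $\alpha_0$, $X$ (hence $\Delta\alpha$) and $q$ are not free: they are fixed by the three conditions described next.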
The three parameters $\alpha_0$, $X$ and $q$ appearing in $P^{(n)}(\alpha)$ are determined as functions of the intermittency exponent $\mu$ with the help of three conditions. One is the energy conservation law $\langle \varepsilon_n \rangle = \epsilon$. Another is the definition of the intermittency exponent $\mu$, i.e., $\langle (\varepsilon_n / \epsilon)^2 \rangle = (\ell_n / \ell_0)^{-\mu}$. The last condition is the scaling relation (2) with $\alpha_\pm$ being the solutions of $f(\alpha_\pm) = 0$, which is a generalization of the relation introduced by Tsallis and his coworkers (Costa et al 1998, Lyra and Tsallis 1998), to which (2) reduces when $\delta = 2$. Here, the average $\langle \cdots \rangle$ is taken with $P^{(n)}(\alpha)$. The parameter $q$ is determined, together with $\alpha_0$ and $X$, as a function of $\mu$ only in the combination $(1-q)\ln\delta$.
The difference in $\delta$ is absorbed into the entropy index $q$; changing the zooming rate $\delta$ may therefore amount to picking up a different hierarchy, with the entropy specified by the index $q$, out of the self-similar structure of turbulence. As the parameters depend on $q$ and $\delta$ only through the combination $(1-q)\ln\delta$, we are naturally led to replace $n$ in the expression of $P^{(n)}(\alpha)$ in (4) by $n = \tilde{n}/\ln\delta$. If $\tilde{n}$ does not depend on $\delta$, $P^{(n)}(\alpha)$ also becomes independent of $\delta$.¶ Note that, with the new number $\tilde{n}$, $\ell_n$ introduced in (1) reduces to
$$\ell_n = \ell_0 e^{-\tilde{n}}.$$
MPDFT provides us with a systematic framework to connect the PDF $P^{(n)}(\alpha)$ of the singularity exponent $\alpha$ with the PDF of an observed quantity, such as the energy dissipation rate, representing intermittent singular behavior in its time-evolution. The element of fluid motion specified by a singularity exponent satisfying $\alpha < 1$ accounts for the intermittent large (singular) spikes observed in the time-evolution of the energy dissipation rate, and contributes to the tail part of the PDF for energy dissipation rates (see figure 1 (a) and (b)). This element is directly related to a coherent hierarchical structure such as the multi-scale Cantor set characterized by the multifractal spectrum $f(\alpha)$. Therefore, the fluid motion controlled by this element is referred to as a *coherent* motion. There is another element of fluid motion, due to the symmetry breaking term, i.e., the dissipation term in the N-S equation, which produces fluctuations of the fluid surrounding the coherent turbulent motion. This element contributes mainly to the central part of the PDF (see figure 1 (a) and (b)). The fluid motion provided by this element is referred to as an *incoherent* motion. Note that the central part of the PDF consists of two elements, i.e., the incoherent and coherent motions.
Based on the above consideration, we assume that the probability $\Pi_3^{(n)}(\varepsilon_n)d\varepsilon_n$ can be, generally, divided into two parts as
$$\Pi_3^{(n)}(\varepsilon_n)d\varepsilon_n = \Pi_{3,S}^{(n)}(\varepsilon_n)d\varepsilon_n + \Delta\Pi_3^{(n)}(\varepsilon_n)d\varepsilon_n$$
(see figure 1 (a) and (b)). The first term describes the coherent motion, i.e., the contribution from the abnormal part of the physical quantity $\varepsilon_n$ due to the fact that its singularities distribute themselves in a multifractal way in real space. This is the part representing the coherent turbulent motion realized in the limit $\nu \to 0$, although $\nu$ itself remains finite ($\nu \neq 0$). The second term represents the contribution from the incoherent fluctuating motion. The normalization of the PDF is specified by $\int_0^\infty d\varepsilon_n \Pi_3^{(n)}(\varepsilon_n) = 1$. We assume that the coherent contribution is given by (Arimitsu T and N 2001)
$$\Pi_{3,S}^{(n)}(\varepsilon_n)d\varepsilon_n = \bar{\Pi}_{3,S}^{(n)}P^{(n)}(\alpha)d\alpha$$
with the variable transformation (3). For the expression of $\bar{\Pi}_{3,S}^{(n)}$, see Arimitsu N and T (2011).
¶ The introduction of $\tilde{n}$ is intimately related to the infinitely divisible process (Dubrulle 1994, She and Waymire 1995). This is supported by the observation in the present paper that $\tilde{n}$ is independent of $\delta$ and has values of $\mathcal{O}(1)$ (see table 2). Then, taking the limit $\delta \to 1+$ with a fixed value of $\tilde{n}$, one has an infinitely divisible distribution. A detailed investigation of the A&A model from this viewpoint will be given elsewhere in the near future.
**Figure 1.** Two kinds of divisions of the PDF $\Pi_3^{(n)}(\varepsilon_n)$. The division into $\Pi_{3,S}^{(n)}(\varepsilon_n)$ and $\Delta \Pi_3^{(n)}(\varepsilon_n)$ is given on (a) linear and (b) log scales in the vertical axes. The division into $\hat{\Pi}_{3,cr}^{(n)}(\xi_n)$ and $\hat{\Pi}_{3,tl}^{(n)}(\xi_n)$ is given in (c) on log scale. The open circles represent an experimental PDF for energy dissipation rates. The contribution of $\Delta \Pi_3^{(n)}(\varepsilon_n)$ to the tail part $\hat{\Pi}_{3,tl}^{(n)}(\xi_n)$ is negligibly small.
Let us introduce another division of the PDF (see figure 1 (c)), i.e.,
$$\hat{\Pi}_3^{(n)}(\xi_n) = \hat{\Pi}_{3,cr}^{(n)}(\xi_n) + \hat{\Pi}_{3,tl}^{(n)}(\xi_n),$$
where $\hat{\Pi}_3^{(n)}(\xi_n)$ is introduced by the relation $\hat{\Pi}_3^{(n)}(\xi_n)d\xi_n = \Pi_3^{(n)}(\varepsilon_n)d\varepsilon_n$ with the variable transformation $\xi_n = \varepsilon_n/\langle \varepsilon_n^2 \rangle_c^{1/2}$, where the cumulant average $\langle \cdots \rangle_c$ is taken with the PDF $\Pi_3^{(n)}(\varepsilon_n)$. The two parts of the PDF, the tail part $\hat{\Pi}_{3,tl}^{(n)}(\xi_n)$ and the central part $\hat{\Pi}_{3,cr}^{(n)}(\xi_n)$, are connected at $\xi_n = \xi_n^*$ under the conditions that they have a common value and a common log-slope there. Note that $\xi_n^*$ is related to $\varepsilon_n^*$ by $\xi_n^* = \varepsilon_n^*/\langle \varepsilon_n^2 \rangle_c^{1/2}$ and to $\alpha^*$ by (3). The value of $\alpha^*$ is determined for each PDF as an adjusting parameter in the analysis of PDFs obtained by laboratory or numerical experiments.
When one creates a PDF from the time-evolution data for the microscopic energy dissipation rate, one puts each realization into an appropriate bin according to the value $\varepsilon_n$, which is obtained by averaging the microscopic energy dissipation rates over each time interval corresponding to the length $\ell_n$. For larger $\varepsilon_n$ values belonging to the tail part domain of the PDF, most of the realizations in a bin at the interval $\varepsilon_n \sim \varepsilon_n + d\varepsilon_n$ come from time intervals containing at least one intermittently large spike (singular spike) of microscopic energy dissipation rates. Such a bin has a negligibly small proportion of realizations coming from intervals with only fluctuations, compared with the number of realizations with at least one singular spike. On the other hand, for smaller $\varepsilon_n$ values belonging to the central part domain of the PDF, the number of realizations coming from time intervals containing singular spikes with smaller heights is of about the same order as the number of realizations from time intervals containing only fluctuations, since the height of the singular spikes contributing to such a bin must be about the same as that of the fluctuations.
Under the above interpretation, it may be reasonable to assume that, for the tail part of the PDF $\hat{\Pi}_{3,\text{tl}}^{(n)}(\xi_n)$, the contribution from the first term $\Pi_{3,S}^{(n)}(\varepsilon_n)$ in (7) dominates the intermittent rare events, and the contribution from the second term $\Delta \Pi_3^{(n)}(\varepsilon_n)$ to these events is negligible, i.e.,
$$\hat{\Pi}_{3,\text{tl}}^{(n)}(\xi_n) \ d\xi_n = \Pi_{3,S}^{(n)}(\varepsilon_n) \ d\varepsilon_n$$ \hspace{1cm} (9)
for $\xi_n^* \leq \xi_n$. For $0 \leq \xi_n \leq \xi_n^*$, as there is at present no theory for the central part of the PDF $\hat{\Pi}_{3,\text{cr}}^{(n)}(\xi_n)$, we put
$$\hat{\Pi}_{3,\text{cr}}^{(n)}(\xi_n)d\xi_n = \bar{\Pi}_3^{(n)} e^{-[g_3(\xi_n)-g_3(\xi_n^*)]} \ (\ell_n/\ell_0)^{1-f(\alpha^*)} \ (\bar{\xi}_n/\xi_n^*) \ d\xi_n$$ \hspace{1cm} (10)
with $\bar{\Pi}_3^{(n)} = \bar{\Pi}_{3,S}^{(n)} \sqrt{|f''(\alpha_0)|/2\pi|\ln(\ell_n/\ell_0)|}/\bar{\xi}_n$ and a trial function of the Tsallis-type
$$e^{-g_3(\xi_n)} = (\xi_n/\xi_n^*)^{\theta-1} \left\{1 - (1-q') \left[\theta + f'(\alpha^*)\right] \left[(\xi_n/\xi_n^*)^{w_3} - 1\right]/w_3\right\}^{1/(1-q')}$$ \hspace{1cm} (11)
containing a minimal number of adjustable parameters, i.e., $q'$, $\theta$ and $w_3$. The parameter $w_3$ is adjusted to the behavior of the experimental PDFs around the connection point; $q'$ is an entropy index different from $q$ in (4); $\theta$ is determined by the behavior of the PDF near $\xi_n = 0$. For the expression of $\bar{\xi}_n$, see Arimitsu N and T (2011). The contribution to $\hat{\Pi}_{3,\text{cr}}^{(n)}(\xi_n)$ comes from both coherent and incoherent motions (see figure 1).
We chose the trial function (11) for the central part PDF because it is a natural generalization of the $\chi$-square distribution function for the variable $y_n = (\xi_n/\xi_n^*)^{w_3}$. The observed value of $q'$ is in the range $1.03 \leq q' \leq 1.09$ (see table 2). Note that in the limit $q' \to 1$ the trial function reduces to the $\chi$-square distribution function for $y_n$. The quantity $(\theta + w_3 - 1)/w_3$ provides an estimate of the number of independent degrees of freedom of the dynamics contributing to the central part of the PDF.
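As a small self-contained check (our own sketch; the symbol `A` abbreviates $[\theta + f'(\alpha^*)]/w_3$ and is not a notation used in the paper), the $q'$-dependent factor in the trial function (11) indeed approaches an exponential weight as $q' \to 1$:

```python
import math

def q_factor(y, q_prime, A):
    """The Tsallis-type factor {1 - (1 - q') A (y - 1)}^{1/(1 - q')} of (11),
    written for the variable y = (xi_n / xi_n^*)^{w_3}; A stands for
    [theta + f'(alpha^*)] / w_3 (shorthand introduced here only)."""
    return (1.0 - (1.0 - q_prime) * A * (y - 1.0)) ** (1.0 / (1.0 - q_prime))

A, y = 2.0, 1.5
exact = math.exp(-A * (y - 1.0))   # the q' -> 1 limit (chi-square-type weight)

# For q' in the observed range 1.03 <= q' <= 1.09 the factor is already
# close to the exponential, and it converges to it as q' -> 1.
for q_prime in (1.09, 1.03, 1.0001):
    print(q_prime, q_factor(y, q_prime, A), exact)
```

This is why the observation $q' \simeq 1.05$ places the central part close to, but distinguishably away from, a $\chi$-square distribution.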
3. Verification of assumptions through the analyses of experimental PDFs
3.1. Experimental setup and extraction of PDFs
By means of the theoretical formulae of MPDFT summarized in the preceding section, we analyze PDFs of energy dissipation rates created from the time series data (Mouri et al 2008) for the turbulence produced by a grid in a wind tunnel (see table 1). Measurements are performed by a hot-wire anemometer with a crossed-wire probe placed on the centerline of the tunnel 4 m downstream from the grid. It is expected that turbulence around the probe is homogeneous in both the stream-wise and span-wise directions, as the cross-section 16 cm $\times$ 16 cm of each open square surrounded by the rods constituting the grid is small compared with the cross-section 3 m $\times$ 2 m of the wind tunnel. The cross-section of a rod is 4 cm $\times$ 4 cm. We also expect that the turbulence around the probe is isotropic even for larger scales, since the values of the RMS one-point velocity fluctuations for the span-wise and stream-wise components are almost equal (see table 1). There remain possible pitfalls regarding the assumption of isotropy (Biferale and Procaccia 2005) because of the difference between the
Table 1. Parameters of the grid turbulence in a wind tunnel (Mouri et al 2008). The inertial range is determined as the region where the second moment of the velocity differences for longitudinal component scales with the exponent $2/3$ with respect to the distance between the positions of two velocities used to derive the velocity difference.
| Quantity | Value |
|-----------------------------------------------|------------------------|
| Microscale Reynolds number $\text{Re}_\lambda$| 409 |
| Kolmogorov length $\eta$ | 0.138 mm |
| Kinematic viscosity $\nu$ | $1.42 \times 10^{-5}$ m$^2$ sec$^{-1}$ |
| Mean velocity of downstream wind $U$ | 21.16 m sec$^{-1}$ |
| Mean energy dissipation rate $\langle \varepsilon \rangle = 15\nu \langle (\partial v/\partial x)^2 \rangle /2$ | 7.98 m$^2$ sec$^{-3}$ |
| Correlation length of longitudinal velocity | 17.9 cm |
| Inertial range | $50 \lesssim r/\eta \lesssim 150$ |
| RMS of span-wise velocity fluctuations $\langle v^2 \rangle^{1/2}$ | 1.06 m sec$^{-1}$ |
| RMS of stream-wise velocity fluctuations $\langle u^2 \rangle^{1/2}$ | 1.10 m sec$^{-1}$ |
| Sampling interval $\Delta t$ | $1.43 \times 10^{-5}$ sec |
| Number of data points | $4 \times 10^8$ |
averaged energy dissipation rates estimated with the span-wise velocity component $v$, i.e., $15\nu \langle (\partial v/\partial x)^2 \rangle /2 = \langle \varepsilon \rangle = 7.98$ m$^2$ sec$^{-3}$, and the one estimated with the stream-wise velocity fluctuation $u$, i.e., $15\nu \langle (\partial u/\partial x)^2 \rangle = 8.58$ m$^2$ sec$^{-3}$ (Mouri et al 2008). However, as the difference is less than 10%, we expect that anisotropy, even if it exists, may not affect the following analyses seriously.
Assuming isotropy of the grid turbulence, we adopted the surrogate $15\nu (\partial v/\partial x)^2 /2 = 15\nu (\partial v/\partial t)^2 /2U^2$ for the energy dissipation rate (Cleve et al 2003, Mouri et al 2008) with the mean velocity $U$ of the downstream wind (see table 1), where the $x$-axis is chosen along the direction of the mean flow in the wind tunnel and $v$ is the span-wise velocity component. Here, we used Taylor's frozen hypothesis in replacing the variable from time $t$ to space $x$ (see Mouri et al (2008) for details). For the estimation of $\partial v/\partial t$, we use here the difference formula
$$\frac{\partial v}{\partial t} \simeq \Delta^{(3)}v/\Delta t = \left\{ 8 \left[ v(t + \Delta t) - v(t - \Delta t) \right] - \left[ v(t + 2\Delta t) - v(t - 2\Delta t) \right] \right\} /12\Delta t$$ \hspace{1cm} (12)
where $\Delta t$ is the sampling interval of the velocity measurement (see table 1). With this formula, we obtain a better estimate of the velocity time derivative by means of $\Delta^{(3)}v/\Delta t$, free of contamination up to terms of $\mathcal{O}((\Delta t)^3)$. We represent the local energy dissipation rates derived from (12) by the symbol $\varepsilon$, i.e., $\varepsilon = 15\nu (\Delta^{(3)}v/\Delta t)^2 /2U^2$.
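A minimal sketch of the estimator (12) and the surrogate dissipation rate, with function names of our own choosing; the check on a smooth test signal illustrates why $\Delta^{(3)}v/\Delta t$ is preferred over a simple forward difference:

```python
import math

def dv_dt_4th(v, t, dt):
    """Fourth-order central-difference estimate Delta^(3) v / Delta t of (12);
    v is any callable sampled at interval dt."""
    return (8.0 * (v(t + dt) - v(t - dt))
            - (v(t + 2.0 * dt) - v(t - 2.0 * dt))) / (12.0 * dt)

def epsilon_surrogate(v, t, dt, nu, U):
    """Surrogate local energy dissipation rate
    epsilon = 15 nu (Delta^(3) v / Delta t)^2 / (2 U^2), as in the text."""
    return 15.0 * nu * dv_dt_4th(v, t, dt) ** 2 / (2.0 * U ** 2)

# Check on a smooth test signal: the scheme recovers d(sin t)/dt = cos t
# far more accurately than the simple forward difference at the same dt.
dt = 1e-2
err_4th = abs(dv_dt_4th(math.sin, 1.0, dt) - math.cos(1.0))
err_fwd = abs((math.sin(1.0 + dt) - math.sin(1.0)) / dt - math.cos(1.0))
print(err_4th, err_fwd)
```

The values of $\nu$ and $U$ in table 1 can be plugged into `epsilon_surrogate` directly; the test signal is of course only a stand-in for measured velocity data.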
In creating the experimental PDFs for energy dissipation rates, $4 \times 10^8$ data points are put into $2 \times 10^4$ bins along the $\xi_n$ axis. We discarded those bins containing fewer than 25 data points. Note that the average number of data points per bin is $2 \times 10^4$. In drawing the created PDFs for energy dissipation rates, only every $10^2$-th bin is plotted for better visibility. The experimental PDFs in the region near the rightmost end points are scattered because of the lower statistics due to the smaller number of data points in the bins located there (see figure 2 (a) and figure 4 (a)).
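The binning procedure described above can be sketched as follows (a toy version of our own; the real analysis uses $4\times10^8$ points and $2\times10^4$ bins, here replaced by synthetic data for illustration):

```python
import random

def empirical_pdf(samples, n_bins, min_count=25):
    """Build a normalized empirical PDF by binning, discarding bins with
    fewer than min_count realizations (as in the text, where bins with
    fewer than 25 data points are discarded)."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in samples:
        i = min(int((x - lo) / width), n_bins - 1)   # clip the right edge
        counts[i] += 1
    total = len(samples)
    centers, densities = [], []
    for i, c in enumerate(counts):
        if c >= min_count:
            centers.append(lo + (i + 0.5) * width)
            densities.append(c / (total * width))
    return centers, densities, width

# Toy stand-in for the measured dissipation rates (exponential tail).
random.seed(0)
data = [random.expovariate(1.0) for _ in range(100000)]
centers, dens, width = empirical_pdf(data, 200)
retained_mass = sum(d * width for d in dens)
print(len(centers), retained_mass)
```

Discarding sparsely populated bins trims the far tail, so the retained probability mass is slightly below 1, which is why the rightmost plotted points of the experimental PDFs are the least reliable.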
3.2. Analyses of experimental PDFs
The experimental PDF is analyzed with the help of the theoretical formula for the PDF by the following procedure: (i) Pick up three experimental PDFs with consecutive $r$ values, say, $r_1$, $r_2 = r_1 \delta$ and $r_3 = r_1 \delta^2$. (ii) With a trial $\mu$ value, analyze each of the three experimental PDFs to find tentative best-fit values of $q'$, $w_3$, $\theta$, $\alpha^*$ and $n_i = \ln(r_i/\ell_0)/\ln \delta$ ($i = 1, 2, 3$) for the theoretical PDF. (iii) Check whether the differences $n_3 - n_2$ and $n_2 - n_1$ are close to 1. (iv) If not, change the $\mu$ value and repeat processes (ii) and (iii) until one arrives at the set of best-fit parameters satisfying $n_3 - n_2 = n_2 - n_1 \simeq 1$ within a settled accuracy. (v) With the common $\mu$ value thus determined, determine the best-fit values of $q'$, $w_3$, $\theta$ and $\alpha^*$ for each of the other PDFs not used in processes (i) to (iv). One notices that $n_i - n_{i-1} \simeq 1$ is then satisfied automatically for every PDF created from the experiment.
The PDFs of energy dissipation rates are analyzed in figure 2 for the magnification $\delta = 3$ on (a) log and (b) linear scale in the vertical axes. For better visibility, each PDF is shifted by appropriate unit along the vertical axis. Closed circles are the experimental data points for PDFs for the cases $r/\eta = 21.9, 65.7, 197$ and $591$ from the smallest value (top) to the largest value (bottom) where $r$ corresponds to $\ell_n$. Solid lines represent the theoretical PDFs given by (8) with (9) and (10). The parameters necessary for the theoretical PDF of A&A model are determined as $(1-q)\ln \delta = 0.393$, $\alpha_0 = 1.15$ and $X = 0.310$, which turn out to be independent of $\delta$. Other parameters are listed in table 2 (a) and table 3 (a). We performed the same analyses for other magnifications, $\delta = 2$ and 5, and found that the extracted value $\mu = 0.260$ for the intermittency exponent is common to three cases in which PDFs are created with the different values of magnification, i.e., $\delta = 2, 3, 5$. It means that, within the analysis of the energy dissipation rates, the turbulent system under consideration is characterized by a unique
Table 2. Parameters of PDFs created by (a) the formula (12) and (b) the formula (13). For both cases, $\mu = 0.260$ ((1 − $q$)$\ln \delta = 0.393$, $\alpha_0 = 1.15$, $X = 0.310$) giving $q = 0.642$.
| $r/\eta$ | $n$ | $\tilde{n}$ | $q'$ | $w_3$ | $\theta$ | $n$ | $\tilde{n}$ | $q'$ | $w_3$ | $\theta$ |
|----------|-------|-----------|------|-------|---------|-------|-----------|------|-------|---------|
| 6.57 | 5.50 | 6.04 | 1.03 | 0.250 | 2.10 | 5.20 | 5.71 | 1.03 | 0.250 | 3.50 |
| 21.9 | 4.00 | 4.39 | 1.02 | 0.250 | 3.50 | 4.00 | 4.39 | 1.04 | 0.380 | 5.30 |
| 65.7 | 3.00 | 3.30 | 1.05 | 0.490 | 4.10 | 3.00 | 3.30 | 1.04 | 0.450 | 5.00 |
| 197 | 1.60 | 1.76 | 1.06 | 0.780 | 4.50 | 2.00 | 2.20 | 1.07 | 0.750 | 6.00 |
| 591 | 0.60 | 0.416 | 1.09 | 1.25 | 5.80 | 0.580 | 0.637 | 1.09 | 1.24 | 6.20 |
Table 3. Connection points between the central and the tail part PDFs created by (a) the formula (12) and (b) the formula (13). $\langle \varepsilon \rangle = 7.98$ m$^2$ sec$^{-3}$.
| $r/\eta$ | $\alpha^*$ | $\varepsilon_n^*/\langle \varepsilon \rangle$ | $\xi_n^*$ | $\alpha^*$ | $\varepsilon_n^*/\langle \varepsilon \rangle$ | $\xi_n^*$ |
|----------|------------|---------------------------------|-----------|------------|---------------------------------|-----------|
| 6.57 | 0.750 | 4.53 | 2.30 | 0.750 | 4.17 | 3.56 |
| 21.9 | 0.700 | 3.74 | 3.56 | 0.550 | 7.22 | 5.24 |
| 65.7 | 0.500 | 5.20 | 5.25 | 0.500 | 5.20 | 6.25 |
| 197 | 0.280 | 3.54 | 13.6 | 0.300 | 4.66 | 13.2 |
| 591 | 0.180 | 1.72 | 16.0 | 0.180 | 1.69 | 15.3 |
$\mu$ value as it should be.
Figure 3. $r/\eta$ (= $\ell_n/\eta$) dependence of (a) $\tilde{n}$, (b) $\alpha^*$ and (c) $\theta$. In (a), the data points extracted by the present analysis via (12) are plotted by closed circles for $\delta = 2$, closed squares for $\delta = 3$ and closed triangles for $\delta = 5$, whereas those extracted by the DNS analysis (Arimitsu N and T 2011) are plotted by the symbols $\nabla$ for $\delta = 2^{1/4}$, $\times$ for $\delta = 2^{1/2}$ and $+$ for $\delta = 2$. The empirical formula for the present (DNS) analysis is given by $\tilde{n} = -2.39 \log_{10}(r/\eta) + 7.31$ ($\tilde{n} = -2.33 \log_{10}(r/\eta) + 8.74$). In (b) and (c), the data points extracted by the present analyses via (12) (via (13)) are plotted by closed (open) circles for $\delta = 2$, closed (open) squares for $\delta = 3$ and closed (open) triangles for $\delta = 5$. Solid (dashed) lines are the empirical formulae (b) $\alpha^* = -0.326 \log_{10}(r/\eta) + 1.05$ ($\alpha^* = -0.285 \log_{10}(r/\eta) + 0.966$) and (c) $\theta = 1.83 \log_{10}(r/\eta) + 0.460$ ($\theta = 1.32 \log_{10}(r/\eta) + 2.70$). The empirical formulae are obtained by using all the data points for the different values of $\delta$. The inertial range for the present (DNS) analysis is the region between the vertical dash-dotted (dotted) lines.
The dependence of $\tilde{n}$ on $r/\eta$ (= $\ell_n/\eta$) for the present analysis with the series of PDFs derived through (12) is given in figure 3 (a) by closed circles for $\delta = 2$, closed squares for $\delta = 3$ and closed triangles for $\delta = 5$. The empirical formula for $\tilde{n}$, obtained by making use of all the data points for $\delta = 2, 3$ and $5$ with the method of least squares, has the expression $\tilde{n} = -1.03 \ln(r/\eta) + 7.31$, which is drawn by a solid line (the lower line in the figure). This confirms the assumption that the fundamental quantities of turbulence are independent of $\delta$. We also include in the figure, for comparison, the data points of $\tilde{n}$ for the $4096^3$ DNS taken from figure 4 in Arimitsu N and T (2011) and the empirical formula $\tilde{n} = -1.01 \ln(r/\eta) + 8.74$ (upper solid line) derived from those data points by the method of least squares. For the DNS, $\mu = 0.345$ (Arimitsu N and T 2011). The scatter of the $\tilde{n}$ data points about the empirical formula (see figure 3 (a)), and also about the theoretical formula (6) with $\ell_n = r$, provides a measure of how appropriately the parameter extraction is performed. The data points of $\tilde{n}$ for the turbulence in the wind tunnel are scattered more than those for the turbulence in the $4096^3$ DNS, as the time-series raw data from the wind tunnel include unavoidable measurement errors associated with the readout processes, mainly electrical noise.
The $r/\eta$ ($= \ell_n/\eta$) dependences of $\alpha^*$ and $\theta$ are given, respectively, in figures 3 (b) and (c) by closed circles for $\delta = 2$, closed squares for $\delta = 3$ and closed triangles for $\delta = 5$, extracted from the series of PDFs derived through (12). The solid line in each of figures (b) and (c) represents an empirical formula obtained from all the data points for $\delta = 2, 3$ and $5$ by the method of least squares. These figures again confirm the assumption that the fundamental quantities of turbulence are independent of $\delta$. The value of $q'$ is found to be about $q' = 1.05$ (see table 2 (a)). We found that $w_3$ is also independent of $\delta$ and follows a common line $\log_{10} w_3 = 0.372 \log_{10}(r/\eta) + \log_{10} 0.112$. Note that the empirical formulae for $\tilde{n}$, $\alpha^*$ and $\theta$ are effective only in the region $r/\eta \gtrsim 2$, since $\theta$ should satisfy $\theta > 1$ (see figure 3 (c)). We observe that the parameters $q'$, $\theta$ and $w_3$ for the central part PDF and the connection point $\alpha^*$ show scaling behaviors in a much wider region, not restricted to the inside of the inertial range.
4. Comparison of PDFs produced with full and less contaminations
In this section, we analyze the PDFs for the energy dissipation rates derived from the time-series data with the difference formula
$$\partial v/\partial t \simeq \Delta^{(0)}v/\Delta t = [v(t + \Delta t) - v(t)]/\Delta t$$ \hspace{1cm} (13)
in order to examine what differences arise compared with the PDFs analyzed in section 3, which are derived by means of the difference formula (12). Note that the formula (13) estimates the velocity time derivative by $\Delta^{(0)}v/\Delta t$, which may contain full contamination, i.e., contamination starting from the first-order term in $\Delta t$. We introduce the symbol $\varepsilon^{(0)}$ for the local energy dissipation rates derived from (13), i.e., $\varepsilon^{(0)} = 15\nu(\Delta^{(0)}v/\Delta t)^2/2U^2$.+
We observe that $\langle\varepsilon^{(0)}\rangle = 8.08 \text{ m}^2 \text{ sec}^{-3}$, which is larger than $\langle\varepsilon\rangle$.
Figure 4. Comparison of PDFs for energy dissipation rates $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ and $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$ created, respectively, with the formulae (12) and (13). In (a) and (d)–(f), closed (open) circles and full (dashed) lines are, respectively, the experimental and theoretical PDFs for $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ ($\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$) with $\mu = 0.260$. The PDFs in (a) are for the cases $r/\eta = 6.57$ (top), 21.9 (middle) and 65.7 (bottom), shifted by $-2$ units along the vertical axis for better visibility. The magnifications of the central part PDFs for each $r (= \ell_n)$ are given in (d)–(f). The relative difference $\Delta_n = [\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle) - \Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)]/\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ is given for (b) $r/\eta = 6.57$ and (c) 65.7, in which closed circles (full lines) are the experimental (theoretical) $\Delta_n$. The error bar is the standard deviation of 100 hidden data points (not shown in the figures) for $\Delta_n$ located between the adjacent plotted data points. The shown error bars are thinned out. Note that (a) and (b)–(f) are, respectively, drawn on log and linear scales in the vertical axes.
In creating the PDFs of the energy dissipation rates, we took the same procedure as used in section 3.
We compare, in figures 4 (a) and (d)–(f), the PDFs of energy dissipation rates $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ and $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$, which are created, respectively, with the help of formulae (12) and (13). Note that the arguments of all the PDFs are scaled by $\langle\varepsilon\rangle$, which does not depend on $r (= \ell_n)$. In figure 4 (a) each PDF is displayed on log scale in the vertical axis for the cases $r/\eta = 6.57$ (top), 21.9 (middle) and 65.7 (bottom), shifted by $-2$ units along the vertical axis for better visibility. The magnifications of their central parts are displayed in figures 4 (d) $r/\eta = 6.57$, (e) 21.9 and (f) 65.7 on linear scale in the vertical axis. The closed (open) circles and the full (dashed) lines are, respectively, the experimental and theoretical PDFs for $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ ($\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$) with $\mu = 0.260$. Note that the values of the intermittency exponent $\mu$ for $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$
and for $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$ turn out to be the same. Other parameters are listed in table 2 and table 3.
The relative differences $\Delta_n = [\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle) - \Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)]/\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ for $r/\eta = 6.57$ and 65.7 are given, respectively, in figures 4 (b) and (c), in which closed circles (full lines) represent the experimental (theoretical) mean values of $\Delta_n$. The error bar in these figures is the standard deviation of 100 hidden data points (not shown in the figures) for $\Delta_n$ located between the adjacent plotted data points. These figures show that the mean relative difference $\Delta_n$ in the central part region of the PDFs is about 10 times larger than the mean relative difference in the tail part region.\footnote{Note that the connection points of $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ ($\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$) for $r/\eta = 6.57$ and 65.7 are located, respectively, at $\varepsilon^*/\langle\varepsilon\rangle = 4.06$ ($\varepsilon^{(0)*}/\langle\varepsilon\rangle = 4.17$) and $\varepsilon^*/\langle\varepsilon\rangle = 5.20$ ($\varepsilon^{(0)*}/\langle\varepsilon\rangle = 5.20$).} The small negative but nearly constant mean values of $\Delta_n$ in the tail part region tell us that the tails of $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ and of $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$ are parallel to each other, which is the reason why we obtained the same $\mu$ value for both PDFs. We observe that the error bars in the tail part region become larger toward the rightmost end of the PDF, which may be attributed to the smaller number of realizations in each bin there.
Actually, the length of an error bar associated with a bin is quite close to the value $\sqrt{1/N + 1/N^{(0)}}$, which estimates the standard deviation of $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)/\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ with the help of the numbers of realizations $N$ ($N^{(0)}$) in the bin under consideration for $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ ($\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$). The numbers of realizations in figure 4 (c) are, for example, $N = 21$, $N^{(0)} = 23$ for the rightmost error bar, $N = 206$, $N^{(0)} = 192$ for the fifth error bar from the rightmost one, $N = 31508$, $N^{(0)} = 31327$ for an error bar at $\varepsilon/\langle\varepsilon\rangle \approx 5$ and $N = 2861811$, $N^{(0)} = 2886837$ for an error bar around the peak point of $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$, where $\varepsilon/\langle\varepsilon\rangle \approx 0.25$. The $\varepsilon$-dependence of the mean values of $\Delta_n$ in the central region indicates that the central part of $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$ around its peak point moves rightwards relative to the central part of $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$. On the other hand, from the $\varepsilon$-dependence of the mean values of $\Delta_n$ in the tail region, it may be appropriate to interpret that the tail part of $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$ moves leftwards relative to the tail part of $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$. If the number of realizations in each bin is increased, i.e., the statistics are raised, we expect that the standard deviations of $\Delta_n$ will decrease and that the fluctuation of the mean values of $\Delta_n$ in the tail region will disappear.
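The counting-statistics estimate quoted above can be reproduced directly for the bin populations of figure 4 (c) (a sketch of ours):

```python
import math

def rel_std(N, N0):
    """Estimated standard deviation sqrt(1/N + 1/N0) of the PDF ratio,
    from the Poisson counting statistics of the two bin populations
    (the estimator used in the text)."""
    return math.sqrt(1.0 / N + 1.0 / N0)

# The bin populations quoted above for figure 4 (c), from the tail
# (rightmost error bar) to the peak region of the PDF.
for N, N0 in [(21, 23), (206, 192), (31508, 31327), (2861811, 2886837)]:
    print(N, N0, rel_std(N, N0))
```

The rightmost tail bin gives a relative spread of about 0.3, while the peak-region bins give well under $10^{-3}$, matching the growth of the error bars toward the rightmost end of the PDF.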
The difference of the squared time derivatives of (13) and (12) gives $(\Delta^{(0)}v/\Delta t)^2 - (\Delta^{(3)}v/\Delta t)^2 = (\partial v/\partial t)(\partial^2 v/\partial t^2)\Delta t + \mathcal{O}((\Delta t)^2)$. From the direction of the relative horizontal shift of the PDFs, we know that the net contributions of the velocity component $v$ for the region around the peak point and for the tail region satisfy, respectively,
$$(\partial v/\partial t) \ (\partial^2 v/\partial t^2) > 0 \quad \text{and} \quad (\partial v/\partial t) \ (\partial^2 v/\partial t^2) < 0.$$ \hspace{1cm} (14)
Taking into account the smallness of the gradient of the tail part PDFs, we see that the absolute value of the latter in (14) is quite large compared with the former.
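The leading behavior of the difference of the squared estimators can be checked numerically on a smooth test signal (our own sketch with $v = \sin t$; all names below are ours):

```python
import math

def d0(v, t, dt):
    # Forward difference Delta^(0) v / Delta t of (13), full contamination.
    return (v(t + dt) - v(t)) / dt

def d3(v, t, dt):
    # Fourth-order central difference Delta^(3) v / Delta t of (12).
    return (8.0 * (v(t + dt) - v(t - dt))
            - (v(t + 2.0 * dt) - v(t - 2.0 * dt))) / (12.0 * dt)

# For a smooth signal, the leading term of the difference of the squared
# estimators is (dv/dt)(d^2v/dt^2) * dt, the quantity discussed in the text.
v, t, dt = math.sin, 1.0, 1e-4
diff = d0(v, t, dt) ** 2 - d3(v, t, dt) ** 2
leading = math.cos(t) * (-math.sin(t)) * dt   # v'(t) v''(t) dt for v = sin
print(diff, leading)
```

At this point of the signal $v'v'' < 0$, so the squared forward-difference estimate undershoots, the same mechanism invoked above for the leftward shift of the tail part of $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$.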
The dependences of $\alpha^*$ and $\theta$ on $r/\eta$ ($= \ell_n/\eta$) are given, respectively, in figures 3 (b) and (c) by open circles for $\delta = 2$, open squares for $\delta = 3$ and open triangles for $\delta = 5$, extracted from the series of PDFs created with (13). The dashed line in each of figures (b) and (c) represents an empirical formula obtained from all the data points for $\delta = 2$, 3 and 5 by the method of least squares. These figures confirm again, even for the case of full contamination, the assumption that the fundamental quantities of turbulence are independent of $\delta$. We also found that $w_3$ is independent of $\delta$ and follows a common line $\log_{10} w_3 = 0.318 \log_{10} (r/\eta) + \log_{10} 0.141$. The value of $q'$ is found to be about $q' = 1.05$ (see table 2 (b)).
There is only a barely visible difference in the lines for $\alpha^*$ and $w_3$, and in the values of $q'$, between those obtained from the two kinds of PDFs, one with less contamination (12) and the other with full contamination (13) (see figure 3 (b); see also table 2 and table 3). The significant difference appears in the $r/\eta$ dependence of $\theta$, which is shown in figure 3 (c). The difference in $\theta$ explains the shift of the peak points between the two PDFs (see figures 4 (d)–(f)).
5. Summary and Discussion
The new scaling relation (2) is essential for the parameters $\alpha_0$, $X$ and $q$, associated with the tail part PDF, to be determined self-consistently as functions of the intermittency exponent $\mu$, and to be independent of the magnification rate $\delta$. On the other hand, we introduced the trial function (11) for the central part PDF with three adjustable parameters $q'$, $w_3$ and $\theta$, and found that these parameters are also independent of $\delta$ and show scaling behaviors in a wider region not restricted to the inertial range.
The independence of $\tilde{n}$ from $\delta$ ensures the uniqueness of the PDF of $\alpha$ for any value of $\delta$. The comparison between the empirical formulae for $\tilde{n}$ given in figure 3 (a) and the theoretical formula (6) provides the estimate $\ell_0 = 20.6$ cm, which is about the same as the correlation length 17.9 cm or the separation 20 cm between the axes of adjacent rods forming the grid. Here, we are assuming that the empirical formulae are effective even for $r/\eta \lesssim 2$ (see the discussion in section 3 about the effective region of $r/\eta$). Note that $\ell_0$ gives an estimate of the diameter of the largest eddy within the energy cascade model.
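The estimate of $\ell_0$ can be reproduced from the intercept of the empirical formula in figure 3 (a), under the simplifying assumption (ours) that the slope is exactly $-1$, so that with $\ell_n = \ell_0 e^{-\tilde{n}}$ (6) the intercept equals $\ln(\ell_0/\eta)$:

```python
import math

# With l_n = l_0 exp(-n_tilde) from (6) and the empirical fit
# n_tilde ~ -1.03 ln(r/eta) + 7.31, a slope of exactly -1 would give
# ln(l_0 / eta) = 7.31, i.e. l_0 = eta * exp(7.31).
eta = 0.138e-3                 # Kolmogorov length from table 1, in metres
l0 = eta * math.exp(7.31)
print(l0 * 100.0, "cm")        # close to the 20.6 cm quoted in the text
```

The slight deviation of the fitted slope from $-1$ accounts for the small difference from the quoted 20.6 cm.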
As for the parameters appearing in the trial function for the central part PDFs, $\exp[-g(\xi_n)]$ in (11), the discoveries that $q' \simeq 1.05$ and that $\theta$ and $\ln w_3$ reveal scaling properties are quite attractive for research into the nature of the fluctuations surrounding the coherent turbulent motion of fluid. The fact that the value $q'$ is quite close to 1 indicates that the HCT type function in (11), i.e., the part giving $\exp[-g(\xi_n)](\xi_n^*/\xi_n)^{\theta-1}$, is close to an exponential function. There is no theoretical prediction yet, based either on an ensemble theoretical aspect or on a dynamical aspect starting with the N-S equation, that produces the formula for the central part PDF representing the contributions both of the coherent turbulent motion providing intermittency and of the incoherent fluctuations (background flow) around the coherent motion. If one could succeed in formulating a dynamical theory that properly produces the formula for the central part of the PDFs starting with the N-S equation, it may provide us with the physical meaning of the parameters $q'$, $\theta$ and $w_3$, and with an appropriate pathway to a dynamical approach, e.g., the renormalization group approach, to fully developed turbulence. A study in this direction is in progress.
Introducing two difference formulae (12) and (13) for the estimate of $\partial v/\partial t$, i.e., $\Delta^{(3)}v/\Delta t$ with less contamination and $\Delta^{(0)}v/\Delta t$ with full contamination, we performed a trial extraction of information from PDFs by comparing two kinds of PDFs for energy dissipation rates, $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ and $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$ with $\varepsilon \propto (\Delta^{(3)}v/\Delta t)^2$ and $\varepsilon^{(0)} \propto (\Delta^{(0)}v/\Delta t)^2$. We observed that the intermittency exponents for the two kinds of PDFs take the same value $\mu = 0.260$ (see table 2 and table 3 for other parameters). Through accurate analyses of the PDFs, it was also revealed that the parameters for $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ and for $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$ are independent of $\delta$ thanks to the new scaling relation (2), and that they show quite similar scaling behaviors extending to regions with smaller and larger $r/\eta$ values outside the inertial range (see figure 3). The connection points $\alpha^*$ of the tail and central parts of the PDFs take almost the same value for each $r/\eta$ (see table 3 and figure 3 (b)). Among the parameters controlling the central part, only $\theta$ shows a relatively large deviation between the two different PDFs (see table 2 and figure 3 (c)), which is related to the shift of the peak point that occurred between the two kinds of PDFs. The other parameters $q'$ and $w_3$ show no significant difference between the two PDFs (see table 2).
Observing the relative difference $\Delta_n$ between $\Pi_3^{(n)}(\varepsilon^{(0)}/\langle\varepsilon\rangle)$ and $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ in figures 4 (b) and (c), with the values $\varepsilon^*_n/\langle\varepsilon_n\rangle = 4.53$ for $r/\eta = 6.57$ and $\varepsilon^*_n/\langle\varepsilon_n\rangle = 5.20$ for $r/\eta = 65.7$, we notice that the connection point $\varepsilon^*_n$ of the central part PDF and the tail part PDF provides us with the boundary dividing two regions according to their stability, as specified by the inequalities in (14). This seems to tell us that the net behavior of the incoherent motion of fluid, contributing mainly around the peak point (central part) of the PDF, is an unstable time-evolution, whereas that of the coherent turbulent motion, contributing mainly to the tail part of the PDF, is a stable time-evolution. The former may be attributed to a manifestation of fluctuations, the latter to the characteristics of intermittency. Note that we assumed that the central part $\hat{\Pi}_{3,\text{cr}}^{(n)}(\xi_n)$ consists of two contributions, one from the coherent contribution $\Pi_{3,S}^{(n)}(\varepsilon_n)$ and the other from the incoherent contribution $\Delta\Pi_{3}^{(n)}(\varepsilon_n)$, and that almost all the contribution to the tail part $\hat{\Pi}_{3,t}^{(n)}(\xi_n)$ comes from the coherent intermittent motion of turbulence. Further investigation of these outcomes and their interpretation is necessary, which we leave as one of the attractive future problems.\footnote{We observed that there is no visible difference between $\Pi_3^{(n)}(\varepsilon/\langle\varepsilon\rangle)$ and the PDF extracted with the formula $\partial v/\partial t \simeq [v(t + \Delta t) - v(t - \Delta t)]/2\Delta t$, which is correct without contamination up to the term of $O(\Delta t)$.}
Let us close this paper by noting the studies in progress which are deeply related to the present work. It has been revealed (Motoike and Arimitsu 2012) that the new scaling relation (2) is intimately related to the $\delta^\infty$ periodic orbits ($\delta \geq 3$) located at the threshold to chaos via the $\delta$ ($\geq 3$) times ramification (bifurcation) in $\delta$-period windows in the
chaotic region, for example, of the logistic map. The self-similar nesting structure of $\delta^k$-period windows ($k = 1, 2, 3, \cdots$) can be an origin of the intermittent coherent motion in fully developed turbulence. We expect that further investigation in this direction, extracting the message provided by the new scaling relation, may lead us to a novel interpretation of fully developed turbulence. We are also performing a precise comparison between the results extracted in this paper for the grid turbulence in a wind tunnel and those for $4096^3$ DNS turbulence by raising the resolution of the PDFs, i.e., by creating more data points for the PDFs. The results of these studies will be published elsewhere in the near future.
**Acknowledgments**
The authors (T.A. and N.A.) would like to thank Prof. T. Motoike, Dr. K. Yoshida, Mr. M. Komatsuzaki and Mr. K. Takechi for fruitful discussion.
**References**
Aoyama T, Ishihara T, Kaneda Y, Yokokawa M, Itakura K and Uno A 2005 Statistics of energy transfer in high-resolution direct numerical simulation of turbulence in a periodic box *J. Phys. Soc. Jpn.* **74** 3202–3212
Arimitsu N and Arimitsu T 2002 Multifractal analysis of turbulence by using statistics based on non-extensive Tsallis’ or extensive Rényi’s entropy *J. Korean Phys. Soc.* **40** 1032–1036
Arimitsu N and Arimitsu T 2011 Verification of the scaling relation within MPDFT by analyzing PDFs for energy dissipation rates out of $4096^3$ DNS *Physica* A **390** 161–176
Arimitsu T and Arimitsu N 2000a Analysis of Fully Developed Turbulence in terms of Tsallis Statistics *Phys. Rev. E* **61** 3237–3240
Arimitsu T and Arimitsu N 2000b Tsallis statistics and fully developed turbulence *J. Phys. A: Math. Gen.* **33** L235–L241 [CORRIGENDUM: 2001 **34** 673–674]
Arimitsu T and Arimitsu N 2001 Analysis of turbulence by statistics based on generalized entropies *Physica* A **295** 177–194
Arimitsu T and Arimitsu N 2002 PDF of velocity fluctuation in turbulence by a statistics based on generalized entropy *Physica* A **305** 218–226
Arimitsu T and Arimitsu N 2011 Analysis of PDFs for energy transfer rates from $4096^3$ DNS — Verification of the scaling relation within MPDFT *J. Turbulence* **12** 1–25
Benzi R, Paladin G, Parisi G and Vulpiani A 1984 On the multifractal nature of fully developed turbulence and chaotic systems *J. Phys. A: Math. Gen.* **17** 3521–3531
Benzi R, Biferale L, Paladin G, Vulpiani A and Vergassola M 1991 Multifractality in the statistics of the velocity gradients in turbulence *Phys. Rev. Lett.* **67** 2299–2302
Biferale L, Boffetta G, Celani A, Devenish B J, Lanotte A and Toschi F 2004 Multifractal statistics of Lagrangian velocity and acceleration in turbulence *Phys. Rev. Lett.* **93** 064502-1-4
Biferale L and Procaccia I 2005 Anisotropy in Turbulent Flows and in Turbulent Transport *Phys. Rep.* **414** 43–164
Chevillard L, Castaing B, Lévêque E and Arneodo A 2006 Unified multifractal description of velocity increments statistics in turbulence: Intermittency and skewness *Physica* D **218** 77–82
Cleve J, Greiner M and Sreenivasan K R 2003 On the effects of surrogacy of energy dissipation in determining the intermittency exponent in fully developed turbulence *Europhys. Lett.* **61** 756–761
Costa U M S, Lyra M L, Plastino A R and Tsallis C 1997 Power-law sensitivity to initial conditions within a logistic-like family of maps: Fractality and nonextensivity *Phys. Rev. E* **56** 245–250
Dubrulle B 1994 Intermittency in fully developed turbulence: log-Poisson statistics and generalized scale covariance *Phys. Rev. Lett.* **73** 959–962
Grassberger P 1983 Generalized dimensions of strange attractors *Phys. Lett.* A **97** 227–229
Halsey T C, Jensen M H, Kadanoff L P, Procaccia I and Shraiman B I 1986 Fractal measures and their singularities: The characterization of strange sets *Phys. Rev.* A **33** 1141–1151
Havrda J H and Charvat F 1967 Quantification methods of classification processes: Concepts of structural $\alpha$ entropy *Kybernetica* **3** 30–35
Hentschel H G E and Procaccia I 1983 The infinite number of generalized dimensions of fractals and strange attractors *Physica* D **8** 435–444
Hosokawa I 1991 Turbulence models and probability distributions of dissipation and relevant quantities in isotropic turbulence *Phys. Rev. Lett.* **66** 1054–1057
Lyra M L and Tsallis C 1998 Nonextensivity and multifractality in low-dimensional dissipative systems *Phys. Rev. Lett.* **80** 53–56
Mandelbrot B B 1974 Intermittent turbulence in self-similar cascades: Divergence of high moments and dimension of the carrier *J. Fluid Mech.* **62** 331–358
Meneveau C and Sreenivasan K R 1987 The multifractal spectrum of the dissipation field in turbulent flows *Nucl. Phys.* B (Proc. Suppl.) **2** 49–76
Motoike T and Arimitsu T 2012 in preparation to submit
Mouri H, Hori A and Takaoka M 2008 Fluctuations of statistics among subregions of a turbulence velocity field *Phys. Fluids* **20** 035108-1–6
Nelkin M 1990 Multifractal scaling of velocity derivatives in turbulence *Phys. Rev.* A **42** 7226–7229
Parisi G and Frisch U 1985 *Turbulence and predictability in geophysical fluid dynamics and climate dynamics* (New York: North-Holland/American Elsevier) pp 84–87
Rényi A 1961 On measures of entropy and information *Proc. of the 4th Berkeley Symp. on Mathematical Statistics and Probability* (Berkeley: USA) pp 547–561
She Z-S and Leveque E 1994 Universal scaling laws in fully developed turbulence *Phys. Rev. Lett.* **72** 336–339
She Z-S and Waymire E C 1995 Quantized energy cascade and log-Poisson statistics in fully developed turbulence *Phys. Rev. Lett.* **74** 262–265
Suyari H and Wada T 2006 Scaling property and Tsallis entropy derived from a fundamental nonlinear differential equation *Proc. of the 2006 Int. Symp. on Inf. Theory and its Appl.* (ISITA2006) pp 75–80 (*Preprint* cond-mat/0608007)
Tsallis C 1988 Possible generalization of Boltzmann-Gibbs statistics *J. Stat. Phys.* **52** 479–487
Tsallis C 2001 Nonextensive statistical mechanics and thermodynamics: Historical background and present status *Nonextensive Statistical Mechanics and Its Applications* ed S Abe and Y Okamoto (Berlin: Springer-Verlag) pp 3–98
ANALYSIS OF MAIN RETRIEVAL SYSTEM PARAMETERS
João Ferreira
Instituto Superior de Engenharia de Lisboa
firstname.lastname@example.org
Alberto Rodrigues da Silva
INESC-ID, Instituto Superior Técnico
email@example.com
José Delgado
Instituto Superior Técnico
firstname.lastname@example.org
ABSTRACT
We explore the retrieval effectiveness of vectorial, link-analysis, probabilistic and classification retrieval systems and compare their performance in a controlled environment (the WT10g collection from TREC). The main retrieval parameters captured relate to: (1) the retrieval method (vectorial, link analysis, probabilistic or classification); (2) the main internal system parameters (e.g., query length, URL length, phrase use, feedback, index); and (3) combinations of internal system parameters (e.g., combinations of different query and URL lengths, phrase use, feedback and index). We draw conclusions about the most important parameters and the performance of each retrieval method, and show that combination can improve results. We analyze many cases from around 500 retrieval systems using our own modular platform, WebSearchTester.
KEYWORDS
Vectorial, Link Analysis, Information Retrieval, Classification, and Probabilistic.
1. INTRODUCTION
How do we find information on the Web? It is an old question far from being solved. Web information is distributed, decentralized and huge in size. The Web can be viewed as one big virtual document collection. The findings from traditional Information Retrieval (IR) research (traditional IR means text-based approaches), however, may not always be applicable in the Web setting. The Web document collection, massive in size and diverse in content, context, format, purpose and quality, challenges the validity of previous research findings based on relatively small and homogeneous test collections. Also, some traditional IR approaches may be applicable in theory, but may not be possible or practical to implement in a Web IR system. For instance, the size, distribution and dynamic nature of information on the Web make it difficult, if not impossible, to construct a complete and up-to-date data representation required for an ideal IR system. In addition, conventional evaluation measures, such as precision, recall, and even relevance, may no longer be applicable to Web IR, where a test collection representative of dynamic and diverse Web data is all but impossible to construct.
To further complicate the matter, information seeking on the Web is quite diverse in characteristics and unpredictable in nature. Web searchers come from all backgrounds and are motivated by all types of information need. The wide range of experience, knowledge, motivation and purpose of Web searchers means that they can express a wide range of information needs in a wide variety of ways, with various criteria for satisfying those needs.
At the same time, the Web is rich with new types of information not present in most previous test collections. Hyperlinks, usage statistics, document mark up tags and bodies of topic hierarchies such as Yahoo present an opportunity to leverage the Web-specific document characteristics in novel approaches that go beyond the term-based retrieval framework of traditional IR.
This paper explores the question of identifying the most important parameters of the methods: link analysis, content analysis, and classification-based retrieval. It is divided into four sections: first, we introduce the problem; second, we describe the individual retrieval systems; the third section presents the results; and section four draws conclusions.
2. RETRIEVAL SYSTEMS
To control and measure system performance we use a controlled environment provided by TREC. As the source for these experiments we use the WT10g collection [1], a ten-gigabyte subset of the 1997 Web crawl by the Internet Archive, consisting of 1.7 million Web documents, 100 TREC queries (topics 451-550), and official NIST relevance judgments. The WT10g collection also includes connectivity data, which provides lists of inlinks and outlinks for all documents in the collection.
Since classification systems for the Web lack an ideal Web directory, we use Yahoo <http://yahoo.com> due to its size and popularity. Yahoo is the largest and most widely used Web directory; it consists of 14 top categories with over 645,000 sub-categories containing almost 3 million Web pages, which are classified and annotated by over 150 professional Yahoo cataloguers.
VSM
The text-based retrieval component is based on a Vector Space Model (VSM) using the SMART length-normalized term weights as implemented in OpenFts <http://openfts.sourceforge.net/>. For implementation details, see [2]. Tags and stop words are removed from the text, and weights are based on the $Lnu$ document term weight (1):
$$d_{ik} = \frac{(1 + \log(f_{ik}))/(1 + \log(avg\_f_i))}{(1.0 - slope) * p_i + slope * t}$$ \hspace{1cm} (1)
with the slope of 0.3 for document terms [3], where $f_{ik}$ is the number of times term $k$ appears in document $i$; $avg\_f_i$ is the average in-document frequency for document $i$; $t$ is the number of unique terms in the collection and $p_i$ is the average number of unique terms in a document $i$. The formula for $ltc$ query term weight is:
$$q_k = \frac{(\log(f_k) + 1) * idf_k}{\sqrt{\sum_{j=1}^{l} (\log(f_j) + 1) * idf_j}}$$ \hspace{1cm} (2)
where $f_k$ is the number of times term $k$ appears in the query; and $idf_k$ is the inverse document frequency [4] of term $k$. The denominator is a document length normalization factor, which compensates for the length variation in queries. Documents were ranked in decreasing order of the inner product of document and query vectors,
$$q^T d_i = \sum_{k=1}^{l} q_k d_{ik}$$ \hspace{1cm} (3)
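The weighting and ranking scheme of equations (1)–(3) can be sketched as follows; this is a minimal illustration with names of our own choosing, where the term statistics would in practice come from the actual index:

```python
import math

def lnu_weight(f_ik, avg_f_i, p_i, t, slope=0.3):
    """Lnu document term weight, equation (1): log-average term frequency
    over pivoted normalization; slope = 0.3 for document terms as in [3]."""
    num = (1.0 + math.log(f_ik)) / (1.0 + math.log(avg_f_i))
    den = (1.0 - slope) * p_i + slope * t
    return num / den

def ltc_weights(query_tf, idf):
    """ltc query term weights, equation (2): (log tf + 1) * idf,
    cosine-normalized over the query terms."""
    raw = {k: (math.log(f) + 1.0) * idf[k] for k, f in query_tf.items()}
    norm = math.sqrt(sum(w * w for w in raw.values()))
    return {k: w / norm for k, w in raw.items()}

def score(doc_weights, query_weights):
    """Inner product of document and query vectors, equation (3)."""
    return sum(q * doc_weights.get(k, 0.0) for k, q in query_weights.items())
```

Documents would then be sorted in decreasing order of `score`.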
For feedback, we use the top ten positive and top two negative weighted terms from the top three ranked documents of the initial retrieval results. These terms were used to expand the initial query in a pseudo-feedback retrieval process based on the adaptive linear model. The basic approach of the adaptive linear model, which is based on the concept of preference relations from decision theory [5], is to find a solution vector that will rank a more-preferred document before a less-preferred one [6]. The solution vector is arrived at via an error-correction procedure, which begins with a starting vector $q_{(0)}$ and repeats the cycle of “error-correction” until a vector is found that ranks documents according to the preference order estimation based on relevance feedback [7]. The error-correction cycle $i$ is defined by
$$q_{(i+1)} = q_{(i)} + \alpha b$$ \hspace{1cm} (4)
where $\alpha$ is a constant, and $b$ is the difference vector resulting from subtracting a less-preferred document vector from a more preferred one [8].
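The error-correction cycle of equation (4) amounts to a simple vector update; a minimal sketch with illustrative names and an illustrative value of $\alpha$:

```python
def error_correction_step(q, b, alpha=0.5):
    """One error-correction cycle of the adaptive linear model,
    equation (4): q_(i+1) = q_(i) + alpha * b, where b is the difference
    between a more-preferred and a less-preferred document vector.
    Vectors are sparse dicts mapping term -> weight."""
    return {k: q.get(k, 0.0) + alpha * b.get(k, 0.0)
            for k in set(q) | set(b)}
```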
We tested 36 VSM systems based on the combination of four parameters (notation: p/m/l; c/t/d; 0/1; 0/1): query length (small (p), medium (m), large (l)), term source (body (c), header (t), complete document (d)), phrase use (1 = yes, 0 = no) and feedback use (1 = yes, 0 = no).
Table 1. Notation used for VSM retrieval systems. (v$Query_Length$Index$Phrases$Feedback)

| System | Query Length | Index | Phrases | Feedback |
|--------|--------------|-------|---------|----------|
| v**** | p - short | d - complete document | 0 - without | 0 - no |
| | m - medium | c - body of document | 1 - with | 1 - yes |
| | l - long | t - header of document | F - combination | |
| | F - combination | | | |
Link analysis
The HITS system’s algorithm was modified by adopting a couple of improvements from other HITS-based approaches. As implemented in the ARC algorithm [9], the root set was expanded by 2 links instead of 1 link (i.e. expand $S$ by all pages that are 2 link distance away from $S$). All intrahost links and stoplist URLs were eliminated from the hub and authority score computations. Stoplist URLs, defined as Web pages with very high indegree, were selected from the list of URLs with indegree greater than 500. Also, the edge weights of [9], which essentially normalize the contribution of authorship by dividing the contribution of each page by the number of pages created by the same author, were used to modify the HITS formulae as follows:
$$a(p) = \sum_{q \rightarrow p} h(q) \times auth\_wt(q, p)$$ \hspace{1cm} (5)
$$h(p) = \sum_{p \rightarrow q} a(q) \times hub\_wt(p, q)$$ \hspace{1cm} (6)
In the formulae above, $auth\_wt(q, p)$ is $1/m$ for page $q$, whose host has $m$ documents pointing to $p$, and $hub\_wt(p, q)$ is $1/n$ for page $q$, which is pointed to by $n$ documents from the host of $p$. To compute the edge weights of the modified HITS algorithm, as well as to eliminate intrahost links, one must first establish a definition of a host to identify page authorship (i.e. documents belonging to a given host are created by the same author). Though host identification heuristics employing link analysis might be ideal, we opted for simplistic host definitions based on URL lengths. The short host form was arrived at by truncating the document URL at the first occurrence of a slash mark (i.e. ‘/’), and the long host form at the last occurrence. We tested 6 HITS systems based on 2 parameters: host definition (short (p), long (l)) and seed set from VSM systems ((p) Vpc10, (m) Vmc10, (l) Vlc10).
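The weighted iteration of equations (5) and (6) can be sketched as follows; this is a minimal illustration (names are ours) that assumes the root-set expansion, intrahost-link removal and stoplist filtering have already been applied, and that both weight tables are keyed by (source, target):

```python
from collections import defaultdict

def weighted_hits(edges, auth_wt, hub_wt, iters=50):
    """Modified HITS iteration, equations (5)-(6), with edge weights
    normalizing per-host authorship contributions.

    edges: list of (q, p) pairs, meaning page q links to page p.
    auth_wt[(q, p)] = 1/m and hub_wt[(q, p)] = 1/n as defined in the text
    (the text writes the hub weight as hub_wt(p, q) with p the hub).
    """
    nodes = {n for e in edges for n in e}
    a = dict.fromkeys(nodes, 1.0)  # authority scores
    h = dict.fromkeys(nodes, 1.0)  # hub scores
    for _ in range(iters):
        new_a = defaultdict(float)
        new_h = defaultdict(float)
        for q, p in edges:
            new_a[p] += h[q] * auth_wt[(q, p)]   # eq. (5)
        for q, p in edges:
            new_h[q] += a[p] * hub_wt[(q, p)]    # eq. (6)
        # L2-normalize both score vectors to keep them bounded
        for vec, new in ((a, new_a), (h, new_h)):
            norm = sum(v * v for v in new.values()) ** 0.5 or 1.0
            for n in nodes:
                vec[n] = new.get(n, 0.0) / norm
    return a, h
```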
Table 2. Notation used for HITS retrieval systems. (h$Seed_set$Site_length)

| System | Seed set (v*c10) | Site length |
|--------|------------------|-------------|
| h** | short (p) | short (p) |
| | medium (m) | long (l) |
| | long (l) | combination (F) |
| | combination (F) | |
Classification
The Web Directory search was implemented based on the Term Match (TM) method. TM takes a simpler approach of finding categories in which query terms occur by extending the typical category search implementation of Web directory services.
The first phase of the TM method, which produces a ranked list of categories for a query, matches query terms to terms in the Yahoo sitemap files (i.e. category labels, Yahoo site titles and descriptions, URLs) to find a set of matching nodes in the classification hierarchy and generates a ranked category list in the following manner:
1. For each matching category, (i) compute $tfc$ (number of unique query terms in the category label); (ii) compute $tfs$ (number of unique query terms in the site title and description) in all its sites; (iii) compute $pms$ (proportion of sites with query terms in the category).
2. Rank the matching categories in the descending order of $tfc$, $tfs$, and $pms$.
Note that categories are ranked via sorting by multiple variables in such an order that terms in category labels, which are likely to be highly “powerful”, are given precedence over terms in site titles or descriptions. This ranking approach is similar to how Yahoo ranks its search results, except that it combines the category and site match results while collapsing the site match results to their parent categories.
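The two-step category scoring and ranking above can be sketched as follows; this is a minimal illustration in which the field names, the whitespace tokenization, and the exact definition of $tfs$ as a sum over sites are our assumptions, not the actual Yahoo sitemap format:

```python
def tm_scores(category, query_terms):
    """Compute (tfc, tfs, pms) for one matching category.

    category: dict with 'label' (category label string) and 'sites'
    (list of strings, each a site title plus description).
    """
    q = {t.lower() for t in query_terms}
    label_terms = set(category["label"].lower().split())
    tfc = len(q & label_terms)                       # query terms in label
    site_sets = [set(s.lower().split()) for s in category["sites"]]
    tfs = sum(len(q & s) for s in site_sets)         # query terms in sites
    pms = (sum(1 for s in site_sets if q & s) / len(site_sets)
           if site_sets else 0.0)                    # proportion of sites hit
    return tfc, tfs, pms

def rank_categories(categories, query_terms):
    """Rank matching categories in descending order of (tfc, tfs, pms)."""
    return sorted(categories,
                  key=lambda c: tm_scores(c, query_terms),
                  reverse=True)
```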
The second phase of the TM method is to expand the query vector (the class centroid in the TM method) built from the best matching categories, to produce a ranked list of the WT10g documents. The expanded query vector of the TM method is a vector of selected category terms with normalized term-category association weights. The parameters tested for the TM systems are the number of top categories used, the WT10g term index, and the use of terms for pseudo-feedback. The combination of these parameters (3 top-category counts (1/2/3); 4 WT10g term indexes: body text without phrases (1), body text with phrases (2), body+header without phrases (3), body+header with phrases (4); feedback use (1 = yes, 0 = no)) resulted in 24 TM systems.
Table 3. Notation used for classification-based retrieval systems. (t$#_Top_Cat$Index$Feedback)

| System | # Top Cat | Index | Feedback |
|--------|-----------|-------|----------|
| t*** | 1 | 1 - body text, no phrases | 0 |
| | 2 | 2 - body text, with phrases | 1 |
| | 3 | 3 - body+header, no phrases | F |
| | F | 4 - body+header, with phrases | |
Probabilistic
The probabilistic approach involves a two-step process of contingency table construction and association weight calculation. If each document $D_i$ in a collection is regarded as a multi-set $a_i$ of $m$ document terms and $b_j$ of $n$ query terms, i.e. $D_i = (\{a_{i1}, \ldots, a_{im}\}; \{b_{j1}, \ldots, b_{jn}\})$, the associations contained in a particular document $D_i$ consist of all the ordered pairs that can be formed from the $a_{im} \times b_{jn}$ document subparts. For each document term $A$ and query term $B$ (i.e. each $a_{im}$–$b_{jn}$ pair), a contingency table is formed containing the counts for each of the possible combinations of $A$ and $B$:

| $AB$ | $A¬B$ |
|------|-------|
| $¬AB$ | $¬A¬B$ |

where “¬” denotes the absence of an event. The possible combinations are $AB$, where both events occur; $A¬B$, where event $A$ occurs without $B$; $¬AB$, where $B$ occurs without $A$; and finally $¬A¬B$, where neither $A$ nor $B$ occurs. As the pairs for each document are considered, contingency tables are constructed and updated. When all the pairs and contingency tables have been recorded after processing all the documents in a collection, the strength of the associations can be computed for each document_term–query_term pair using a likelihood ratio statistic as the measure of association. The strength of association is computed by the following formula:

$$\lambda^2 = 2 \left[ \ln \frac{L(p_1, k_1, n_1)}{L(p, k_1, n_1)} + \ln \frac{L(p_2, k_2, n_2)}{L(p, k_2, n_2)} \right], \quad (7)$$

where: $\ln(L(p, k, n)) = k \ln p + (n - k) \ln(1 - p)$; $p_1 = \frac{k_1}{n_1}$; $p_2 = \frac{k_2}{n_2}$; $p = \frac{k_1 + k_2}{n_1 + n_2}$; $k_1 = AB$, $n_1 = AB + ¬AB$, $k_2 = A¬B$, and $n_2 = A¬B + ¬A¬B$.

For each document_term–query_term pair, the strength of association is calculated. Documents are ordered by the sum over all document terms associated with query terms; e.g. document_term$_1$–query_term$_1$ with weight $w_1$ and document_term$_1$–query_term$_2$ with weight $w_2$ can be collapsed into a single entry with weight $(w_1 + w_2)$.
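Equation (7) can be computed directly from the four contingency counts; a minimal sketch with illustrative names (the log-likelihood helper treats $0 \ln 0$ as 0):

```python
import math

def _ln_L(p, k, n):
    """ln L(p, k, n) = k ln p + (n - k) ln(1 - p), with 0 * ln 0 := 0."""
    def term(count, x):
        return count * math.log(x) if count else 0.0
    return term(k, p) + term(n - k, 1.0 - p)

def log_likelihood_ratio(ab, a_nb, na_b, na_nb):
    """Likelihood-ratio association strength, equation (7), from the
    contingency counts AB, A¬B, ¬AB, ¬A¬B."""
    k1, n1 = ab, ab + na_b        # k1 = AB,  n1 = AB + ¬AB
    k2, n2 = a_nb, a_nb + na_nb   # k2 = A¬B, n2 = A¬B + ¬A¬B
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    return 2.0 * (_ln_L(p1, k1, n1) - _ln_L(p, k1, n1)
                  + _ln_L(p2, k2, n2) - _ln_L(p, k2, n2))
```

When $A$ and $B$ are independent the statistic is near zero; strong co-occurrence gives a large positive value.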
Table 4. Notation used for probabilistic retrieval systems.

| System | Index WT10g (Phrases) | Query length | Feedback |
|--------|-----------------------|--------------|----------|
| p*** | header (t) | short (p) | no (0) |
| | document (d) | medium (m) | yes (1) |
| | F | long (l) | F |
| | | combination (F) | |
3. RESULTS
Individual systems: Among the VSM system parameters tested (query length, term source, use of phrase terms, and use of pseudo-feedback), query length and term source were found to be the most influential for the retrieval outcome. The influence of query length, which may be related to the amount of information available, seems intuitive. Regardless of other parameter combinations, longer queries performed better than shorter queries in all cases except when system performance was degraded by the adverse effect of header text terms. The system performance order with regard to query length and term source can be written as:
\[ \text{vlc*} > \text{vmc*} > \text{vld*} > \text{vpc*} > \text{vmd*} > \text{vpd*} > \text{v*t*} \quad (8) \]
The adverse effect of header text terms (i.e. HTML titles, meta keywords and descriptions, heading text marked up by \(<\text{H}>\) tags) appears even more pronounced when results are grouped by term source. All body text systems (\(v*c*\)) performed better than those using the complete document (\(v*d*\)) except when shorter queries were used (i.e. vld*/vmd* > vpc*). The severe degradation of performance introduced by the use of header text terms can be seen in Figure 1, where the performance differentials between the top body text and the top header text systems are quite pronounced by all measures. It is somewhat surprising that header text had such an adverse effect on retrieval performance. Ideally, document titles and headings, not to mention meta descriptions and keywords, should contain the important concepts of the document and thus be beneficial for retrieval. The fact that the results show otherwise could be due to the nature of Web documents, which may sometimes be intentionally misleading, or constructed with less attention to content and more attention to appearance than purely textual documents.
The use of phrases resulted in only a marginal increase in performance. Similarly, the use of pseudo-feedback resulted in a slight decrease in performance in most cases. In comparison with the TREC official runs, the best VSM systems for short and long queries fall between the third and fourth quartiles in system ranking. Even so, the average precision of the best VSM system is roughly twice that of the top TM system and four times that of the top HITS system. The best performing VSM system, measured by average precision, was vlc10 (long query, body text, phrases, no feedback). The best HITS system was hpm (short host, seed set from vmc10) for topics 451-500. The best TM system, which differed over topic sets as HITS did, was t221 (top 2 categories, body text, phrases, no feedback) for topics 451-500. The best probabilistic system was pdp0 (document index, short query, no feedback).
Figure 1. Results of VSM systems for topics 451-500.
Figure 2. Results of HITS systems for topics 451-500 (hpopt is produced using all previously known relevant documents).
Figure 3. Results of TM (left) and Probabilistic (right) systems for topics 451-500.
Figure 4. Resume internal combination of parameters for topics 451-500.
Notation for all systems is in tables 1 to 4; RRN = Total Number of Relevant documents; avgP = average precision averaged over queries; optF = optimum F; R-P = R-Precision; P@k = Precision at rank k.
In general, the most influential system parameter appears to be the query length. It is interesting to note that VSM and HITS systems benefit from longer queries, whereas TM systems perform better with shorter queries. Host definition, which determines the elimination of intrahost links and computation of link edge weights, seems to be a crucial parameter for HITS systems.
Internal system parameters combination: We implemented two of the most common combination formulas, the Similarity Merge [10,11,12,13] and the Weighted Sum [14,15,16,17].
The Similarity Merge (SM) combination formula, originally introduced by Fox and Shaw [10,11] and refined by [12, 13], computes the combination score of a document by the sum of normalized component scores boosted by the retrieval overlap. In order to address the issue of combining a large number of systems with uneven distribution across methods, the overlap count was normalized by the number of systems in a given method. Equation (9) below describes the SM combination formula used to merge and rank documents retrieved by different systems:
\[ CS = \left(\sum NS_i\right) \ast \frac{olp}{m(i)} \quad (9); \text{ where: } CS = \text{combination score of a document;} \]
\[ NS_i = \frac{(S_i - S_{\min})}{(S_{\max} - S_{\min})} \quad (10); \] $NS_i$ is the normalized score of a document by system $i$, computed by Lee’s min–max formula (1996, 1997), where $S_i$ is the retrieval score of a given document and $S_{\max}$ and $S_{\min}$ are the maximum and minimum document scores by system $i$; $olp$ = number of systems that retrieved a given document; $m(i)$ = number of systems in the method to which system $i$ belongs. This formula is identified in figure 4 by the suffix ‘a’.
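Equations (9) and (10) can be sketched as follows; this is a single-method simplification in which $m(i)$ is a constant $m$, with names of our own choosing:

```python
def min_max_normalize(scores):
    """Lee's min-max normalization, equation (10):
    NS = (S - S_min) / (S_max - S_min)."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # guard against identical scores
    return {d: (s - lo) / span for d, s in scores.items()}

def similarity_merge(runs, m):
    """Similarity Merge, equation (9), for systems of one method:
    CS(d) = (sum of normalized scores of d) * olp(d) / m, where olp(d)
    is the number of runs that retrieved d and m the number of systems
    in the method.

    runs: list of {doc: raw_score} dicts, one per system.
    """
    norm = [min_max_normalize(r) for r in runs]
    docs = set().union(*norm)
    cs = {}
    for d in docs:
        hits = [r[d] for r in norm if d in r]  # normalized scores for d
        cs[d] = sum(hits) * len(hits) / m      # overlap-boosted sum
    return cs
```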
To compensate for the differences among combination component systems, the *Weighted Rank Sum* (WRS) formula, which uses rank-based scores (e.g. 1/rank) in place of the document scores of the WS formula, was tested:
\[ CS = \sum(w_i \ast RS_i), \quad (11); \] where $CS$ is the combination score of a document, $w_i$ the weight of system $i$, and $RS_i$ the rank-based score of a document by system $i$. This formula is identified in figure 4 by the suffix ‘b’.
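With 1/rank scores, equation (11) can be sketched as (names are ours; documents missing from a run contribute nothing):

```python
from collections import defaultdict

def weighted_rank_sum(ranked_runs, weights):
    """Weighted Rank Sum, equation (11): CS(d) = sum_i w_i * (1 / rank_i(d)).

    ranked_runs: list of ranked document lists (best first), one per system;
    weights: one weight w_i per run.
    """
    cs = defaultdict(float)
    for w, run in zip(weights, ranked_runs):
        for rank, d in enumerate(run, start=1):
            cs[d] += w / rank  # rank-based score RS_i = 1 / rank
    return dict(cs)
```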
Comparing the best performances of SM and WRS combination with the best baseline system reveals some interesting patterns of interplay between the combination formula and the retrieval method. In both VSM and TM combination, WRS closely shadows the baseline system while SM falls below the baseline performance. In HITS combination, however, SM results are the best by all performance measures while WRS seems to overtake and surpass the baseline performance at lower ranks.
4. RESULTS DISCUSSION AND CONCLUSIONS
In order to investigate the effects of various evidence source parameters, 36 text-based systems based on the Vector Space Model, 6 link-based systems using the HITS algorithm, 24 classification-based systems using the Yahoo category term-matching approach, and 12 probabilistic systems were implemented to produce 78 sets of retrieval results for each of the 100 WT10g topics. The retrieval results were then combined in a comprehensive manner within each method as well as across methods, using a score-based and a rank-based combination formula, producing an additional 192 VSM, 24 HITS, 120 TM and 72 probabilistic systems. In total we produced 488 systems using a common platform [2] to avoid a large per-system construction effort. Analysis of the results suggests that *query length and host definition are the most influential system parameters for retrieval performance*.
For VSM systems, and for HITS systems that use the VSM results as the seed documents, longer queries produced far better results than shorter queries, while shorter queries produced better results in TM systems. The host definition, which directly influences both the elimination of intrahost links and the link weight computation of the HITS algorithm, turned out to be a crucial parameter for HITS systems, with the shorter definition clearly superior to the longer one.
For HITS systems, the quality of the seed document set, in both the number of relevant documents and the richness of the link topology, appeared to be vital for their effectiveness. Even the optimum HITS system, using the seed set of all known relevant documents, produced disappointing results, because many queries yielded only a small number of relevant documents and the link topology of WT10g may be truncated and spurious. In fact, 83 of the 100 seed sets produced by the best VSM system were composed of 83% or more non-relevant documents, which severely limited the maximum achievable performance of HITS systems. Among the retrieval systems tested, VSM systems clearly outperformed the others, with TM systems showing better results than HITS systems. In general, the average precision of VSM systems was roughly twice that of TM systems and four times that of HITS systems. Internal combination of VSM and TM systems behaved similarly in that combination detracted from the baseline performance, although combining TM system results with the SM formula degraded the baseline much more severely than VSM combination did. Combinations in general identified more relevant documents; the SM formula improved HITS results but degraded TM and VSM results, while the WRS formula produced similar results across all systems (Figure 4).
REFERENCES
[1] http://www.ted.cmis.csiro.au/TRECWeb/access_to_data.html.
[2] Ferreira, João; Silva, Alberto; Delgado, José (2004). *Infraestrutura modular de teste para pesquisa de informação*. Proceedings of the IADIS Conferencia Ibero-Americana WWW/Internet 2004 - October 7 - 8, 2004.
[3] Yang Y. & Pederson J. O. (1997). *Feature selection in statistical learning of text categorization*. Proceedings of the 14th International Conference on Machine Learning.
[4] Sparck Jones, K. (1971). *Automatic Keyword Classification for Information Retrieval*. London: Butterworth.
[5] Fishburn P. C., 1970 *Utility theory for decision making*, John Wiley, New York.
[6] Wong S. K. M. Yao Y. Y. Salton G. & Buckley C., 1991. *Evaluation of an adaptive linear model*. JASIS, 42 723-730.
[7] Sumner, R. G., Jr., & Shaw, W. M., Jr., 1997. An investigation of relevance feedback using adaptive linear and probabilistic models. In E. M. Voorhees & D. K. Harman (Eds.), The Fifth Text REtrieval Conference (TREC-5).
[8] Sumner, R. G., Jr., Yang, K., Akers, R., & Shaw, W. M., Jr., 1998. *Interactive retrieval using IRIS: TREC-6 experiments*. In E. M. Voorhees & D. K. Harman (Eds.), The Sixth Text REtrieval Conference (TREC-6).
[9] Kleinberg, J., 1997. *Authoritative sources in a hyperlinked environment*. Proceedings of the 9th ACM-SIAM Symposium on Discrete Algorithms.
[10] Fox E. A. & Shaw J. A., 1994. *Combination of multiple searches*. In D. K. Harman (Ed.). TREC-2.
[11] Fox, E. A., & Shaw, J. A., 1995. *Combination of multiple searches*. In D. K. Harman (Ed.), The Third Text Retrieval Conference (TREC-3) Washington, DC: U.S. Government Printing Office, NIST Spec. Publ. 500-225, 105-108.
[12] Lee J. H., 1996. *Combining multiple evidence from different relevance feedback methods* (Tech. Rep. No. IR-87). Amherst: University of Massachusetts Center for Intelligent Information Retrieval.
[13] Lee J. H., 1997. *Analyses of multiple evidence combination*. Proceedings of the ACM SIGIR Conference on Research and Development in IR, 267-276.
[14] Bartell, B. T., Cottrell, G. W., & Belew, R. K., 1994. *Automatic combination of multiple ranked retrieval systems*. Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval.
[15] Larkey, L. & Croft, W. B., 1996. *Combining Classifiers in Text Categorization*. Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, 289-297.
[16] Modha, D. & Spangler, W. S., 2000. *Clustering hypertext with applications to Web searching*. Proceedings of the 11th ACM Hypertext Conference, 143-152.
[17] Thompson, P., 1990. A combination of expert opinion approach to probabilistic information retrieval, part 1: The conceptual model. Information Processing & Management, 26(3), 371-382.
Vitamin $B_{12}$ Deficiency: A New Risk Factor for Breast Cancer?
A prospective epidemiologic study found a threshold level for serum vitamin $B_{12}$ below which an increased risk of breast cancer among postmenopausal women was observed. This is the first observation to suggest that $B_{12}$ status may influence breast carcinogenesis and therefore may be a modifiable risk factor for breast cancer prevention.
Breast cancer is the most common cancer in women. It is estimated that 176,300 new cases of breast cancer will be diagnosed in 1999 in the United States and 43,700 deaths will be directly attributable to the cancer.\(^1\) The magnitude of the problem is similar in most developed nations. Breast cancer strikes women of all ages, races, ethnicities, socioeconomic strata, and geographic locales. Until recently, the incidence of breast cancer appeared to be rising. This trend held true, even when corrected for an increased detection rate owing to improved technology and heightened public awareness. Recent data from the National Cancer Institute now show a decline in breast cancer mortality, suggesting that earlier detection and/or improvements in treatment may be making the prognosis somewhat more favorable.
Breast cancer is a heterogeneous disease whose etiology for the most part is unknown.\(^2\) However, it is generally agreed that there are three broad determinants of breast cancer risk. One determinant is heredity, such as carrying a germline mutation in the BRCA1 or BRCA2 gene. Another determinant is hormonal and reproductive factors. Women have a 180-fold higher risk of breast cancer than men, and among women, age at menarche, age at menopause, age at first childbirth, and parity influence breast cancer risk. A third determinant is environment, such as socioeconomic status, lifestyle habits, and diet.
Many dietary components have been evaluated in epidemiologic or animal studies for their influence on breast cancer risk. One of the first dietary components to be implicated in the high incidence of breast cancer was dietary fat.\(^3\) Correlations observed between breast cancer rates and per capita animal fat consumption in different countries have been followed by studies on the association between individual fat consumption and breast cancer risk in women. However, prospective studies have failed to show a clear association between a woman’s daily total fat intake and breast cancer risk. Alcohol also is a well-established dietary risk factor in breast cancer. Studies with alcohol have shown that moderate consumption increases breast cancer risk by approximately 10%, reportedly by altering estrogen metabolism.
On the other hand, most case-control studies have suggested that a higher intake of fiber, vegetables, and fruits protects against breast cancer. Vegetables and fruits are rich in fiber, minerals, anticarcinogenic phytochemicals, and antioxidant vitamins. A high fiber intake is thought to reduce estrogen levels by decreasing absorption of estradiol formed in the colon, and several phytochemicals have shown a protective effect in experimental models of mammary gland carcinogenesis. Numerous epidemiologic studies have investigated antioxidant vitamins such as $\beta$-carotene, vitamin C, and vitamin E for a protective effect against breast cancer. The results of case-control and prospective studies, however, are unconvincing.\(^3\)
The recent report by Wu et al. suggests a relationship between serum vitamin $B_{12}$ concentrations and breast cancer risk.\(^4\) They conducted a nested case-control study using resources from the Washington County (Maryland) serum bank to investigate the incidence of breast cancer and prediagnostic serum levels of folate, $B_{12}$, and vitamin $B_6$. Among women who were postmenopausal at blood donation, the observed association between $B_{12}$ and breast cancer suggested a threshold effect: an increased risk of cancer was observed in the quintile of subjects with the lowest $B_{12}$ levels compared with the higher four-fifths of the control distribution. On the other hand, the investigators found no evidence of an association between breast cancer incidence and blood levels of folate, vitamin $B_6$, or homocysteine, all of which, like $B_{12}$, are involved in one-carbon metabolism.
To my knowledge, this is the first time a relationship between blood levels of $B_{12}$ and breast cancer incidence has been examined. As is true of most observational studies, it is not known whether the low $B_{12}$ status in these subjects is a factor that enhances breast cancer development.

Figure 1. The interrelationships of vitamin $B_{12}$, folate, and methionine. THF = tetrahydrofolate; DHF = dihydrofolate; SAM = S-adenosylmethionine; SAH = S-adenosylhomocysteine.
Vitamin $B_{12}$ (cobalamin) has been a challenging problem in biochemistry and medicine since the discovery by George Minot and William Murphy in 1926 that pernicious anemia could be treated by feeding the patient large amounts of liver. A possible functional relationship between $B_{12}$ and cancer was established in 1954, when Massey and Rubin described the persistence of abnormal gastric columnar cells in the stomachs of individuals with pernicious anemia, even after the anemia had been successfully treated with $B_{12}$. They postulated that these abnormal cells might represent a transitional cell type between cells that characterize the atrophic gastritis epithelium in pernicious anemia and gastric cancer cells. Two prospective clinical intervention trials subsequently demonstrated a significant degree of attenuation in bronchial metaplasia after supplementation with $B_{12}$ and folate. In 1986 Herbert suggested that the role of $B_{12}$ in preventing carcinogenesis is largely an extension of, and linked to, its roles in normal metabolism, particularly one-carbon unit metabolism. A possible key function in this regard may be its role in DNA methylation whereby hypomethylation “switches on” genes and methylation “switches them off.”
In mammalian tissue, $B_{12}$ is involved as a coenzyme in two types of reactions: 1) rearrangements, exemplified by the conversion of L-methylmalonyl CoA into succinyl CoA, which is the point of entry for some of the carbon atoms of methionine, isoleucine, threonine, and valine into the Krebs cycle, and 2) biological methylation, as in the synthesis of methionine. A $B_{12}$-containing enzyme removes a methyl group from methyl folate and delivers it to homocysteine, thereby converting homocysteine to methionine and regenerating tetrahydrofolate, from which 5,10-methylene tetrahydrofolate (necessary for thymidylate synthesis) is made (Figure 1). Because methyl folate may only return to the body’s folate pool via a $B_{12}$-dependent step, a patient with $B_{12}$ deficiency has much of his folate “trapped” as methyl folate, a metabolically inactive form as far as nucleotide synthesis is concerned. This “folate trap” hypothesis helps explain why the hematologic damage of $B_{12}$ deficiency is not clinically distinguishable from that of folate deficiency. In either deficiency, lack of adequate DNA synthesis causes many hemopoietic cells to die in the bone marrow and induces megaloblastosis, or so-called “ineffective erythropoiesis.” Megaloblastosis is also present in all other rapidly duplicating cells of the body, such as the gastrointestinal epithelium.
The features of $B_{12}$ metabolism suggest two possible mechanisms by which $B_{12}$ depletion might enhance carcinogenesis, and these are quite similar to folate-related carcinogenesis. One potential mechanism is an increase in DNA strand breaks. A folate trap produced by $B_{12}$ deficiency decreases thymidine and purine synthesis and subsequently induces a deoxyribonucleotide pool imbalance. This imbalance causes uracil to be incorporated into DNA instead of thymidine. Uracil in DNA is excised by a repair glycosylase with the formation of a transient single-strand break in the DNA; two opposing single-strand breaks cause a double-strand chromosomal break, which is difficult to repair and which may ultimately turn into a mutation. In studies of healthy elderly men and of young adults, increased chromosome breakage correlated either with a deficiency of folate or $B_{12}$ or with elevated levels of homocysteine; dietary supplementation of $B_{12}$ above the RDA level minimized this effect.
The other possible mechanism is an alteration of DNA methylation by the methyl folate trap.\textsuperscript{8} Mammalian DNA is methylated at deoxycytidine residues. Nearly all of these methylated residues reside in cytosine-guanine (5'-CpG-3') sequences, and 70–90% of the deoxycytosines in CpG sequences are methylated (which accounts for 3–5% of deoxycytosine in DNA). Even though the entire array of functions of DNA methylation is not yet known, methylation is an important determinant of gene expression and of the structural stability of DNA.\textsuperscript{10}
The precise means by which aberrations in DNA methylation play a role in carcinogenesis remain ill-defined, but it is clear that normal patterns of DNA methylation are necessary to maintain cellular homeostasis. Abnormal DNA methylation patterns are characteristic of neoplastic cells. Three different types of aberrant DNA methylation are associated with carcinogenesis.\textsuperscript{12} The first is widespread areas of genomic hypomethylation. Genomic DNA hypomethylation is a common phenomenon in cancers in the colon, lung, stomach, uterus, and cervix. Recently Soares et al. reported that DNA from breast cancer also showed significant hypomethylation compared with DNA from benign lesions and from normal breast tissue.\textsuperscript{13} The second aberrant pattern is regional hypermethylation. These changes usually occur in CpG dinucleotides that are clustered in regions of approximately 1–2 kb in length, called "CpG islands," in or near the promoter and first exon region of genes. Hypermethylation of promoter regions has been implicated in carcinogenesis because it turns off expression of these genes. Hypermethylation has been previously observed in breast cancer-related tumor suppressor genes, such as the estrogen receptor gene and the mammary-derived growth factor gene, as well as a breast cancer susceptibility gene, BRCA1. Aberrant cytosine methylation of the BRCA1 CpG island promoter is one of the candidate mechanisms of BRCA1 repression in sporadic breast cancer.\textsuperscript{14} The third aberrant pattern is an alteration of methylation in specific coding regions. CpG sites are mutational hot spots in many human tumors. Spontaneous or enzymatic deamination of 5-methyl-cytosine to thymine and enzymatic deamination of unmethylated cytosine to uracil ultimately can lead to a C to T mutation. 
In an animal study,\textsuperscript{15} folate deficiency induced hypomethylation in the hypermutable exons of p53 gene, raising the question whether these might be molecular lesions that precede the mutations commonly seen in this region. In this animal experiment, DNA strand breaks also were increased in the same region. Although DNA methylation is one of the most important epigenetic changes in breast carcinogenesis and B\textsubscript{12} metabolism has been known to be involved in DNA methylation, direct evidence to support this hypothesis is not yet available.
In future breast cancer studies, B\textsubscript{12} depletion should be considered as a potential dietary risk factor. Mechanistic studies also would be of interest.
Another interesting observation in this study is the lack of a relationship between breast cancer incidence and plasma or serum folate level. In 1991 a case-control study found that women with breast cancer had a substantially lower ingestion of dietary folate compared with controls without breast cancer, and that dietary folate intake was inversely associated with the risk of developing breast cancer.\textsuperscript{16} In a more recent prospective study, the excess risk of breast cancer associated with alcohol consumption was reduced by adequate folate intake, even though total folate intake was not associated with the overall risk of breast cancer.\textsuperscript{17} It is well known that alcohol consumption interferes with folate metabolism, probably by changing the distribution of the different coenzyme forms of folate; this is quite similar to the effect of B\textsubscript{12} deficiency. The present study, however, did not find any association between plasma or serum folate level and breast cancer incidence, or any evidence for a protective effect of B-vitamin supplementation.
We can propose several reasons why the results of epidemiologic studies of folate status and breast cancer incidence are conflicting. First, a growing body of epidemiologic and clinical evidence suggests that, regardless of systemic folate status, the susceptibility to folate depletion varies widely among tissues and may be a factor that predisposes the body to the development of neoplasms originating from these tissues. For example, in animal studies the liver is highly sensitive to low folate diets, whereas the brain is very resistant. It is not known yet whether systemic folate status directly reflects the folate concentrations in breast tissue. Second, plasma or serum folate concentration, which reflects short-term stores, is less reliable as an index for tissue folate levels compared with red blood cells (RBC) folate concentration, which reflects intermediate-term stores. Lashner et al.\textsuperscript{18} found that the RBC folate concentration was significantly correlated with the development of dysplasia and cancer in ulcerative colitis, but serum folate concentration was not. Third, alteration of folate distribution may be more important than total folate level in the tissue. Methylenetetrahydrofolate reductase (MTHFR) is a critical enzyme in folate metabolism. A common mutation of this MTHFR gene (C677T) causes thermolability and reduced activity of MTHFR, and men with the homozygous mutation have half the risk of colon cancer, especially with normal folate status.\textsuperscript{19} The product of MTHFR, 5-methyltetrahydrofolate, provides the methyl group for DNA methylation, whereas its substrate, 5,10-methylenetetrahydrofolate, is required for thymidylate and purine synthesis (Figure 1). The colon cancer protective effect of this MTHFR mutation is related to the increase of 5,10-methylenetetrahydrofolate, especially in normal folate status. This observation suggests that alteration of folate distribution may affect cancer development.
In future studies, folate level in breast tissue or, less desirably, RBC folate concentration should be considered as the measure of folate status instead of plasma or serum folate levels.
1. Landis SH, Murray T, Bolden S, et al. Cancer statistics, 1999. CA Cancer J Clin 1999;49:8–31
2. Snyderwine EG. Diet and mammary gland carcinogenesis. In: Senn HJ, Gelber RD, Goldhirsch A, et al, eds. Recent results in cancer research. Berlin: Springer-Verlag, 1998;3–10
3. Stoll BA. Breast cancer and the Western diet: role of fatty acids and antioxidant vitamins. Eur J Cancer 1998;34:1852–6
4. Wu K, Helzlsouer KJ, Comstock GW, et al. A prospective study on folate, B\textsubscript{12}, and pyridoxal 5’-phosphate (B\textsubscript{6}) and breast cancer. Cancer Epidemiol Biomarkers Prev 1999;8:209–17
5. Massey BW, Rubin CE. The stomach in pernicious anemia: a cytologic study. Am J Med Sci 1954;481–92
6. Heimburger DC, Alexander CB, Birch R, et al. Improvement in bronchial squamous metaplasia in smokers treated with folate and vitamin B\textsubscript{12}. Report of a preliminary randomized, double-blind intervention trial. JAMA 1988;259:1525–30
7. Saito M, Kato H, Tsuchida T, et al. Chemoprevention effects of bronchial squamous metaplasia by folate and vitamin B\textsubscript{12} in heavy smokers. Chest 1994;106:496–9
8. Herbert V. The role of vitamin B\textsubscript{12} and folate in carcinogenesis. Adv Exp Med Biol 1986;206:293–311
9. Herbert V, Das KC. Folic acid and vitamin B\textsubscript{12}. In: Shils ME, Olson JA, Shike M, eds. Modern nutrition in health and disease, 8th ed. Malvern: Lea & Febiger, 1994;402–25
10. Kim YI. Folate and carcinogenesis: evidence, mechanisms, and implications. J Nutr Biochem 1999;10:66–88
11. Fenech M, Aitken C, Rinaldi J. Folate, vitamin B\textsubscript{12}, homocysteine status and DNA damage in young Australian adults. Carcinogenesis 1998;19:1163–71
12. Counts JL, Goodman JI. Hypermethylation of DNA: an epigenetic mechanism involved in tumor promotion. Mol Carcinog 1994;11:185–8
13. Soares J, Pinto AE, Cunha CV, et al. Global DNA hypomethylation in breast carcinoma. Correlation with prognostic factors and tumor progression. Cancer 1999;85:112–8
14. Dobrovic A, Simpfendorfer D. Methylation of the BRCA1 gene in sporadic breast cancer. Cancer Res 1997;57:3347–50
15. Kim YI, Pogribny IP, Basnakian AG, et al. Folate deficiency in rats induces DNA strand breaks and hypomethylation within p53 tumor suppressor gene. Am J Clin Nutr 1997;65:46–52
16. Graham S, Hellmann R, Marshall J, et al. Nutritional epidemiology of postmenopausal breast cancer in western New York. Am J Epidemiol 1991;134:552–66
17. Zhang S, Hunter DJ, Hankinson SE, et al. A prospective study of folate intake and the risk of breast cancer. JAMA 1999;281:1632–7
18. Lashner BA, et al. Red blood cell folate is associated with the development of dysplasia and cancer in ulcerative colitis. J Cancer Res Clin Oncol 1993;119:549–54
19. Ma J, Stampfer MJ, Giovannucci E, et al. Methylenetetrahydrofolate reductase polymorphism, dietary interactions, and risk of colorectal cancer. Cancer Res 1997;57:1098–102
Mediterranean Diet and Coronary Heart Disease: Are Antioxidants Critical?
There is substantial evidence that several variants of the Mediterranean diet reduce the incidence of coronary heart disease (CHD) and perhaps other chronic conditions. Recently, the final results of the Lyon Diet Heart Study, a randomized secondary prevention trial, indicated that the Mediterranean diet substantially reduces the rate of recurrence after a first myocardial infarction. Data from this study also suggest that the Mediterranean diet protects against CHD through mechanisms that are independent of traditional CHD risk factors. We postulate that the antioxidant properties of several plant foods in the Mediterranean diet may be critical mediators of the beneficial effects of this diet.
This review was prepared by Antonia Trichopoulou, M.D., Effie Vasilopoulou, B.Sc., and Areti Lagiou, M.Sc., Department of Nutrition, National School of Public Health, Athens 11521, Greece.
There are several variants of the Mediterranean diet.\textsuperscript{1} Although the diet used on the island of Crete during the 1960s has been credited with substantial health attributes by Ancel Keys and his colleagues in their celebrated ecologic study,\textsuperscript{2} there is no analytical epidemiologic evidence documenting differential effects among the various forms of the Mediterranean diet against any particular disease.
The traditional Mediterranean diet may be thought of as having eight components\textsuperscript{3}: high monounsaturated to saturated fat ratio, high consumption of legumes, high consumption of cereals (including bread), high consumption of fruits, high consumption of vegetables, low consumption of meat and meat products, moderate consumption of milk and dairy products, and moderate ethanol consumption.
Studies during the last two decades have provided insight into the mechanisms underlying the benefits of the Mediterranean diet in relation to coronary heart disease (CHD). It has now been established that monoun-
Labour Market Prospects for Older Workers
Changes in the demographic structure of the labour force occurring in the 1990s have focused attention on the need for employers to re-assess their human resource strategies. Traditionally employers have relied on young people as a source of labour. However, in Britain the number of 16-19 year olds entering the labour market is now substantially reduced. Alternative sources of labour supply have been identified, most notably women who have never had employment, women returners, and older workers.
People in the so-called Third Age currently make up a very substantial part of the UK workforce. In 1990 there were of the order of 3 million workers in employment over the age of 55, two-thirds of whom were male. Older people are to be found in a wide variety of jobs, across all industries and services and in almost every occupational category. Although there are certain areas where older workers are especially likely to be found, such as particular industries (agriculture) or occupational categories (managers), there are few parts of the economy where they are not represented. Given the vast work experience which most of these individuals bring to their jobs, it is clear that this group represents a very important reservoir of skills for the British economy.
There are a number of distinct differences in the distribution of employment for older men and older women. Older female workers are concentrated in 3 main industries: distribution, non-marketed services and miscellaneous services. The most significant occupational group for older women is the ‘other occupations’ category which covers unskilled work, especially cleaning.
Older men are also present in substantial numbers in distribution and non-marketed services, and to a lesser extent in industries such as financial services, construction and transport. The occupational structure for older men is less concentrated. Managerial occupations are important in distribution and agriculture, and craft and operative jobs within manufacturing.
**Labour Market Participation of Older Workers**
In the recent past the activity rates of older males have shown a marked decline, stemming mainly from rising real incomes and earnings and from depressed labour markets. In contrast, the trend for females has been upwards. On the demand side, adjustments in the economy have led to the loss of jobs in low-skilled manual categories, where older males have been disproportionately affected, while growth in the service sector has especially benefited females wanting to work part-time. The extent to which these trends are likely to continue has been the focus of recent research conducted by the Institute for Employment Research, supported by the Department of Employment and by the Carnegie Foundation.\(^1\) The IER assessment of employment and occupations to the year 2000 was used to project and analyse the labour market position of older workers. In addition, the likely variations resulting from two alternative macroeconomic scenarios, a *quality* and a *cost-cutting* scenario, were explored.
Continuing growth in real incomes and earnings, and social trends towards a more equal labour market role for females, suggest that activity rate trends for males and females will continue to diverge. For males, the Institute projects further reductions in supply of the order of a quarter of a million, but for older women increases of the order of 150 thousand by the turn of the century. These contrast with official projections of no change by the Employment Department in its *Gazette*.
**Changes in the Structure of Labour Demand**
On the demand side, key features of the projections are the continued growth of employment in the service sector (despite current problems) but further job losses in primary and production industries. The number of part-time jobs is expected to grow most rapidly, especially in those areas typically taken by women. Occupational structure will continue to shift in favour of managerial, professional and associate professional jobs. Manual, blue collar workers will continue to lose jobs, especially those who have few skills. Self-employment is likely to show further growth.
**Implications for Older Workers**
Some of these changes may benefit older workers. For example, increased availability of part-time jobs may provide opportunities for those who wish to continue working, but not full-time. To the extent that good quality jobs can be opened up to part-time occupancy, on terms and conditions similar to those of full-time work, this would benefit both older men and older women, increasingly legitimise part-time work for men, and potentially retain the skills of those who would otherwise move directly from full-time employment to full-time retirement.
Self-employment is also projected to continue to grow relative to employees in employment. This phenomenon may mean better opportunities for older workers since the self-employed tend to have longer working lives. However, as with the quality of part-time jobs, the quality of self-employment may be problematic.
For others, however, the prognosis is less optimistic. Older workers employed in industries in decline, whose skills are rapidly becoming obsolete, will face a very difficult employment situation. These are likely to be concentrated in manufacturing and other production industries, such as coal mining. Older workers are likely to share some of the benefits of future growth, but they will also bear the brunt of those changes associated with the decline of many traditional industries. However, the age structures within individual industries and occupations will alter only slowly so that by 2000 the overall pattern of employment for older workers will not look too different from that in 1990.
This conclusion also applies to the impact of the alternative macroeconomic scenarios. Clearly the experience of particular groups within the workforce will be sensitive to the fortunes of particular sectors, but within these constraints the effects on older workers are unlikely to be very different from those on younger people.
---
*(Figure omitted. Source: IER estimates.)*
The fortunes of older workers in the labour market will therefore depend crucially upon the qualifications and work experience that they possess. For those with poor qualifications, few skills and jobs in declining sectors the prospects are grim. Much of the burden of restructuring may fall on older workers. In contrast, those workers who are well qualified, highly skilled and in jobs in growth sectors will face exciting prospects for future employment, as employers struggle to find the skills they require in a dynamic economy. This suggests that adult training could be a key issue in determining whether older workers participate more fully in the benefits of economic growth.
**Prospects for Men and Women**
Third age employment and unemployment mean quite different things to men and women. The variations between their experiences of the labour market during the so-called prime-age period leave them in radically different positions by the time they reach their mid-fifties. In addition, among both groups there are very large differences in choices available regarding work and retirement opportunities between those with good career jobs and those who have poor conditions of employment or who are unemployed. This form of differentiation or polarisation overlays that relating directly to gender.
In promoting opportunity in the third age, whilst being concerned about, for example, skill shortages and excessive pension costs, there is a strong case for starting with the notion of improving the integration of older women within the labour market rather than providing incentives to older men to remain longer than they wish under present arrangements. Concern over third age discrimination should then focus on women who have never had labour market positions that fully reflect their competences rather than on men who have lost or wish to relinquish positions that did.
The precise prospects for older workers depend on the interplay between supply decisions and the recruitment and retention policies of employers. The Institute for Employment Research has also undertaken case study research examining directly the experience of employers and the assessments they make about their use of older workers.\(^2\)
**Case Studies of Organisations Employing Older Workers**
In general, employers appear to approach the issue of employing older workers with considerable circumspection. The extent to which a number of specific concerns – the ability of older people to undertake the job, their willingness to remain in employment, increased costs and absenteeism – are in practice well founded was considered in a comparison of two companies in America (a hotel chain and a financial services company) and one establishment of the UK retailer B&Q, all of which made substantial efforts to recruit older workers. In the case of B&Q, 5 comparison establishments were available, none of which had a specific policy to recruit older workers.
**Employee Performance**
On a range of measures indicative of employee performance, older workers match younger workers and, in some cases, surpass them. Older workers, employed primarily in response to high labour turnover within the computerised national reservations system of the hotel chain, operate differently from younger workers but either match or surpass them in terms of reservation rates. The ‘experimental’ B&Q store, staffed by employees aged over 50, compares favourably on 4 different indicators of performance with otherwise similar stores staffed predominantly by younger workers.
| Measures of Store Performance at B&Q | Macclesfield Store | Average of Five Comparison Stores |
|-------------------------------------|-------------------|----------------------------------|
| Sales | 100 | 91 |
| Profitability | 100 | 85 |
| Productivity | 100 | 102 |
| Leakage | 100 | 242 |
(Source: B&Q)
Note: With reference to B&Q data, in order to protect confidentiality of specific store data and allow readers to understand differences to the Macclesfield store in percentage terms, the data has been transformed into an index with Macclesfield set to 100.
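The transformation described in the note is plain base-100 indexing. A minimal Python sketch — the raw store figures below are invented (the real data are confidential), chosen only so the resulting indices match the table:

```python
# Hypothetical store figures: the real data are confidential, so these
# numbers are invented purely to illustrate the base-100 transformation.
macclesfield = {"sales": 250_000, "profitability": 40_000}
comparison_avg = {"sales": 227_500, "profitability": 34_000}

def to_index(value, base):
    """Express `value` relative to `base`, with the base store set to 100."""
    return round(value / base * 100)

indexed = {k: to_index(comparison_avg[k], macclesfield[k]) for k in macclesfield}
# indexed == {'sales': 91, 'profitability': 85}
```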
Labour turnover amongst older workers is substantially lower than that found amongst younger workers. In one American establishment, overall duration of employment amongst younger workers is one year compared to 3 years amongst older workers. In the case of the financial services company, whose labour shortages were dealt with through a recourse to temporary labour and older workers, the latter proved more reliable with higher levels of labour persistence. In the case of B&Q, average staff turnover in the 5 B&Q comparison stores is five times that found in the ‘experimental’ Macclesfield store, despite being located in a similar type of labour market.
| Staff Turnover at B&Q | Index (Macclesfield = 100) |
|-----------------------|----------------------------|
| Macclesfield | 100 |
| Store 1 | 689 |
| Store 2 | 617 |
| Store 3 | 372 |
| Store 4 | 477 |
| Store 5 | 654 |
| 5 Stores Average | 562 |
(Source: B&Q)
**Employment Costs**
The structure of employment costs (wages, health, pension and insurance payments and training costs) is country specific and is also differentiated by age and family structure. Notwithstanding these differences, however, total employment costs in all 3 case studies were lower for older workers than for younger workers. The pattern for the component costs is, however, variable. With regard to wage costs, none of the case study organisations determined pay by age per se. However, where seniority is a component in determining wage levels there is a tendency for older workers to earn more because of the likelihood that they will remain in the job longer than younger workers. Higher pay also reflects higher productivity where profit sharing or bonus schemes operate.
Health care costs are particularly relevant in the American case studies. There is no evidence that these costs are higher amongst older workers than younger workers, in large part due to the self-selection process: older workers who elect for employment tend to be healthy. This in turn has an impact on absenteeism, which is consistently lower than amongst younger workers. The employing organisations all required workers to make use of new technologies. Employers’ expectations of a likely increase in training times and therefore costs (due to a belief in the limitations of older workers vis-a-vis new technologies) are not substantiated by the data. Thus, while it is clear that different advantages and disadvantages accrue to different age groups of workers, the indicative evidence is that, overall, older workers are as important and productive a resource to employers as younger workers.
**Notes**
1. The results reported here are based on research funded by the Department of Employment and by the Carnegie Inquiry into the Third Age. Findings from the latter wide ranging study will be published in full later this year.
2. These studies were supported by a grant from the Commonwealth Fund.
**The Institute**
The Institute for Employment Research was established by the University of Warwick in 1981. The IER has a diverse portfolio of research projects funded by organisations which include the Employment Department Group, the Economic and Social Research Council, the Department of Social Security, the Commission of the European Communities and a number of local authorities.
Current research includes: forecasting occupational change and skill requirements at the national and local levels, the evaluation of employment policies and programmes, recruitment processes, international aspects of labour markets, and the likely labour market consequences of completion of the European single market.
The results of the research project «Development of Algorithms and Program Models for the Analysis of Television and Thermal Images» (code VK 200.16.13) have been presented. The existing methods and algorithms for processing television and thermal video images have been analyzed, and new techniques that will allow the researchers to create more effective video devices and systems for special purposes have been offered.
Key words: image, thermal imager, filtration, contour, comparison of objects, and real-time systems.
Nowadays, in military operations both in the daytime and at night, the crucial factor is to be the first to discover the enemy object against any natural background using automated surveillance systems and to give the operator an image of better quality. This is achieved by improving the quality of television and thermal vision cameras and by using algorithms and technical means for image processing.
The aim of the research project implemented at the V.M. Glushkov Institute of Cybernetics of NASU was to enhance the efficiency of video equipment for weaponry systems (particularly, for armored vehicles) through the development of algorithms for processing images in the visible and infrared bands, which will provide the operator with the necessary comprehensive information.
Recently, many methods for image processing have been developed. Among them are digital filters that can significantly reduce the impact of noise and blur and, by improving contrast, can increase the detecting ability of TV and thermal monitoring channels.
There is no general theory of image enhancement. When the image is processed for visual interpretation, the observer is the ultimate judge of how good a particular method is. Visual assessment of image quality is a very subjective process, which makes image quality an elusive standard for assessing algorithm effectiveness.
The authors have reviewed and elaborated methods and algorithms that can be a basis for developing a set of software tools for video devices and systems. The operator can select from them those necessary to address the major challenges in a particular situation.
Filtering and segmentation make it possible to get an improved image or to highlight individual objects. The algorithms for detection of contours and comparison of them or their individual sections allow the users to recognize objects by shape regardless of shift, rotation, and scale in the presence of noises of different nature. Also, they can be used to adjust the television and thermal images, if not for their matching then, at least, for marking masked thermal objects on the TV images. Panning can improve the reliability of observations with a PTZ camera. The tracking algorithms will make
it possible to track automatically any objects in the image specified by the operator and to feed the coordinates of objects to the actuators.
In addition, the Glushkov Institute of Cybernetics has developed programs to test the proposed algorithms and the architecture of promising calculators and sensor devices for implementation of labor-intensive algorithms in real time. At the same time, the researchers of V.E. Lashkarev Institute of Semiconductor Physics of NASU have analyzed errors affecting the quality of thermal images and the specific features of image processing. They have developed technical and design decisions related to thermal surveillance devices to increase their apparent ability, reliability, and survivability.
Herein, there are some results of research on how to speed up the algorithms for primary image processing in the video devices built on the basis of signal processors, as well as the proposed method and algorithm for image object comparison.
**PRIMARY IMAGE PROCESSING**
The important point for ensuring the real-time operation of specialized image processing devices (SIPD) is to accelerate video processing and to reduce the control signal delays in the feedback loop. This can be done by combining the input and processing of video information, using smaller portions such as lines instead of whole frames when inputting the image [1–4].
Modern signal processors underlying the majority of SIPD make it possible to organize a computational process using the direct memory access channels, which provides the input of new pieces of information from the video sensor while the processor handles a portion of the previous data. In this case, the processor is largely released from the input process, with its internal resources being used sparingly for handling of information. This means that for processing video data it is enough to hold a regular portion instead of the whole image in memory. The processor internal memory, which is much faster than the external one, is sufficient for this purpose. For example, the Blackfin processors manufactured by *Analog Devices* (with a clock speed of 600–800 MHz) read the internal RAM in 1.7–1.3 ns, while the external devices like SDRAM (with a clock speed of 133 MHz) to be connected to these processors read data in 22 ns or more, which is slower by an order of magnitude.
In this context, it is necessary to adapt or to develop new algorithms that make it possible to implement the serial input and processing of individual lines of video image.
From the technical point of view, the image analysis is processing of image accompanied with separation of individual objects, calculation of their parameters, and logical conclusions on the number, quality, and other characteristics of the selected objects. The task is usually divided into several stages, one of which is the primary image processing. As a rule, it includes:
- Image filtering;
- Segmentation;
- Selection and conversion to other representations of object contours;
- Calculation of initial parameters of image objects and so on.
Image filtering implies the removal of «noise», so the image becomes more «clean». Segmentation is division of the image into parts with similar properties, each representing a separate element or group of elements. After the segmentation, the object contours are selected and transformed to other representations, with the characteristics of objects calculated for subsequent analysis.
The mentioned methods and algorithms of filtering belong to the window techniques of image processing. These methods are based on sampling individual pixels with their surroundings within a specified window and performing operations on the selected pixel values, as a result of which a new pixel value is obtained. Masks with different coefficients may be involved. The processing of this type can be done in two ways, in parallel with the input of video data:
1) To organize the storage of several input lines of image in the memory and to generate the results for further steps of processing;
2) To input video data sequentially and to accumulate a few lines of processing results in the memory for subsequent steps of processing.
The number of video data lines to be stored in the memory and the time delay in generation of the results are determined by the mask size in the vertical direction.
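The line-buffered window processing described above can be sketched as follows. This is an illustrative Python prototype only (a signal-processor implementation would look very different): a 3×3 median mask applied while the frame arrives line by line, holding just three input lines in memory — the mask height — and emitting each result row with a one-line delay:

```python
from collections import deque

def filter_lines(lines, width):
    """Apply a 3x3 median mask to an image delivered line by line,
    keeping only the three most recent input lines in memory
    (buffer depth = mask height). The result for row r is yielded
    once row r+1 has arrived, i.e. with a one-line delay."""
    buf = deque(maxlen=3)
    for row in lines:
        buf.append(row)
        if len(buf) == 3:                     # enough context for the middle row
            top, mid, bot = buf
            out = mid[:]                      # border pixels pass through unchanged
            for x in range(1, width - 1):
                window = sorted([top[x-1], top[x], top[x+1],
                                 mid[x-1], mid[x], mid[x+1],
                                 bot[x-1], bot[x], bot[x+1]])
                out[x] = window[4]            # median of the 9 window pixels
            yield out

# Toy 4x4 frame with a single impulse-noise pixel (99).
image = [[10, 10, 10, 10],
         [10, 99, 10, 10],
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
filtered = list(filter_lines(iter(image), 4))  # the 99 spike is suppressed
```

The `deque(maxlen=3)` plays the role of the small circular line buffer that fits in the processor's fast internal memory, which is the point of the second processing scheme above.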
Image segmentation by texture and color features can be divided into 2 groups:
1) Segmenting by known features and characteristics of zones or areas identified in the image;
2) Segmentation based on the pre-determined image characteristics.
For the first group of algorithms, the typical way is either the image analysis through its scanning by a sliding window or the analysis of individual pixels in the case of some problems of color segmentation. Therefore, for this group of algorithms it is expedient to do the processing by inputting image lines sequentially, keeping in the memory the number of lines that matches the vertical size of the sliding window.
For the second group of algorithms, firstly, it is necessary to define the characteristics of the entire image (histograms, statistical parameters, spectral characteristics, etc.), therefore the formation of characteristics and the very segmentation are impossible unless the entire image is obtained and the required characteristics are calculated. When processing the video sequences, the majority of characteristics (e.g., histograms, statistical parameters) certainly can be calculated while inputting the next image. These characteristics can be used for processing the next images.
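A minimal Python sketch of the second-group scheme, under a simplifying assumption: the histogram is accumulated line by line while the current frame is input, and the resulting global threshold (here simply the mean grey level, standing in for whatever criterion a production system would use) is applied to the next frame of the sequence:

```python
def update_histogram(hist, line):
    """Accumulate a 256-bin grey-level histogram while the frame is
    still being input; called once per incoming line."""
    for p in line:
        hist[p] += 1

def mean_threshold(hist):
    """Derive a global threshold from the finished histogram (here simply
    the mean grey level; a real system might use Otsu's criterion).
    The threshold is then applied to the *next* frame of the sequence."""
    total = sum(hist)
    weighted = sum(level * count for level, count in enumerate(hist))
    return weighted / total if total else 0.0

# Accumulate the histogram line by line over a toy two-line frame.
hist = [0] * 256
for line in [[0, 0, 200, 200], [0, 200, 200, 0]]:
    update_histogram(hist, line)
threshold = mean_threshold(hist)   # 100.0 for this toy frame
```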
The authors of [5] have proposed an algorithm that makes it possible to process video data obtained at the previous stages when coding, transforming the contours, and calculating some primary characteristics of objects, without waiting until the entire image frame is input. This is possible because the algorithms of the previous stages (filtering and segmentation) also allow the users to implement image processing and to get results while inputting and processing video data simultaneously.
Thus, obtaining the primary characteristics of objects required for the further analysis with minimal delay with respect to the last line of input video frame is of particular importance for surveillance systems operating in real-time mode.
**IDENTIFICATION OF OBJECTS BY CONTOUR SHAPE**
The authors hereof have proposed a method and an algorithm for comparing the geometric object contours (with accurate definition of scale and orientation) by characteristics of undistorted segments and more accurate numerical estimate of comparison results [6].
Having processed the image and converted it to binary form, the procedure for selecting the contours of analyzed objects and comparing them with the standards is initiated. The procedure is performed in four stages:
1) The contours of analyzed binary objects are separated and converted to the vector form, with their primary parameters such as lengths of contours, moments, and coordinates of the centers of gravity calculated. At this stage, one or several selected contours with primary parameters can be saved as standards for the database. The contour is divided (manually or automatically) into several segments. The coordinates of boundary points of the segments are stored;
2) The candidate pairs (the object and the standard) are selected by the characteristics calculated on the basis of initial parameters by comparison with similar geometric standards for the further comparison. It should be noted that several standards can fit one analyzed object by their characteristics. The approximate scaling relations and approximate mutual orientation are calculated for each selected pair;
3) All selected pairs are sorted. The identical segments of the object contour (of the selected pair) are searched for each segment of the standard contour allowing for approximate scale and orientation, as well as the criteria. In other words, there is a search of the start and the end points of the segment. The criteria of segment identity are approximately equal (with a certain error) lengths
of contour segments of the object and the standard, and the ratio of the distance between the end points to the length of each segment. When comparing the lengths an approximate scale is taken into account, with the ratios being invariant to shift, rotation, and scale;
4) The likely scale, shift, and mutual orientation of objects are estimated on the basis of characteristics of identical contour segments. Thereafter, the standard contour is superimposed on the object contour, with degree of their coincidence calculated. The superimposition with the best estimate is deemed the result for approval of decision or further processing. It is assumed that it is a result of the undistorted segment of object contour identical to the corresponding segment of the standard contour. Accordingly, the parameters obtained as a result of superimposition, i.e. scale, shift, and mutual orientation, are deemed accurate and used for the further analysis.
It is possible to reduce the number of superimpositions and, consequently, the time required to get the final result provided the mutual orientation and scaling relations of the object and the standard are defined approximately (insofar as this narrows the area of searching the identical contour segments) and each superimposition procedure for the current selected segment is checked for its feasibility and is either confirmed or denied depending on result of the check-up. This is possible provided the binary image moments of the objects [7, 8] are used, for which, unlike the contours, the number of pixels does not depend on the orientation.
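A hedged sketch of how binary-image moments supply the approximate scale and mutual orientation mentioned above, using the standard moment formulas (the exact procedure of [7, 8] may differ): the zeroth moment is just the pixel count, so the area ratio gives the scale, and the principal-axis angle from the central second-order moments gives the orientation:

```python
import math

def binary_moments(img):
    """Zeroth moment (pixel count), centre of gravity, and central
    second-order moments of a binary image given as rows of 0/1.
    The pixel count, unlike the contour, does not depend on orientation."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                m00 += 1
                m10 += x
                m01 += y
    cx, cy = m10 / m00, m01 / m00
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                mu20 += (x - cx) ** 2
                mu02 += (y - cy) ** 2
                mu11 += (x - cx) * (y - cy)
    return m00, (cx, cy), (mu20, mu02, mu11)

def scale_and_orientation(obj_img, std_img):
    """Approximate scale from the ratio of pixel counts and mutual
    orientation from the principal-axis angles of the two shapes."""
    a00, _, (a20, a02, a11) = binary_moments(obj_img)
    b00, _, (b20, b02, b11) = binary_moments(std_img)
    scale = math.sqrt(a00 / b00)
    angle_a = 0.5 * math.atan2(2 * a11, a20 - a02)
    angle_b = 0.5 * math.atan2(2 * b11, b20 - b02)
    return scale, angle_a - angle_b

# A horizontal 2-pixel bar vs. the same bar scaled x2 and rotated 90 deg.
std = [[1, 1]]
obj = [[1, 1], [1, 1], [1, 1], [1, 1]]
scale, theta = scale_and_orientation(obj, std)   # scale 2.0, theta pi/2
```

These two estimates can then confirm or reject a candidate superimposition cheaply before the full contour overlay is attempted.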
Fig. 1 shows the snapshots constituting the database of military equipment standards, which consists of 12 objects. In Fig. 2, \(a\), there is a silhouette of a military equipment object in the case of noise type 1 (a part of the image is cut) to be compared with the standards from the database.
The object and the standards are compared by two methods:
1) *The traditional method* (Fig. 2, \(b\)) when the objects are superimposed by the centers of gravity (the mutual orientation is calculated by the
second-order moments; the scaling relations are computed by moments of zeroth order);
2) *The proposed method* (Fig. 2, \(c\)) when the standard contour is divided into 4 segments and the identical segments of the test object contour are searched for.
Fig. 2, \(b\) shows that the comparison by the existing method gives a misleading result, as the test object is identified with a completely different standard (number 7). The comparison based on the proposed method identifies the object with the adequate standard (number 8), as shown in Fig. 2, \(c\).
Fig. 3 shows a military equipment silhouette in the case of noise type 2 (the image of other object overlaps that of object to be identified).
The objects were compared with the standards by the same techniques as mentioned above: the existing method (Fig. 3, b) and the proposed method (Fig. 3, c).
Like in the previous case, the comparison by the existing method gives a misleading result. The object is identified with inappropriate standard 2 (Fig. 3, b). In contrast, the comparison by the proposed method identifies the object correctly with standard 1 (Fig. 3, c).
The proposed method implies the analysis of four segments into which the standard contours are randomly divided. The comparison based on identical segments works well even for significantly distorted object contours.
CONCLUSIONS
The following results of R&D project have been obtained:
1) The errors affecting the quality of thermal images have been analyzed; the peculiarities of using algorithms for processing thermal images have been considered; recommendations for their use have been proposed; the use of color schemes (especially, the high-contrast ones) to improve thermal image recognition has been proposed and justified;
2) The algorithms for primary image processing (oversampling and filtering, segmentation according to various criteria, selection and coding of object contours and calculation of their original characteristics) have been considered; techniques to reduce time and to raise effectiveness of the algorithms due to simultaneous input and processing of information using only the internal storage of the video device signal processor have been proposed;
3) An original method and algorithms for geometric comparison of partly distorted object contours have been proposed; techniques for the separation of areas where the object and the standard do not match and for the numerical evaluation of comparison with the standard, as well as a technique to speed up the comparison with the standard by calculating and using the moments of inertia of object binary image have been developed. These solutions allow the users to identify the objects by shape regardless of shift, rotation, and scale in the cases of noises of different nature. With the help of these techniques it is possible to adjust television and thermal vision images for marking masked thermal objects on the TV images;
4) Algorithms for creating panoramic images from a video sequence have been developed, which increases the reliability of observation with a PTZ camera;
5) The proposed surveillance algorithms can be basis for the systems of surveillance of moving objects (in addition, they ensure significant video compression). The proposed ways to improve computing algorithms can significantly reduce the number of operations required for searching moving objects. The surveillance algorithms make it possible to track automatically the position of object images specified by the operator and transmit their coordinates to the actuators;
6) Design solutions to improve the quality of thermal images have been developed. In particular, a dual-channel system of thermal surveillance (the first channel for the range of 3–5 microns and the second one for the range of 8–14 microns) has been proposed. This improves and facilitates the detection, identification, and monitoring of image objects. The stabilization of aperture temperature has been shown to be among the design solutions that can reduce the error;
7) A parallel multiprocessor architecture that can be implemented on serial programmable logic integrated circuits, and structures of promising sensor devices [9, 10, 11] with parallel processing of video data directly on the sensor matrix (including binarization and separation of image, identification of original characteristics, moments of inertia, etc.), have been offered. These solutions will make it possible to create video devices for real-time processing of images.
The development of algorithms for image processing and analysis allowing for their specificity will facilitate the creation of modern, higher-quality video devices and systems for providing the national defense industry with Ukrainian-made visual media. These devices for digital image processing can be used in many other areas, such as industry, transportation, robotics, medical, biological, and other scientific research.
Three patents and eight papers have been published upon the results of this research; the results have been reported at three international conferences.
The project was carried out by the V.M. Glushkov Institute of Cybernetics and the V.E. Lashkarev Institute of Semiconductor Physics of NASU within the framework of the Program for Support of R&D Projects (Order of the NASU Presidium of 27.02.2013 no. 133) and the project «Development of Algorithms and Software Models for Analyzing TV and Thermal Images» (code VK 200.16.13). The project partner was the state-owned enterprise Fotopylad Scientific and Production Complex, Cherkasy.
REFERENCES
1. Boyun, V.P.: Perception and Processing of Images by Real-Time Systems. *Artificial Intelligence*, 3(61), 114–125 (2013) (in Ukrainian).
2. Boyun, V.P.: Specific Features of Processing of Signals and Images by Real-Time Systems. *Proceedings of International Scientific Conference on Modern Information Technologies: Problems, Achievements, and Prospects for Development*, 298–299 (2013) (in Ukrainian).
3. Boyun, V.P. and Sabelnikov, P.Yu.: Systemic Approach to Optimization of Input, Perception, and Processing of Signals and Images. *Proceedings of International Scientific Conference on Issues of Optimization of Calculations*, 43–44 (2013) (in Ukrainian).
4. Boyun, V.P.: Intelligent Real-Time Video Systems. Proceedings of International Conference Ukraine – Russia – Skolkovo: United Innovation Environment, 54 (2013) (in Russian).
5. Sabelnikov, P.Yu.: Alignment of Input and Processing of Images in Intelligent Videocamera. *Proceedings of International Scientific Conference on Modern Information Technologies: Problems, Achievements, and Prospects of Development*, 286–288 (2013) (in Ukrainian).
6. Sabelnikov, P.Yu.: Comparison of Contours of Objects with Partially Distorted Shape. *J. of Qafqaz University. Mathematics and Computer Science*, 34, 47–58 (2012) (in Russian).
7. Sabelnikov, P.Yu.: Computation and Use of Binary Image Moments for Geometric Comparison of Objects. *Artificial Intelligence* Scientific and Theoretical Journal, 3(61), 223–232 (2013) (in Russian).
8. Sabelnikov, P.Yu.: Computation and Use of Binary Image Moments for Geometric Comparison of Objects. *Proceedings of International Scientific Conference on Artificial Intelligence and Intelligence System*, 124–127 (2013) (in Russian).
9. Boyun, V.P.: Device for Determining the Location and Parameters of Image Objects. Patent of Ukraine no. 104347. *Bulletin*, 2 (2014) (in Ukrainian).
10. Boyun, V.P.: Sensor Device for Determining the Object Location and Center of Mass. Patent for Useful Model no. 81142. *Bulletin*, 12 (2013) (in Ukrainian).
11. Boyun, V.P.: Sensor Device for Determining the Object Location and Moment of Inertia. Patent for Useful Model no. 82936. *Bulletin*, 16 (2013) (in Ukrainian).
V.P. Boyun, P.Yu. Sabelnikov, Yu.A. Sabelnikov
V.M. Glushkov Institute of Cybernetics, NAS of Ukraine, Kyiv
ALGORITHMS FOR THE ANALYSIS OF TELEVISION AND THERMAL IMAGES IN VIDEO DEVICES AND SPECIAL-PURPOSE SYSTEMS
The results of the project «Development of Algorithms and Program Models for the Analysis of Television and Thermal Images» (code VK 200.16.13) are presented. Known methods and algorithms for processing television and thermal video images have been analyzed, and new ones have been proposed, which make it possible to create more effective video devices and video systems for special purposes.
Key words: image, thermal imager, filtration, contour, comparison of objects, real-time systems.
Received 06.06.14
CONSUMERS’ COMPLAINTS AND COMPLAINT HANDLING AS A CRUCIAL ASPECT OF GOOD MARKET FUNCTIONING
Evelina Spakovica\textsuperscript{1}, Dr.oec., Genadijs Moskvin\textsuperscript{2}, Dr. habil.sc.ing., Marks Moskvin\textsuperscript{3}, Mg.oec.
Abstract. Despite a generally high level of consumer protection guaranteed by the EU legislation, the problems encountered by consumers are still too often left unresolved. At the same time, the fact that consumers do complain when they experience problems is an important feedback mechanism for businesses, allowing businesses to improve their performance. Therefore, the paper presents the analysis of actual consumer behaviour in the EU and Latvia when a complaint is necessary to protect their rights as consumers, and the tendencies for complaint submission and appeal to public authorities or consumer organizations, or to a seller/provider/manufacturer. The aim of the paper is to analyse the tendencies of complaint submission, the behaviour of consumers when a complaint is necessary, and the importance of complaining for good market functioning. The study is based on the review of legislation, the documents of the European Commission, and literature on consumer rights’ protection and behaviour. In the study, the authors applied the descriptive method and secondary data analysis. Complaints and complaint handling are crucial aspects of good market functioning. If consumers do not complain when they experience a problem, redress is denied to them, and valuable feedback is lost by the business. A quarter of citizens do not complain when they have a problem. Therefore, both consumers and sellers/providers/manufacturers should be more active in solving the problems experienced: consumers should complain, while sellers/providers/manufacturers should improve the process of complaint handling.
Key words: consumers’ complaints, complaint handling, feedback, consumer behaviour.
JEL code: M390
Introduction
The role of consumers increases owing to the sophistication of retail markets. Confident, informed and empowered consumers are the motor of economic change, as their choices drive innovation and efficiency (Commission of the…, 2007). Despite the high level of consumer protection already achieved in the EU, it is still possible to improve fundamentally the situation for the EU consumers. While the technological means are increasingly in place, business and consumer behaviour lags far behind. According to 2011 figures, 17% of the EU consumers reported that they had encountered problems when buying something in their country (the same proportion as in 2010). In accordance with the Empowerment Report of 2011, the overall financial loss incurred by European consumers due to the problems they encountered was estimated at 4% of the GDP of the EU (European Commission, 2012\textsuperscript{c}).
Therefore, the Commission will work towards the following two specific objectives: 1) improving information and raising awareness of consumer rights and interests among both consumers and traders; 2) building knowledge and capacity for more effective consumer participation in the market (European Commission, 2012\textsuperscript{c}). If consumers are able to play their role in the market fully, making informed choices and rewarding efficient and innovative businesses, they contribute to stimulating competition and economic growth. On the other hand, markets where consumers are confused, misled, find it hard to switch or have little choice will be less competitive and generate more consumer detriment, at the expense of the efficiency of the overall economy. Therefore, it is important to identify which parts of the market are not working well for consumers (European Commission, 2012\textsuperscript{c}).
In this connection, the aim of the paper is to analyse the tendencies of complaint making, the behaviour of consumers when complaint is necessary, and importance of complaining for a good market functioning. In the framework of the research, the following tasks were undertaken: 1) to examine how often consumers encountered a problem with goods or services and their reaction to the experienced problem; 2) to analyse consumers’ propensity to complain as a whole, and find out where they are ready to complain; 3) to understand the reasons for not complaining; 4) to analyse the level of consumers’ confidence in their rights, trust, and satisfaction in the market; 5) to work out recommendations for better complaint handling taking into account the importance of complaint for a good market functioning.
The study is based on a review of legislation and of the literature on consumer rights protection and behaviour, as well as on statistical data available from the European Commission's Analytical Reports, the EU Consumer Conditions Scoreboard, and the Consumer Markets Scoreboard for 2009-2012. The authors applied the descriptive method and secondary data analysis.
Research results and discussion
Markets that respond more efficiently to consumer demand will perform better in competitiveness and innovation terms and will be more in tune with the lives
\textsuperscript{1} Evelina Spakovica. Latvia University of Agriculture. Tel. + 371 63021041, e-mail address: firstname.lastname@example.org
\textsuperscript{2} Genadijs Moskvin. Latvia University of Agriculture. Tel. + 371 63080687, e-mail address: email@example.com
\textsuperscript{3} Marks Moskvin. E-mail address: firstname.lastname@example.org
and goals of the EU citizens. The outcomes for consumers, in economic and non-economic terms, are the ultimate arbiter of whether markets are failing or succeeding relative to citizens' expectations. These final outcomes, however, are based on consumers' real experience in the market.
According to the survey of consumers' opinion (TNS Opinion & Social, 2011), more than one in five respondents (21%) in the EU 27 had encountered a problem with a commodity, a service, a retailer, or a provider in the past 12 months for which they had legitimate cause for complaint. In Latvia, 16% of respondents had encountered such a problem. In some cases, however, consumers may lack awareness of what a "legitimate cause for complaint" implies. This is evidenced by the fact that having encountered a problem with legitimate cause for complaint was reported most frequently by respondents aged 25-39 (26%), by those with the highest education levels, i.e. who had completed their education at age 20 or older (29%), and by managers (32%). The lowest incidence of such problems was reported by the oldest respondents, aged 55 and over (16%), the less educated who had left school at age 15 or younger (13%), retired persons (15%), and those who had never used a computer (10%). Consequently, the consumers who knew their rights better were also the ones more likely to report having encountered a problem (TNS Opinion & Social, 2011).
More than three-quarters of consumers who had experienced problems in the last 12 months took some form of action in response (77%), while 23% took no action at all. Those who took action (multiple answers were possible) were most likely to respond by making a complaint to the retailer or provider (65% of all those experiencing a problem). A comparatively smaller number of consumers had made a complaint to the manufacturer (13%) (Figure 1).
By contrast, consumers in Latvia were among the least likely to take any action (only 55% took some form of action), the second lowest percentage in the EU 27 of consumers ready to act in response to an experienced problem.
According to the data of the Consumer Conditions Scoreboard (European Commission, 2012\textsuperscript{a}), consumers who encountered a problem after buying something complained about it to the seller/provider/manufacturer in 80% of cases in the EU 27 and in 58% of cases in Latvia. This demonstrates a huge difference between the EU 27 and Latvia: consumers in Latvia are much more passive than those in the EU 27. Therefore, and especially in Latvia, it is important to encourage consumers to communicate their problems and to seek solutions, since complaining benefits not only consumers themselves but also the market as a whole.
The indicator of complaints captures the severity of a problem, given that it takes more time and effort to complain to an official body than to family or friends. Another study, based on the annual market monitoring survey (European Commission, 2012\textsuperscript{b}), found that 76% of consumers who had encountered a problem complained about it to the company, a complaint body, friends, or family. Consumers' propensity to complain has dropped considerably for both goods and services markets, compared with 2011 (81%) and 2010 (79%).
For all goods and services markets, by far the most likely party to be addressed is the seller of the product or the provider of the service, i.e. the immediate and known point of contact (approached by 60% of respondents who encountered a problem). Only 5% of those who had a problem addressed their complaint directly to a manufacturer. Complaints addressed to a third party such as a public authority or consumer organisation remained rare (7%) and were more likely to occur in services markets (9% as against 4% in goods markets). Finally, almost a third of consumers (31%) shared their problems with friends and family, confirming the importance of "word of mouth" in reporting bad experiences.
Having established that very few consumers who experienced problems had made a complaint to a public authority or a consumer organisation, the surveyors asked those who had not taken any action to explain their reasons. The most frequently cited reason (multiple answers were possible) for not making a complaint to a public authority or consumer organisation was that the person had already received a satisfactory result from the retailer/provider of the good/service (44%) (Figure 2). In Latvia, this percentage is much lower than in the EU 27: only 29% of consumers had received a satisfactory result from the retailer/provider. The retailers and providers in Latvia are thus not as loyal to consumers and their problems as those in the EU 27.
The next most common reason, mentioned by close to a quarter of respondents, was that the sums involved were too small (24%; in Latvia, 28%). Relatively few respondents refrained because they expected an unsatisfactory response or outcome: only 15% believed they were unlikely to get a satisfactory result, and a similar number replied that it would take too long or too much effort (13% each); overall, 19% of respondents mentioned that it would take either too long or too much effort. For Latvian consumers these reasons were even more important: 25% believed they were unlikely to get a satisfactory result, 24% answered that it would take too long, and 31% thought it would take too much effort. This indicates that consumers in Latvia are less confident in their rights than those in the EU 27 (Figure 2).
Indeed, Latvian consumers most often gave as their main reason for not complaining to a public authority or a consumer organisation the opinion that the process would take too much effort. The situation was similar for complaints to a seller/provider/manufacturer. The Consumer Conditions Scoreboard (European Commission, 2012\textsuperscript{a}) shows that even those consumers who felt they had a reason to complain to a seller/provider/manufacturer did not do so in 20% of cases in the EU 27 and in 42% of cases in Latvia (Figure 3).
The data in Figure 3 reveal a dramatic difference between the EU 27 and Latvia in how consumers react when they have reason to complain to a seller/provider/manufacturer. Consumer empowerment in Latvia thus appears poor: 42% of consumers did not complain despite having a reason to do so, the second highest rate in the EU. This confirms that consumers in Latvia are not confident and do not believe in a positive result of complaining.
The analysis of consumer confidence shows that interviewees who feel confident, and those who feel protected by consumer law, less often say that taking a complaint to a public authority or a consumer organisation would take too much effort than those who do not feel that way. Consequently, the lack of awareness undermines the ability of consumers to uphold their rights: not complaining because it would take too much effort stems from consumers' lack of confidence and poor knowledge of their rights.
The two most rarely cited reasons for not initiating a complaint procedure were not knowing whom to complain to (9% in the EU 27 and 10% in Latvia) and a lack of confidence in one's rights as a consumer (9% in the EU 27 and 11% in Latvia). The study therefore leads to the conclusion that some problems with not complaining to a public authority or a consumer organisation exist at the EU 27 level, whereas consumers in Latvia are less confident, and retailers there are less loyal to consumers and their problems, than in the EU 27.
According to the European Consumer Centre (ECC) data (European Consumer, 2012), year after year the pattern of complaints remains basically the same: more than half relate to purchases on the internet and 20% to distance selling. The sectors most concerned are transport (especially air transport), recreation and leisure, and hotels and restaurants (respectively 31.9%, 20.3%, and 11.7% of all complaints). The problems concern mainly the product/service itself, the delivery, the price and payment, and the contract terms (respectively 34.1%, 28.6%, 11.1%, and 10.2% of all complaints).
Sellers, providers and manufacturers have no interest in a situation where complaints are submitted directly to consumer organisations. The first reason is that a penalty could be imposed on them. The second is that they lose valuable feedback from consumers. They should instead build their own relationships with consumers for long-term collaboration, based on the satisfaction of needs and on trust.
The trust component measures the extent to which consumers feel confident that businesses comply with consumer protection rules (European Commission, 2012\textsuperscript{a}). Consumer trust is fundamental to well-functioning markets; as Kenneth Arrow observed, "virtually every commercial transaction has within itself an element of trust" (Kenneth A., 1972). Proper enforcement of consumer legislation is also of crucial importance to protect reputable businesses from unfair competition. Consumers' trust in suppliers' compliance with consumer protection rules has seen a slight but steady increase over the past three years (European Commission, 2012\textsuperscript{a}). In 2012, less than half of the EU 27 respondents (47%) expressed a high level of trust, while 13% were not confident of businesses' compliance with consumer protection rules. Trust is rated higher in Western and Northern European countries, while in Eastern European countries it is assessed below the EU 27 average.
According to the data of the Consumer Conditions Scoreboard (European Commission, 2012\textsuperscript{a}), more than six out of ten respondents in 2011 (the same proportions as in 2010) believed that public authorities protect their rights as consumers (62%) and that retailers respect these rights (65%).
The "satisfaction" component measures the extent to which different markets meet consumers' expectations. Nearly 60% of EU 27 consumers stated that, overall, the markets surveyed live up to their expectations (score 8-10) (Figure 4). The average score for this component (7.5) has been stable over the past three years.
Goods markets score better on this component (as on all other components) than services markets, with average scores of 7.8 and 7.3, respectively. Consumers in Eastern European countries are considerably less likely to think that markets "deliver" to the desired level, while consumers in Western Europe are more positive in this regard. These regional differences are most striking for the banking and insurance markets.
At the same time, the fact that consumers do complain when they experience problems is an important feedback mechanism for businesses, allowing them to improve their performance, and provides useful information for authorities, indicating where policy intervention might be necessary (Figure 6). Dissatisfied consumers can directly address the retailer and/or a third-party organisation dealing with consumer complaints: national authorities, consumer organisations, etc. Consumer organisations play an essential role in improving consumer information and knowledge and in identifying market problems; they could thus provide information to sellers, providers, or manufacturers about consumer legislation when a problem arises or advice is needed for better market functioning. Such help is essential for effective complaint handling and good market functioning, since previous research showed that problems also arise when sellers and providers are not informed about their obligations under consumer legislation. For example, the retailer Eurobarometer survey in 2011 found that only 26% of retailers knew the exact period during which consumers have the right to return a defective product (Spakovica E., Moskvin G., 2012).

Source: authors' construction based on the European Commission, 2012\textsuperscript{a}

Fig. 5. Consumers' satisfaction level with complaint handling and decision about further action in Latvia and the EU 27, %

Source: authors' construction based on Consumer Rights...1999; Procedures for..., 2006; Blackwell R. D., Miniard P.W., 2007.

Fig. 6. Importance of complaint as an aspect of good market functioning
Responsibility for product quality helps to guarantee that sellers, providers, and manufacturers will satisfy their clients; it strengthens trust between the parties and can increase repeat purchases (Blackwell R. D., Miniard P.W., 2007). Similarly, poor complaint handling by companies is both a source of harm to consumers and a missed opportunity to reinforce consumer loyalty. At the same time, in line with previous years, only around half of those consumers who complained to companies were satisfied with the result (European Commission, 2012\textsuperscript{a}) (Figure 5).
The data in the figure demonstrate that in 2011 the level of consumer satisfaction with complaint handling was low in the EU 27 (58% of consumers satisfied). In Latvia, this level was even lower, 50% in 2011, and it had decreased compared with 2010, meaning that the situation has not improved over these years. In addition, when not satisfied with how their complaint was handled, most Latvian consumers gave up and took no further action (73% of consumers, the highest share in the EU). In the EU, the percentage of consumers who were not ready to take further action was much lower, 45% (Figure 5).
When consumers complain first to the seller/provider/manufacturer rather than to consumer organisations, it is possible to react to the consumers' dissatisfaction with the quality of goods or services, to improve it, and to prevent the consumers from addressing their complaint to a consumer organisation (Figure 6).
Based on the survey analysis and the requirements of legislation, the authors conclude that sellers/providers/manufacturers should play a more active role in the process of complaint handling; for example, they should undertake the following steps when they receive a complaint:
1) analyse the situation, the requirements of legislation, and the causes of the consumer's dissatisfaction, and ask for advice from consumer organisations;
2) work out a complaint handling mechanism for a quick and effective reaction to the consumer's complaint;
3) analyse who was at fault for the poor-quality goods or services. If it was the manufacturer's or supplier's fault, the seller or provider should change its supplier. If it was the seller's or provider's own fault, they need to take the consumer's opinion into account, respect it, and be grateful for the feedback, since it makes it possible to detect problems with goods or services and to improve their problematic points.
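The three steps above amount to a simple triage procedure. As a purely illustrative sketch (the paper prescribes no implementation; the function name, fault categories, and outcome strings below are hypothetical, chosen only to mirror the text), they might be coded as:

```python
# Hypothetical sketch of the three-step complaint-handling routine;
# all names, categories, and outcomes are illustrative assumptions.

def handle_complaint(description: str, fault: str) -> str:
    """Triage a consumer complaint following the three recommended steps."""
    # Step 1: analyse the situation and the cause of dissatisfaction;
    # if the fault cannot be determined, seek outside advice.
    if fault not in ("manufacturer", "seller"):
        return "seek advice from a consumer organisation"
    # Step 2: react quickly with a remedy for the consumer in every case.
    remedy = "offer repair, replacement, or refund"
    # Step 3: assign fault and act on it.
    if fault == "manufacturer":
        # Manufacturer/supplier at fault: reconsider the supplier.
        return remedy + "; review or change the supplier"
    # Seller/provider at fault: treat the complaint as feedback.
    return remedy + "; record the feedback and fix the problematic point"

print(handle_complaint("kettle stopped heating", "manufacturer"))
```

In practice, the outcome of each branch would feed the complaint handling mechanism of step 2, so that the reaction to the consumer is quick regardless of where the fault is later assigned.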
The detection of problematic points, based on complaints and a complaint handling system, helps to increase consumers' level of satisfaction and provides stable communication and a stable relationship between a consumer and a seller, provider, or manufacturer. Moreover, it makes it possible to react more quickly to the challenges of global supply chains and to obtain timely information about emerging product safety risks, improving the quality and safety of goods and services and competitiveness in the market.
**Conclusions, proposals, recommendations**
1. More than one in five respondents (21%) in the EU 27 had encountered a problem with a good, a service, a retailer, or a provider. More than three-quarters of the consumers who had experienced problems in the last 12 months took some form of action in response (77%), while 23% took no action at all.
2. The seller of the product or the provider of the service is by far the most likely party to be addressed by the consumer. Complaints addressed to a third party such as a public authority or consumer organisation remain rare (7%). Consumer organisations play an essential role in improving consumer information and knowledge and in identifying market problems. Yet, Latvian consumers (31%) are more inclined to consider that complaining to a public authority or a consumer organisation would take too much effort, and Latvia is the only country in the EU 27 where this is the main stated reason for not complaining to such a body.
3. Consumers in Latvia are much more passive than those in the EU 27. One reason is that consumers in Latvia are not confident and do not believe in a positive result of complaining. The second reason is that retailers and providers in Latvia are not as loyal to consumers and their problems as those in the EU: only 29% of consumers received a satisfactory result from the retailer/provider. In addition, when not satisfied with how their complaint was handled, most consumers give up and take no further action (73% of consumers, the highest share in the EU). In the EU, the percentage of consumers not ready to take further action is much lower, 45%.
4. The lack of awareness undermines the ability of consumers to uphold their rights: not complaining because it would take too much effort is influenced by the consumer's lack of confidence and poor knowledge of consumers' rights.
5. Sellers/providers/manufacturers have no interest in a situation where complaints are submitted directly to consumer organisations. The first reason is that a penalty could be imposed on them. The second reason is that they lose valuable feedback from consumers, which prevents them from improving their performance. Similarly, poor complaint handling by companies is both a source of harm to consumers and a missed opportunity to reinforce consumer loyalty. At the same time, in line with previous years, only around half of those consumers who complained to companies were satisfied with the result.
6. On the basis of the survey analysis and the requirements of legislation, it was concluded that sellers/providers/manufacturers should play a more active role in the process of complaint handling and create relationships with consumers for long-term collaboration, based on the satisfaction of needs and on trust. The detection of problematic points, based on complaints and a complaint handling system, makes it possible to react more quickly to the challenges of global supply chains and to obtain timely information about emerging product safety risks, improving the quality and safety of goods and services and competitiveness in the market. It is also important to encourage consumers to communicate their problems and seek solutions, since their activity benefits not only consumers themselves but also the market as a whole.
Bibliography
1. Commission of the European Communities (2007). *EU Consumer Policy Strategy 2007-2013*. Brussels, p.13.
2. *Consumer Rights Protection Law*, 01.04.1999. Retrieved: http://www.ptac.gov.lv/page/271. Access: 26.12.2012
3. European Commission (2012\(^a\)). *Consumer Conditions Scoreboard. Consumers at Home in the Single Market*. 7th edition – May 2012, Brussels, p.126.
4. European Commission (2012\(^b\)). *Consumer Markets Scoreboard. Making Markets Work for Consumers*. 8th edition – December 2012, Brussels, p.125.
5. European Commission (2012\(^c\)). *A European Consumer Agenda - Boosting Confidence and Growth*. Brussels, 22.5.2012, p.16.
6. Kenneth A. (1972) *Gifts and Exchanges*. Philosophy and Public Affairs, 1, p. 357.
7. *Procedures for the Submission and Examination of Consumer Claims Regarding the Non-conformity of Goods or Services with Contract Provisions*. Republic of Latvia, Cabinet Regulation No. 631, Adopted on 1 August 2006. Access: 26.12.2012.
8. Spakovica E., Moskvin G. (2012). Protection of Consumer's Rights in Cross-Border Shopping by Distance. Economics Science for Rural Development: *Proceedings of the International Scientific Conference*, No. 27. Jelgava: LLU, pp. 234-240.
9. The European Consumer Centres’ Network (2012). Get Help and Advice on Your Purchase Abroad, 2011 Annual Report, Luxembourg: Publications Office of the European Union, p.46.
10. TNS Opinion & Social (2011). *Special Eurobarometer 342. Consumer Empowerment*. Report, April 2011, Brussels, p.228.
11. Blackwell, R.D., Miniard, P.W., Engel J.F. (2007). Theory of Buyer Behaviour (in Russian). 10-e Izdanije, Sankt-Peterburg: Piter press, s.93-95.
WHEREAS:
1. The Central Valley Regional Water Quality Control Board (Regional Board) issued Cleanup and Abatement Order No. 91-720 on April 30, 1991 for three surface impoundments at the Southern Pacific Transportation Company (Southern Pacific) Tracy Yard. The Order requires Southern Pacific to begin closure by March 1, 1992 and to complete closure of the surface impoundments by October 15, 1993.
2. The presence of diesel oil in the vadose zone caused by leakage from the surface impoundments poses an avoidable, continuing threat to ground water, especially during the rainy season when the threat of contaminated, exposed soils leaching additional pollutants to ground water is the greatest.
3. Closing the surface impoundments under the TPCA by October 15, 1992 would avoid exposing residual waste constituents to unnecessary leaching during the 1992-1993 rainy season.
4. This Order amends Cleanup and Abatement Order No. 91-720 (attached) and reaffirms all of the Findings in that Order.
5. Order No. 91-720 is in full force and effect, except as modified by this Order.
6. If Southern Pacific completes the remaining work in accordance with the tasks prescribed in this order, sufficient work will have been completed to fulfill the requirements of the TPCA.
7. Issuance of this Order is exempt from the provisions of the California Environmental Quality Act (Public Resources Code, Section 21000, et seq.) in accordance with Section 15321 (a)(2), Title 14, California Code of Regulations.
The California Regional Water Quality Control Board, Central Valley Region, (hereafter Board) finds that:
1. On 9 July 1990, the Executive Officer issued Cleanup and Abatement Order No. 90-715 (hereafter Order No. 90-715) to the Southern Pacific Transportation Company (hereafter Discharger) due to soil and ground water pollution of organic and inorganic waste constituents.
2. This Order amends and incorporates all the findings in Order No. 90-715.
3. Order No. 90-715 remains in full force and effect except as modified by this amending Order.
4. Order No. 90-715 required the Discharger to submit a Hydrogeological Assessment Report (HAR) completion work plan, submit a completed HAR, submit a feasibility study on remediation of contaminated soils, submit and implement a final closure plan, close the TPCA impoundment, submit and implement a ground water cleanup plan, submit and implement a drainage control plan, and submit a spill prevention and countermeasures control plan.
5. The Discharger submitted a HAR completion report on 3 December 1990; a feasibility study on 4 January 1991; and a Drainage Control and Spill Prevention Control/Countermeasures Plan, plus an Expanded Yard Investigation Workplan for the site, on 1 November 1990. Some additional information, including data interpretation, is necessary to complete the HAR. The feasibility study contained numerous deficiencies and was not approved. The Drainage Control and Spill Prevention Control/Countermeasures Plan and the Expanded Yard Investigation Workplan were acceptable and approved by staff.
6. The Discharger did not submit a ground water treatment plan by 1 February 1991. However, an extension request was submitted by the Discharger (letter dated 2 February 1991). The Discharger did not submit and implement a closure plan by 1 February 1991 and 15 April 1991, respectively, as required by Cleanup and Abatement Order No. 90-715.
7. The Discharger has proposed to eliminate the potential for polluting storm water runoff by cleaning up the surface soil contamination throughout the yard. Details were provided in their 9 November 1990 Drainage Control and Spill Prevention Control/Countermeasures Plan.
8. To assure compliance with the Toxic Pits Cleanup Act and provide for cleanup of polluted portions of the site due to past operational practices, it is necessary to amend Order No. 90-715.
9. Issuance of this Order is exempt from the provisions of the California Environmental Quality Act (Public Resources Code, Section 21000, et seq.), in accordance with Section 15321(a)(2), Title 14, California Code of Regulations.
10. Any person affected adversely by this action of the Board may petition the State Water Resources Control Board (State Board) to review the action. The petition must be received by the State Board within 30 days of the date on which this Order was adopted by this Board. Copies of the law and regulations applicable to filing petitions will be provided upon request.
IT IS HEREBY ORDERED that, pursuant to Sections 25208.4, 25208.6 and 25208.8 of the Health and Safety Code and Sections 13267 and 13304 of the California Water Code, the Discharger shall:
1. COMPLETE ALL TPCA RELATED TASKS:
a. Submit all remaining information and data interpretation (pursuant to staff’s memorandum dated 18 March 1991) necessary to complete the HAR per Section 25208.8 of the Health and Safety Code by 15 July 1991.
b. Submit a Feasibility Study by 1 September 1991 that considers different soil remediation and/or disposal alternatives, volumes, and cost estimates for various cleanup levels and chooses an appropriate alternative. The chosen alternative must provide adequate protection for waters of the state.
c. Submit to the Board an approvable Closure Plan by 1 December 1991. An approvable closure plan is one which explains how the approved remedial alternative (proposed in the Feasibility Study) will be implemented for the TPCA surface impoundment. The Closure Plan shall also describe specific methods for implementing the plan and include a time schedule for performance of all required closure tasks. The plan shall be in accordance with Title 22, Division 4, Chapter 30; and applicable sections of Title 23, Division 3, Chapter 15 of the California Code of Regulations.
d. Implement closure of the TPCA surface impoundment and surrounding contaminated soils by 1 March 1992, in accordance with the approved Closure Plan. Implement means to begin excavation and/or treatment of contaminated soils.
e. Complete TPCA closure activities in accordance with the time schedule contained in the approved Closure Plan, but no later than 15 October 1993.
ORDER NO. 91-720 AMENDING CLEANUP AND ABATEMENT
ORDER NO. 90-715 FOR SOUTHERN PACIFIC TRANSPORTATION
COMPANY, TRACY, SAN JOAQUIN COUNTY
f. Submit to the Board by 15 July 1991, a plan for removal of any measurable free petroleum hydrocarbons from the ground water table beneath the site (both TPCA and non-TPCA ground water contamination).
g. By 1 September 1991, begin removal of free petroleum hydrocarbons from the ground water table per the approved 15 July 1991 plan.
h. Submit to the Board by 15 November 1991, a Ground Water Feasibility Study which specifically addresses the ground water contaminated by past disposal practices related to the TPCA surface impoundment. The study shall describe specific methods for treatment and/or disposal of contaminated ground water.
i. Submit to the Board, by 15 March 1992, an approvable Ground Water Cleanup (Implementation) Plan. The Ground Water Cleanup Plan shall explain how the approved remedial alternative (proposed in the Ground Water Feasibility Study) shall be implemented. It shall also contain a time schedule for implementation and completion of the ground water cleanup.
j. Implement the Ground Water Cleanup Plan in accordance with the time schedule contained in the plan, but no later than 15 August 1992. Implement means to begin treatment and/or disposal of contaminated ground water.
2. COMPLETE REMAINING SITE CLEANUP TASKS (NON-TPCA):
a. Submit an Interim Drainage Control Plan by 1 July 1991 which includes recommended measures to assure that contaminated surface waters will not leave the Discharger's property. The plan shall also include a surface water monitoring program to monitor the effectiveness of the interim drainage control measures. The interim drainage control measures shall be in place by 1 September 1991.
b. Install long term spill prevention and drainage control structures at the yard and remove the temporary drainage control structure per the approved 9 November 1990 Drainage Control and Spill Prevention Control/Countermeasures Plan (following decommissioning of the Railroad Yard), but no later than 15 October 1992.
c. Submit a Site Soil and Ground Water Characterization Report by 1 September 1991. The report shall define the lateral and vertical extent of the soil and ground water contamination caused by past operations, which were not covered under the TPCA investigations.
d. Submit a Site Feasibility Study for soil and ground water contamination at the yard by 1 January 1992. The report shall consider different remediation and/or disposal options, estimated volumes, and cost estimates for the soil and ground water that have been impacted by past site operations and choose an appropriate alternative (for areas not associated with the TPCA surface impoundment).
e. Submit a Site Soil Cleanup (Implementation) Plan for cleanup of soil contamination at the yard by 1 May 1992. The plan shall explain how the approved remedial alternative for soil (proposed in the Site Feasibility Study) shall be implemented. The plan shall include a time schedule for implementation and completion of the cleanup activities and final decommissioning of the yard.
f. Submit a Site Ground Water Cleanup (Implementation) Plan for cleanup of ground water contamination at the yard by 1 August 1992. The plan shall explain how the approved remedial alternative for ground water (proposed in the approved Site Feasibility Study) will be implemented and shall also include a time schedule.
g. Implement ground water cleanup in accordance with the time schedule contained in the approved Site Ground Water Cleanup Plan, but no later than 1 December 1992. Implement means to begin treatment and/or disposal of contaminated ground water.
h. Implement the Site Soil Cleanup Plan in accordance with the approved time schedule contained in the plan, but no later than 1 September 1992.
i. Complete soil cleanup in accordance with the time schedule contained in the Site Soil Cleanup Plan, but no later than 15 October 1994.
WILLIAM H. CROOKS, Executive Officer
DATED: 30 April 1991
JEM: jm
8. To assure compliance with the Toxic Pits Cleanup Act and provide for cleanup of polluted portions of the site due to past operational practices, it is necessary to amend Order No. 90-715.
9. Issuance of this Order is exempt from the provisions of the California Environmental Quality Act (Public Resources Code, Section 21000, et seq.), in accordance with Section 15321(a)(2), Title 14, California Code of Regulations.
10. Any person affected adversely by this action of the Board may petition the State Water Resources Control Board (State Board) to review the action. The petition must be received by the State Board within 30 days of the date on which this Order was adopted by this Board. Copies of the law and regulations applicable to filing petitions will be provided upon request.
IT IS HEREBY ORDERED that, pursuant to Sections 25208.4, 25208.6 and 25208.8 of the Health and Safety Code and Sections 13267 and 13304 of the California Water Code, the Discharger shall:
1. COMPLETE ALL TPCA RELATED TASKS:
a. Submit all remaining information and data interpretation (pursuant to staff’s memorandum dated 18 March 1991) necessary to complete the HAR per Section 25208.8 of the Health and Safety Code by 15 July 1991.
b. Submit a Feasibility Study by 1 September 1991 that considers different soil remediation and/or disposal alternatives, volumes, and cost estimates for various cleanup levels and chooses an appropriate alternative. The chosen alternative must provide adequate protection for waters of the state.
c. Submit to the Board an approvable Closure Plan by 1 December 1991. An approvable closure plan is one which explains how the approved remedial alternative (proposed in the Feasibility Study) will be implemented for the TPCA surface impoundment. The Closure Plan shall also describe specific methods for implementing the plan and include a time schedule for performance of all required closure tasks. The plan shall be in accordance with Title 22, Division 4, Chapter 30; and applicable sections of Title 23, Division 3, Chapter 15 of the California Code of Regulations.
d. Implement closure of the TPCA surface impoundment and surrounding contaminated soils by 1 March 1992, in accordance with the approved Closure Plan. Implement means to begin excavation and/or treatment of contaminated soils.
e. Complete TPCA closure activities in accordance with the time schedule contained in the approved Closure Plan, but no later than 15 October 1993.
THEREFORE BE IT RESOLVED THAT THE STATE BOARD:
1. Amend the attached Regional Board Order No. 91-720, pursuant to California Water Code Section 13304, to require closure of the surface impoundments by October 15, 1992.
2. Direct the Regional Board Executive Officer to file a complaint against Southern Pacific Transportation Company for administrative civil liabilities if the revised October 15, 1992 closure deadline, or HAR completion date specified in Regional Board Order No. 91-720, is violated.
CERTIFICATION
The undersigned, Administrative Assistant to the Board, does hereby certify that the foregoing is a full, true, and correct copy of a resolution duly and regularly adopted at a meeting of the State Water Resources Control Board held on July 18, 1991.
Maureen Marché
Administrative Assistant to the Board
The Role of State Wildlife Professionals Under the Public Trust Doctrine
CHRISTIAN A. SMITH,1,2 Montana Fish, Wildlife and Parks, 1420 E. 6th Avenue, Helena, MT 59620, USA
ABSTRACT The Public Trust Doctrine (PTD) is considered the cornerstone of the North American Model of Wildlife Conservation. Effective application of the PTD requires a clear understanding of the doctrine and appropriate behavior by trustees, trust managers, and beneficiaries. Most PTD literature refers generically to the role of the government as the people’s trustee, without addressing the differences between the legislative, executive, and judicial branches of government in the United States or recognizing the distinction between elected and appointed officials and career civil servants. Elected and appointed officials, especially in the legislative branch, have policy-level decision-making authority that makes them trustees of the people’s wildlife under the PTD. In contrast, career professionals working for state wildlife agencies (SWAs) have ministerial duties as trust managers. The differences between the roles of trustees and trust managers are important. By focusing on their role as trust managers, while supporting and respecting the role of elected and appointed officials as trustees, SWA professionals can more effectively advance application of the PTD.
© 2011 The Wildlife Society.
KEY WORDS North American model, professional, public trust doctrine, state wildlife agency.
The Public Trust Doctrine (PTD) is considered the cornerstone of the North American Model of Wildlife Conservation (Geist et al. 2001, Batcheller et al. 2010, Organ et al. 2010). In essence, the PTD holds that certain natural resources, such as water, fish, and wildlife, are held in trust by the government for the benefit of the people. The origin of the PTD was traced to the Institutes of Justinian (A.D. 529) and the concept of the sovereign as the trustee of the people’s interest in wildlife has been well established in English common law since the date of the Magna Carta (Batcheller et al. 2010).
The U.S. Supreme Court ruled in Martin v. Waddell, 41 U.S. 367 (1842) that the trust responsibility under the PTD passed from the English crown or parliament to the states upon secession of the colonies in 1776. Although the federal government has subsequently assumed primary trust responsibility for migratory birds, marine mammals, and endangered species (Bean and Rowland 1997), the majority of the PTD responsibility remains with state governments (Batcheller et al. 2010).
Mahoney (2006) questioned the degree to which state wildlife agencies (SWAs) are fulfilling the role of trustee under the PTD and emphasized the need for more effective application of the PTD. Batcheller et al. (2010) and Jacobson et al. (2010) also stressed the need for a more effective model of trustee-based governance. Jacobson and Decker (2008) argued that effective trust-based governance depends on the trustees recognizing and fulfilling their role under the PTD. Because state governments retain the majority of responsibility under the PTD, it is critical that wildlife professionals working for SWAs fully understand both the PTD and their role and responsibilities under this fundamental doctrine.
Most PTD literature refers to government generically as the public’s trustee, without differentiating between the legislative, executive, and judicial branches established by the United States’ and individual states’ constitutions (Horner 2000, Mahoney 2006, Batcheller et al. 2010, Jacobson et al. 2010). This broad reference to government overlooks the important distinctions between the roles and responsibilities of the three branches of government. It also ignores the difference between elected and appointed officials within the executive branch and career civil servants, including most SWA professionals.
I argue that legislators and the commissioners to whom legislators have delegated specific authorities are the primary trustees of the public’s wildlife. Governors and appointed agency directors in the executive branch also serve to some degree as trustees. In contrast, SWA professionals are trust managers. I describe the non-trivial differences between trustees and trust managers and explain why the PTD would be more consistently followed if SWA professionals focused on their role as trust managers while supporting and respecting the role of elected and appointed officials as trustees.
The judicial branch also plays a critical role with respect to the PTD. Not only was the court the origin of the PTD in American law (Bean and Rowland 1997), the judiciary is the people’s source of redress if the legislative or executive branches of government fail to perform their duties under the PTD (Sax 1970, Horner 2000). However, a thorough treatment of the complex legal issues and role of the courts
regarding the PTD is beyond the scope of this paper. Readers interested in more detail on these topics should refer to Sax (1970), Bean and Rowland (1997), Horner (2000), Wood (2009), and Batcheller et al. (2010) as well as the references and case law cited by those authors.
**WHO ARE THE TRUSTEES OF THE PUBLIC'S WILDLIFE?**
Analysis of the structure of governance in the United States supports the proposition that state legislators, and the citizen commissioners to whom the legislatures have delegated limited rule-making authority, are the primary trustees under the PTD. The U.S. Supreme Court, ruling in the seminal case, *Illinois Central R.R. v Illinois*, 146 U.S. 387 (1892), clearly viewed the legislature as the public's trustee under the PTD. Justice Field wrote in the case that, "Every legislature must, at the time of its existence, exercise the power of the State in the execution of the trust devolved upon it" (146 U.S. at 460).
The role of the legislature as the people's trustee is further reinforced by consideration of one of the basic tenets of trust law. A trustee must either possess or have effective ownership control of the corpus of the trust to make decisions regarding management of the trust and distribution of proceeds from the trust in the interest of the beneficiaries.
Under the PTD, ownership of wildlife is generally construed as being collectively vested in the public at large, until an animal is reduced to the possession of an individual through taking by means authorized by law (Bean and Rowland 1997). Through adoption of state constitutions, the citizens of each state have granted the power to enact the laws that govern the taking of wildlife to the legislature. Thus to the extent the people have empowered any branch of government to exercise control over their collective ownership of wildlife, they have done so to their elected representatives in the legislature, not to the executive branch or judiciary.
In most states, the legislature has created a citizen commission charged with oversight of the SWA and has delegated to the commission limited authorities such as regulating methods of take and allocation of wildlife harvest. The primary purpose of establishing commissions during the first half of the last century was to insulate decisions affecting the public's wildlife from the vagaries of partisan politics in the legislative arena or Governor's office (Management Assistance Team 2007). Horner (2000) argued this insulation from political influence is essential to effective application of the PTD. Nevertheless, commissions derive their existence and power from statutes adopted by the legislature. Individuals nominated to serve on the commission are generally subject to confirmation by the senate and commission decisions can be overturned by a simple majority of the legislature. Thus, while commissions serve as trustees, they do so subject to the oversight and will of the legislature.
Governors and their appointed agency directors within the executive branch also serve to some degree as trustees through participation in the legislative process, nomination of commissioners, and setting policy for wildlife use and conservation. However, most of the policy-level decisions made by these officials fall within constraints set by the legislature.
Jacobson et al. (2010) suggested that "...trustees should be qualified, competent, impartial, and assiduous to the interests of all trust beneficiaries. There should be a mechanism for their replacement if they prove deficient in any of these requirements, and the Trust beneficiaries should have the capacity to initiate the removal of a trustee following due process, along with a voice in the selection of new trustees." While legislators and governors demonstrate varying degrees of qualification, competence, impartiality, and assiduousness with respect to their duty under the PTD, the people of the state (the beneficiaries of the trust) select and remove legislators and governors (trustees) through the elective process on a regular basis. These officials, in turn, regularly replace commissioners and directors. The same cannot be said of SWA professionals.
SWA professionals are civil servants whose jobs are typically protected by employment law, subject to removal only for cause. SWA professionals operate within statutory limits set by the legislature, budget constraints set by the legislature (or, in a few states, the commission), and policy direction from the governor, commission, and director. SWA professionals are trust managers, not trustees.
**THE DIFFERENCE BETWEEN A TRUSTEE AND A TRUST MANAGER**
The differences between trustees and trust managers are neither trivial, nor merely semantic. Under trust law, a trustee is charged with a fiduciary duty to preserve the assets of the trust in the long-term best interest of the beneficiaries. To fulfill that duty, the trustee must be aware of the current value of the trust as well as the potential to increase the value of the trust through prudent management. The trustee must weigh the risks associated with alternative management strategies against the potential for returns that would increase the corpus or dividends of the trust. The trustee must weigh the immediate needs and desires of the beneficiaries against the duty to sustain the trust and resolve any competing demands among the beneficiaries. In consideration of all the foregoing, the trustee must determine the amount of the corpus or earnings of the trust that should be distributed to the beneficiaries, as well as the allocation of benefits among beneficiaries. Fulfilling this fiduciary responsibility requires consideration of complex trade-offs and making policy-level decisions. Beneficiaries can initiate legal action to hold a trustee directly accountable if the trustee fails to fulfill these fiduciary responsibilities.
In contrast, the role of a trust manager is to monitor and manage the corpus of the trust to attain the goals set by the trustees, report on the status of the trust to the trustees and the beneficiaries, and distribute the proceeds consistent with the direction of the trustees. These responsibilities require knowledge and expertise in management of the trust assets, but are predominantly executive or ministerial functions.
Trust managers are accountable directly to the trustees, but not to the beneficiaries.
In the context of wildlife conservation, policy-level decisions regarding the state’s wildlife are predominantly made by elected and appointed officials. Decisions regarding what species can be taken by anglers or hunters; what programs SWAs are authorized to implement to create, increase, or sustain harvestable surpluses (e.g., setting aside habitat preserves; predator control); and the allocation of harvest among beneficiaries are decisions typically made in statute or rule by the legislature or commission. Governors and agency directors also make policy-level decisions related to programs, budgets, and management goals, though most of these officials’ decisions fall within limits set by the legislature or commission. All elected and appointed officials are directly accountable to the public through both the ballot box and the courts, where citizen suits typically name elected or appointed officials as the defendant.
The day-to-day management of the public’s wildlife, including such activities as survey and inventory of populations, habitat management, law enforcement, harvest monitoring, etc. is conducted by SWA professionals. Although SWA professionals are, rightly, charged with developing management options, defining trade-offs, and making recommendations with respect to higher-level policy decisions, the authority for those decisions remains with elected and appointed officials in almost every case. Citizens who are not satisfied with the performance of SWA professionals typically seek redress through their elected and appointed officials rather than filing suit against the professionals. Indeed, in most states, SWA professionals are protected from civil liability in the performance of their duties.
**THE IMPORTANCE OF PARADIGM**
Recognizing the distinction between the roles and responsibilities of elected and appointed officials versus SWA professionals is essential to advancing application of the PTD. The authors of the United States’ constitution thoughtfully crafted a system of governance that separated and balanced the power and authority for policy-making, executive functions, and judicial oversight of government. That constitution, upon which state constitutions are modeled, vests the majority of policy-making authority in the legislative branch, where all members are elected by the citizens. In the executive branch, the power to veto a legislative decision is restricted to the President or Governor, who are also elected, and any veto is subject to potential override by the legislature. This is a deliberate construct to ensure that all the individuals setting major policy are directly accountable to those affected by those policies.
Legislators and governors are keenly aware of the fact that they serve at the pleasure of the electorate. These elected officials and the individuals they appoint must take into consideration a broad range of factors during their deliberations. They must weigh all the biological, social, economic, and political implications of their decisions. These officials also face the direct consequences of making a policy level decision, for better or worse.
In my experience, elected and appointed officials take offense when SWA professionals expect rubber-stamp approval of their recommendations. This is particularly true when professionals’ recommendations are influenced by the wildlife-related values of SWA professionals, which have been shown to differ significantly from those of the general public to whom elected and appointed officials are directly accountable (Gill 1996, Gigliotti and Harmoning 2003, Teel and Manfredo 2009). The resulting tension between these officials and SWA professionals undermines the working relationship that SWA professionals must maintain if they expect to influence higher-level decision makers.
Finally, if SWA professionals view themselves as trustees of the public’s wildlife, given the basic tenet of trust law cited above, SWA professionals must also perceive they have ownership control over wildlife. That perception can lead to resistance to public participation and a broader range of stakeholder involvement in decision-making which runs counter to the reforms promoted by Jacobson et al. (2010).
**HOW CAN SWA PROFESSIONALS ENHANCE TRUST-BASED GOVERNANCE?**
Jacobson and Decker (2008), Batcheller et al. (2010), and Decker et al. (2010) argued that wildlife conservation would be more effective if the PTD were more widely understood by the public and explicitly articulated in constitutional or statutory law, rather than existing largely as common or judge-made law. These authors suggested that SWA professionals inform elected officials and the public about the PTD and participate in the process of embedding the PTD in the body of their state’s legal foundation. I agree, and would argue that no group is better suited to this task than SWA professionals. Surveys demonstrate that the public places a high degree of trust in SWA professionals (Duda et al. 2010). SWA professionals should study the PTD and leverage their credibility with the public to communicate the importance of the PTD to conservation at every opportunity.
SWA professionals should work with both elected and appointed officials and the general public to draft statutes or constitutional language, and should participate, both as professionals and as citizens themselves, in efforts to codify the PTD. SWA professionals can minimize the perception that they are attempting to enhance their own power at the expense of elected and appointed officials or the public in this process by emphasizing and respecting the difference between the roles of trustees, beneficiaries, and trust managers.
Jacobson and Decker (2008) argued that in some states, elected and appointed officials fail to fulfill their responsibility as trustees for all of the public. These authors and others have suggested that changes to the wildlife governance structure, such as how commissioners are selected, may be necessary to improve application of the PTD. Restructuring of governance may be beneficial or even necessary, but in the interim, SWA professionals can increase the extent to which elected and appointed officials perform as trustees by redoubling efforts to inform the public about the PTD. A well-informed public will hold elected and appointed
officials accountable as their trustees much more effectively than SWA professionals can. A well-informed public would also be essential to drive the larger political processes required to restructure governance.
SWA professionals may be tempted to assume the role of trustees in states where elected and appointed officials are derelict in their duties. However, doing so is counterproductive for at least two reasons. First, by attempting to perform as trustees, rather than taking steps to assure elected and appointed officials are doing so, SWA professionals will enable continued failure of the trustees to fulfill their duty under the PTD. Second, if SWA professionals assert themselves as the people’s trustees, they will very likely alienate elected and appointed officials.
If elected and appointed officials are not fulfilling their role as trustees of the people’s wildlife, SWA professionals should inform these officials, respectfully, of their duties and the consequences of violating their fiduciary responsibility. This can be a challenging task, given the degree of control elected and appointed officials have over resources available to SWA professionals and the underlying responsibilities of SWA professionals to execute policies set by elected and appointed officials. In addition, the deliberate tension created by the separation of powers between the legislative branch and the elected and appointed officials in the executive branch has the potential to place SWA professionals in the middle of political contests, with conflicting direction coming from the legislature, commission, governor, and director.
SWA professionals will be most effective operating in this complex political environment if they have earned and retain the respect of elected and appointed officials. Essential individual and organizational behavior to gain and keep the respect of elected and appointed officials includes being open and honest in all interactions, respecting the roles and authorities of these officials, and communicating directly with them first about disagreements or areas of potential violation of the PTD. Nothing will undermine an SWA professional’s effectiveness with elected and appointed officials more than surreptitiously communicating with special interest groups or the public to incite opposition. Avoiding passive or insubordinate behavior such as resisting implementation of decisions by elected and appointed officials with which SWA professionals may personally disagree, but that do not violate the PTD, is also an essential element of a respectful working relationship. In my experience, by focusing on providing thorough, objective decision support (i.e., information, analysis, and advice), while recognizing and respecting the role of elected and appointed officials to establish policy, SWA professionals can have greater influence on decisions over the long term.
Obviously, if the legislature proceeds with a decision or action that appears to violate the PTD, SWA professionals have an obligation, as part of the executive branch, to support intervention by the judiciary, consistent with the constitutional role of the courts in resolving disputes between the legislative and executive branches. SWA professionals can minimize the impact of involving the court on their future relationship with legislators by being up front about the dispute and ensuring that the case is tried in the court, not the media, and that arguments are based on facts and the law, not personal or political opinions or motivations.
SWA professionals can also enhance application of the PTD by helping trustees understand the needs and desires of the beneficiaries. People often express their needs or desires to elected and appointed officials in the form of public comments on proposed legislation or rules. While certainly valuable to the political process, and highly valued by the public, this type of information can easily be influenced by activist campaigns or lobbyists and may only reflect the views of those interests that strongly favor or oppose a given proposal (Peterson and Messmer 2010).
SWA professionals can ensure that elected and appointed officials have more complete and balanced information regarding the public’s values, needs, or desires. The increasing number and role of human dimension specialists within SWAs speaks to the recognition that wildlife conservation and management is about much more than biology. Fully understanding the social, economic, and political aspects of management decisions is equally important to informed decision-making under the PTD. SWAs should apply as much, and as rigorous, science to these areas as they do to the biological aspects of wildlife conservation. As recommended by Decker et al. (1991), in the process of generating options, defining trade-offs, or facilitating consensus building, SWA professionals must also be vigilant to avoid either filtering information or attempting to influence the outcome with their own values.
In addition to assessing the public’s needs and desires, SWA professionals are ideally positioned to identify and resolve competing or conflicting demands among the beneficiaries. Most wildlife management decisions involve trade-offs, with different parties bearing costs or reaping benefits. If the competing or conflicting interests are not provided a way to seek constructive resolution of their differences prior to the time when a legislative body or commission is faced with making a decision, these trustees will be forced to make a decision with a higher probability of creating winners and losers. Interests that lose in one round of decision-making may attempt to obstruct implementation of the decision through whatever means are available (e.g., legal challenges) or seek to reverse the decision at the next opportunity (e.g., election cycle, commission meeting, ballot initiatives, etc.). This results in wasted time and resources and potentially flip-flop management approaches. It is often more productive to invest the time in seeking broad-based, collaborative recommendations that lead to politically stable policy decisions through consensus building ahead of decision-making (Jacobson and Decker 2008). Many SWAs have recognized the value of this approach and are hiring or training professionals within their ranks with the necessary skills to facilitate conflict resolution and participatory democracy.
**CONCLUSION**
More effective application of trust-based governance is critical to the future of wildlife conservation and management. The PTD will be more consistently applied when the parties
involved in trust-based governance understand their roles and fulfill their respective responsibilities. At the state level, legislators and commissioners are the primary trustees, responsible for most policy-level decisions regarding the corpus of the trust and allocation of benefits. Governors and their appointed agency directors also have responsibilities as trustees. SWA professionals are trust managers, responsible for executive functions, consistent with the direction set by the trustees.
SWA professionals can best advance application of the PTD by informing elected and appointed officials about their roles and responsibilities as trustees, informing the public about their rights and responsibilities as beneficiaries under the PTD, and working to embed the PTD in codified law. If the public fully understands the PTD, the citizens of a state will be more effective in holding elected and appointed officials accountable than SWA professionals can be. SWA professionals must not attempt to fill the role of trustees to avoid enabling dereliction of duty by elected and appointed officials and to maintain effective working relationships with these officials. SWA professionals will be more effective and have greater influence on decisions of the trustees if they respect the respective roles of all parties under the PTD.
Fulfilling their role under the PTD requires SWA professionals to continue to excel at traditional activities, such as population and harvest monitoring, biological and human dimensions research, and law enforcement, as well as expanding and enhancing efforts related to communication, education, and public engagement in decision making. It also requires SWA professionals to inform and support the decisions of policy makers and the general public while holding their own values in check. This is a high standard to achieve, but no one ever said being a professional was easy.
ACKNOWLEDGMENTS
I am indebted to all the elected and appointed officials and agency colleagues with whom I worked over the past 35 years as an SWA professional for helping shape the ideas reflected in this paper. I especially thank D. Decker, C. Jacobson, M. Nie, J. Organ, D. Pletscher, J. Satterfield, M. Williams, and 2 anonymous reviewers for comments on earlier drafts of this manuscript that helped sharpen the focus of my arguments.
LITERATURE CITED
Batcheller, G. R., M. C. Bambery, L. Bies, T. Decker, S. Dyke, D. Guynn, M. McEnroe, M. O’Brien, J. F. Organ, S. J. Riley, and G. Roehm. 2010. The public trust doctrine: implications for wildlife management and conservation in the United States and Canada. Technical Review 10-01. The Wildlife Society, Bethesda, Maryland, USA.
Bean, M. J., and M. J. Rowland. 1997. The evolution of national wildlife law. 3rd edition. Praeger Publishers, New York, New York, USA.
Decker, D. J., R. E. Shanks, L. A. Nielsen, and G. R. Parsons. 1991. Ethical and scientific judgments in management: beware of blurred distinctions. Wildlife Society Bulletin 19:523–527.
Decker, D. J., J. Organ, and C. Jacobson. 2010. Why should all Americans care about the North American model of wildlife conservation? Transactions of the 74th North American Wildlife and Natural Resources Conference 74:52–36.
Duda, M. D., M. F. Jones, and A. Criscione. 2010. The sportsman’s voice—hunting and fishing in America. Venture, State College, Pennsylvania, USA.
Geist, V., S. P. Mahoney, and J. F. Organ. 2001. Why hunting has defined the North American Model of Wildlife Conservation. Transactions of the 66th North American Wildlife and Natural Resources Conference 66:175–185.
Gigliotti, L., and A. Harmoning. 2003. Evaluation of the North Dakota Game and Fish Department: Internal Assessment. Project report for the North Dakota Game and Fish Department. Human Dimensions Consulting, Pierre, South Dakota, USA.
Gill, R. B. 1996. The wildlife profession subculture: the case of the crazy aunt. Human Dimensions of Wildlife 1:60–69.
Horner, S. M. 2000. Embryo, not fossil: breathing life into the public trust in wildlife. University of Wyoming College of Law, Land and Water Law Review 35:1–66.
Jacobson, C., and D. Decker. 2008. Governance of state wildlife management: reform and revive or resist and retrench? Society and Natural Resources 21:441–448.
Jacobson, C. A., J. F. Organ, D. J. Decker, G. R. Batcheller, and L. Carpenter. 2010. A conservation institution for the 21st century: implications for state wildlife agencies. Journal of Wildlife Management 74:203–209.
Mahoney, S. 2006. The public trust doctrine of North American conservation: reality or myth? Proceedings of the Western Association of Fish and Wildlife Agencies, Bismarck, North Dakota, USA.
Management Assistance Team. 2007. Commission guidebook: understanding the fish and wildlife commission’s role in strategic partnership with the agency, director and stakeholders. Association of Fish and Wildlife Agencies. http://www.matteam.org/joomla/content/blog/category/106/371/. Accessed 5 Oct 2010.
Organ, J. F., S. Mahoney, and V. Geist. 2010. Born in the hands of hunters. The Wildlife Professional 4:22–27.
Peterson, C. C., and T. A. Messmer. 2010. Can public meetings accurately reflect public attitudes toward wildlife management? Journal of Wildlife Management 74:1588–1594.
Sax, J. L. 1970. The public trust doctrine in natural resource law: effective judicial intervention. Michigan Law Review 68:471–566.
Teel, T., and M. Manfredo. 2009. Understanding the diversity of public interests in wildlife conservation. Conservation Biology 24:128–139.
Wood, M. C. 2009. Advancing the sovereign trust of government to safeguard the environment for present and future generations (Part 1): ecological realism and the need for a paradigm shift. Environmental Law 39:43–89.
Associate Editor: John Daigle.
|
CYCLIC BEHAVIOUR OF THE SINGLE CRYSTAL SUPERALLOY SRR99 AT 980°C
P.D. Portella, A. Bertram, E. Fahlsbusch, H. Frenz and J. Kinder
Federal Institute of Materials Research and Testing (BAM)
Unter den Eichen 87, D-12205 Berlin, FRG
ABSTRACT
[001]-oriented specimens of the single crystal superalloy SRR99 were subjected to different stress- and strain-controlled cyclic loading tests (cyclic creep, LCF, LCF with hold times under tensile or compressive loading) at 980°C. The observed mechanical behaviour is discussed in terms of the microstructural evolution and, in some cases, the damage evolution in the course of the experiments. A visco-elastic model successfully describes the behaviour of SRR99 under cyclic loading conditions and allows a reliable estimate of specimen lifetime.
KEYWORDS
SRR99; single crystal superalloy; cyclic behaviour; cyclic creep; LCF; microstructural evolution; damage evolution; modelling; lifetime prediction.
CYCLIC CREEP
Different cyclic creep experiments have been carried out on SRR99 specimens at 980°C. The upper stress ($\sigma_u$) was kept constant at 200 MPa and the upper hold time was varied between 10 s and 300 s. The lower stress ($\sigma_l$) was chosen between -200 MPa and 180 MPa, with the same range of hold times. In most cases the hold times at the two stress levels were equal, but in certain cases an asymmetrical division of the cycle time was applied. The alternating stress levels correspond to values of $R = \sigma_l / \sigma_u$ ranging from -1 to 0.8. Usually, the experiments were conducted to a maximum strain of approximately 5% or until a total experimental time of 500 hours. The results of these experiments are presented in Fig. 1, which also shows the monotonic creep behaviour of SRR99 at 200 MPa ($R = 1$).
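The loading programme described above is easiest to picture as a square wave. The following Python sketch is an illustration only; the function name and the instantaneous stress jumps are assumptions, since the paper does not describe the ramp rates between the two stress levels.

```python
# Illustrative sketch (not from the paper): the idealized square-wave stress
# history of a cyclic creep test -- a hold at the upper stress sigma_u
# followed by a hold at the lower stress sigma_l = R * sigma_u.
def cyclic_stress_history(sigma_u, R, t_hold_u, t_hold_l, n_cycles):
    sigma_l = R * sigma_u
    t, times, stresses = 0.0, [], []
    for _ in range(n_cycles):
        times += [t, t + t_hold_u]        # hold at the upper stress
        stresses += [sigma_u, sigma_u]
        t += t_hold_u
        times += [t, t + t_hold_l]        # hold at the lower stress
        stresses += [sigma_l, sigma_l]
        t += t_hold_l
    return times, stresses

# Two cycles of the R = 0.1 test with symmetrical 300 s hold times
times, stresses = cyclic_stress_history(sigma_u=200.0, R=0.1,
                                        t_hold_u=300.0, t_hold_l=300.0,
                                        n_cycles=2)
```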
In the experiments with $0 < R < 1$ and a symmetrical load cycle we observed both a decrease in creep strain rate and an increase in lifetime, both becoming more pronounced with decreasing $R$. On the other hand, a reduction in the lower hold time leads to an increase in the creep strain rate, as shown in Fig. 1 for $R = 0.1$. In all these experiments we observed backward creep strain during the hold times at the lower stress. Hence, the internal stress built up in the $\gamma$-phase is high enough to promote the backflow of the mobile dislocations and the contraction of the specimen during these hold times. A similar effect has been observed in other strongly particle-hardened materials, such as the ODS alloy MA 754 (Tien et al., 1981).
Fig. 1. Cyclic and monotonous creep curves of SRR99 at 980°C and for $\sigma_u = 200$ MPa. Upper and lower hold times are given for each test.
Fig. 2. $\gamma/\gamma'$-microstructure after cyclic creep loading, (010) longitudinal sections: (A) $R = 0.8$, $\varepsilon = 1.7\%$; (B) $R = 0.8$, $\varepsilon = 4.8\%$; (C) $R = 0.1$, $\varepsilon = 0.7\%$; (D) $R = -1$, $\varepsilon = 0.7\%$.
The evolution of the $\gamma/\gamma'$-microstructure for $R$-values between 0 and 1 is shown in Fig. 2. The initial cuboidal microstructure was altered after relatively low creep strain into a lamellar one, the lamellae lying perpendicular to the stress direction. With increasing strain these lamellae became smaller, i.e. the mean length of their section in the metallographic specimens decreased significantly. Further, we measured a slight decrease of the area fraction of $\gamma'$ from about 60% to 50%. This result was independent of the orientation of the metallographic section, so that we may assume the same reduction in the volume fraction of $\gamma'$. Under monotonic creep conditions we observed a very similar microstructural evolution.
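Area fractions such as those quoted above are typically obtained by point or pixel counting on segmented micrographs. The sketch below illustrates the principle on a synthetic binary image; the segmentation step and the function name are assumptions, not the authors' actual procedure.

```python
# Illustrative sketch (not the authors' procedure): estimating the gamma'
# area fraction by pixel counting on a binary segmentation of a
# metallographic section, where 1 marks gamma' and 0 the gamma matrix.
def gamma_prime_area_fraction(binary_image):
    total = sum(len(row) for row in binary_image)
    hits = sum(sum(row) for row in binary_image)
    return hits / total

# Synthetic 10x10 section with 60 gamma' pixels -> area fraction 0.60
section = [[1] * 10 for _ in range(6)] + [[0] * 10 for _ in range(4)]
print(gamma_prime_area_fraction(section))   # -> 0.6
```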
The experiments with $R = -1$ led to the fracture of the specimens after very small plastic deformation, the maximum strain in the last cycle ranging between -0.37 % and 0.5 %. Under these conditions the $\gamma'$ cube-like particles seem to change to folded plates lying mainly parallel to {111} planes (Fig. 2). For this $\gamma/\gamma'$-microstructure we measured different values of the $\gamma'$ area fraction depending on the orientation of the metallographic section.
LOW CYCLE FATIGUE
LCF-tests were performed under total strain control with a triangular wave form (Portella and Kinder, 1995). Independent of the strain amplitude, $\Delta \varepsilon_t$, we observed for a strain rate of $10^{-3}$ s$^{-1}$ a slight softening in the very first cycles, followed by a much slower reduction in the stress amplitude, $\Delta \sigma$ (Fig. 3). The faster reduction in the last cycles is due to the growth of surface cracks. In all these experiments there was only a faint change in the $\gamma/\gamma'$-microstructure. On the other hand, LCF-tests with a strain rate of $10^{-5}$ s$^{-1}$ led to a slight softening in the first cycles followed by a much more pronounced reduction in $\Delta \sigma$. The $\gamma/\gamma'$-microstructure observed in this case after 16 cycles is very similar to that observed after the same test time at the end of the experiment with $10^{-3}$ s$^{-1}$. The continuous softening observed afterwards is accompanied by the coarsening of the $\gamma'$-phase into folded plates lying mainly parallel to {111} planes, a structure similar to that observed in the cyclic creep experiments with $R = -1$.
The fracture mechanism under LCF loading was independent of the strain rate. A large population of small surface cracks perpendicular to the specimen axis could be observed at $N \approx \frac{1}{2} N_f$. These cracks were induced by oxidation; no specific microstructural origin could be identified. Some of the cracks coalesced to form the main crack, which grew perpendicular to the specimen axis. This last stage was responsible for the very last part of the $\Delta \sigma \times N$ diagram, whose form depends on the relative position of the main crack and the extensometer.
The effect of relaxation periods on the LCF behaviour of this alloy was investigated by holding the strain for 300 s either at the maximum strain level, i.e. in the tension phase, or at the minimum strain level, i.e. in the compression phase.
Hold times in the compression phase led to a reduction in $N_f$ by a factor of about 2 for $\Delta \varepsilon_t = 1.2$ % in comparison to the lifetime under LCF loading. This reduction factor increases further with decreasing $\Delta \varepsilon_t$. The evolution of the $\gamma/\gamma'$-microstructure is dictated by the compressive stress during the hold times, with the formation of $\gamma$ plates on {100} planes parallel to the specimen axis. Fracture under these loading conditions is initiated by the same type of surface cracks as reported above for LCF loading. The main crack, however, grows from both ends of such a surface crack on two {111} planes.
A reduction in $N_f$ by a factor of about 2 is also observed when a hold time of 300 s is introduced in the tension phase for $\Delta \varepsilon_t = 1.2\%$. Under these loading conditions, decreasing the strain amplitude reduces this factor and can even lead to longer lifetimes than those observed under LCF loading. Both the evolution of the $\gamma/\gamma'$-microstructure and the fracture mechanism are governed by the tensile stress during the hold times, viz. the formation of $\gamma$ plates lying perpendicular to the specimen axis and the growth of creep cracks from micropores in the bulk of the specimens. These creep cracks grow perpendicular to the specimen axis and assume a square shape with sides parallel to $\langle 110 \rangle$ directions. The final fracture results from the linking of a large number of creep cracks by shearing of the ligaments.

Fig. 3. Microstructural evolution during LCF-tests without hold times for two values of the total strain rate.
MODELLING
Bertram et al. (1991, 1993) used a Burgers-type model combined with a Kachanov / Rabotnov damage variable to describe the high temperature creep behaviour of SRR99. This model also allows the description of the cyclic behaviour of SRR99 at $980^\circ C$ under either stress- or strain-controlled conditions using a single set of material parameters (Bertram and Fahlbusch, 1995).
For the description of the damage under cyclic loading, the evolution equation for creep damage was extended according to the Franklin/Danzer rule (Aktaa et al., 1993; Rubeša and Danzer, 1994). The evolution equation for the damage variable, $\delta$, takes the following form:

\[ \dot{\delta} = C_{D3} \frac{\sigma_0(t)^{C_{D1}}}{(1 - \delta(t))^{C_{D2}}} \left| \frac{\dot{\varepsilon}_{in}(t)}{C_{D5}\, \dot{\varepsilon}_{c,\min}(t)} \right|^{C_{D4}}, \qquad \dot{\varepsilon}_{c,\min}(t) = \frac{\sigma_0(t)}{L(\sigma(t))}, \]
where \( \sigma_0 \) is the nominal stress and \( L \) is one of the viscosities of the model, which depend on the effective stress \( \sigma \) (Bertram and Fahlbusch, 1995). The damage parameters \( C_{D1}, C_{D2}, C_{D3} \) and \( C_{D5} \) can be determined from creep tests, and \( C_{D4} \) from LCF-tests.
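To illustrate how such a damage law yields a lifetime estimate, the sketch below integrates the evolution equation with a forward Euler scheme under constant stress. All parameter values ($C_{D1} \ldots C_{D5}$, step size, stresses and strain rates) are made up for illustration; the calibrated values for SRR99 are not given in this excerpt.

```python
# Numerical sketch of the damage evolution law above, with made-up
# parameters (the calibrated C_Di for SRR99 are not in this excerpt).
# Failure is taken as delta reaching 1.
def integrate_damage(sigma0, eps_rate, eps_rate_min,
                     C_D1=3.0, C_D2=3.0, C_D3=1.0e-12, C_D4=1.0, C_D5=1.0,
                     dt=10.0, t_max=5.0e6):
    """Forward-Euler integration of d(delta)/dt; returns time to failure in s."""
    delta, t = 0.0, 0.0
    while t < t_max:
        rate = (C_D3 * sigma0**C_D1 / (1.0 - delta)**C_D2
                * abs(eps_rate / (C_D5 * eps_rate_min))**C_D4)
        delta += rate * dt
        t += dt
        if delta >= 1.0:
            return t            # failure
    return None                 # no failure within t_max

# Constant-stress creep with eps_rate equal to the minimum creep rate
t_f = integrate_damage(sigma0=200.0, eps_rate=1e-8, eps_rate_min=1e-8)
```

With these illustrative parameters the closed-form failure time is $1/(4 C_{D3} \sigma_0^{3}) \approx 3.1 \times 10^{4}$ s, which the Euler scheme reproduces approximately.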
This model was fitted using four creep tests as well as the first cycle of three LCF tests without hold time. Figures 4 and 5 show the creep and LCF tests used for the calibration: the symbols represent some of the experimental values and the lines the curves calculated with this set of parameters. Figures 6 and 7 show the experimental values (symbols) of some tensile (CERT) and cyclic creep tests and the corresponding curves calculated with the same set of parameters. Finally, Figs. 8 and 9 compare the observed specimen lifetimes (symbols) with the estimates obtained from the model (lines) for strain-controlled LCF and creep tests. First results of lifetime prediction for specimens subjected to LCF loading with hold times are in good agreement with the experimental results.
ACKNOWLEDGEMENTS
This work was partially supported by the Bundesministerium für Forschung und Technologie, BMFT, contract number 03 M3038 D. The authors are indebted to Monique Duguéperoux and Jens Riedel for their contributions.
REFERENCES
Aktaa, J., D. Munz and B. Schinke (1993). The dependence of damage on internal variables and its incorporation into constitutive equations. In: *Transactions of SMIRT-12* (K. Kussmaul, ed.), 135-140. Elsevier Science Publishers, Oxford.
Bertram, A., J. Olschewski, M. Zelewski and R. Sievert (1991). Anisotropic creep modeling for F.C.C. single crystals. In: *Proc. IUTAM Symp. "Creep in Structures IV"* (M. Zyczkowski, ed.), 29-36. Springer, Berlin.
Bertram, A., J. Olschewski, R. Sievert and M. Zelewski (1993). Creep modelling and lifetime prediction for nickel-base single crystals at high temperatures. In: *Transactions of SMIRT-12* (K. Kussmaul, ed.), 147-151. Elsevier Science Publishers, Oxford.
Bertram, A. and E. Fahlbusch (1995). Einachsige Materialmodellierung von Superlegierungen im Hochtemperaturbereich. BAM-V.31 Technical Report. Bundesanstalt für Materialforschung und -prüfung, Berlin.
Portella, P.D., and J. Kinder (1995). Microstructural evolution during creep and LCF-testing of single crystal superalloy SRR99. Submitted to *Materials Science and Engineering A*.
Rubeša, D. and R. Danzer (1994). Reevaluation of the SRM lifetime prediction rule by coupling it with a suitable constitutive model. In: *Localized Damage III: Computer-Aided Assessment and Control* (M.H. Aliabadi et al., eds.). Computational Mechanics Publications, Southampton/Boston.
Tien, J.K., D.E. Matejczyk, Y. Zhuang and T.E. Howson (1981). Anelastic relaxation, cyclic creep and stress rupture of \( \gamma' \) and oxide dispersion strengthened superalloys. In: *Creep and fracture of engineering materials and structures* (B. Wilshire and D.R.J. Owen, eds.), 433-446. Pineridge Press, Swansea.
Fig. 4. Creep tests at $\sigma_0 = 170, 200, 230$ MPa.
Fig. 5. LCF tests. Total strain range 2%, 1%.
Fig. 6. Tensile tests at strain rates of $10^{-3}$, $10^{-4}$ and $10^{-5}$ 1/s.
Fig. 7. Cyclic creep tests.
Fig. 8. LCF tests. Lifetime estimates.
Fig. 9. Creep tests. Lifetime estimates.
Managing and financing metropolitan public services in China: experience of the Pearl River Delta region
Xie Baojian, Ye Lin & Zijie Shao
To cite this article: Xie Baojian, Ye Lin & Zijie Shao (2018) Managing and financing metropolitan public services in China: experience of the Pearl River Delta region, Public Money & Management, 38:6, 445-452, DOI: 10.1080/09540962.2018.1486627
To link to this article: https://doi.org/10.1080/09540962.2018.1486627
Published online: 27 Jul 2018.
Under China’s current fiscal policies and inter-governmental relations, it is a significant challenge to finance and deliver public services across jurisdictions. This challenge was met in the Pearl River Delta region in southern China with a collaborative governance approach. Directives from higher-level governments and horizontal inter-city fiscal arrangements were successfully combined to deliver public services. Effective networks should be developed to improve co-ordination and collaboration in delivering cross-jurisdictional public services.
Keywords: Collaborative governance; fiscal policy; inter-city public services; inter-governmental relationships; network governance; Pearl River Delta region.
Inter-governmental fiscal relations and inter-city public services are among the most challenging public management issues in China. The national government provides policy guidance and fiscal allocations to subnational governments for certain services. However, many municipal and inter-city public services are entirely left to local governments. Under the current inter-governmental fiscal arrangements, there are gaps in the financing and management of metropolitan public services. This paper answers the following questions:
• How do the existing inter-governmental relations influence finance and management of inter-city public service arrangements?
• What factors influence sub-national governments’ behavior in delivering different kinds of inter-city public services?
• What lessons does the Pearl River Delta (PRD) region experience provide to other areas?
The relationship between different levels of government has long been discussed in China. As early as 1956, MAO Zedong foresaw the problematic nature of the relationship between the central government and lower levels, including provincial and local governments, in his *Ten Major Relations* (Mao, 1956). In a vast country, such as China, Mao believed local governments should be encouraged to be active and independent in economic development, instead of solely relying on the central government. Sixty years later, the central government still found it necessary to issue *Guidance on Advancing the Reform of Divisions of Revenues and Expenditure Responsibilities between the Central and Local Governments* (State Council, 2016). This demonstrated the persistent nature of problems of inter-governmental relations in China (Zhang, 2017) and in many other countries (Ahmad and Brosio, 2015).
Missing in the academic literature on inter-governmental fiscal relations and management is the important issue of how sub-national governments act horizontally to perform public functions. As outlined in article 3 in the State Council No. 49 Decree, one of the major tasks is to reform the divisions of revenues and expenditure responsibilities at the sub-national level and among local governments, including the public service functions in basic living conditions, public safety, urban development and infrastructure (State Council, 2016). Many of the public service functions are cross-jurisdictional matters and involve multiple units of local governments, or even provincial governments. The study of the relationship among sub-national governments with regard to regional public functions and cross-jurisdictional services is integral to the analysis of inter-governmental relationships in China.
Following a discussion of financing and managing cross-jurisdictional public services at the sub-national level, this paper presents a
network governance analysis of two cases on the infrastructure and environmental services in the Guangzhou–Foshan metropolitan area in the PRD region, and concludes with experiences and lessons from these cases.
**Financing and managing cross-jurisdictional public services at the sub-national level**
Some economists have offered explanations for financing and managing cross-jurisdictional public services. Hayek (1945) proposed that local governments had better information on citizen preference, so they were most suitable to make decisions and arrange public services. Tiebout (1956) stated that the competition between jurisdictions would allow residents to choose the best place to live to get good public services. Musgrave (1959) and Oates (1972) believed that appropriate assignment of jurisdictions over public goods and taxes could improve delivery efficiency. Cross-jurisdictional co-operation could also be used in designing suitable mechanisms for managing regional affairs (Andrew and Feiock, 2010; Hawkins, 2010).
Regional development has become a significant urban development pattern in recent years, and so more attention is being paid to the inter-city issues of public policy and service delivery in China. On the one hand, the national government possesses ultimate authority in China (Tsai, 2004) but, on the other hand, economic reforms and the fiscal reforms have had a decentralizing effect on inter-governmental fiscal relations (Wu and Wang, 2013). Bird and Chen (1998) distinguished between three forms of ‘decentralization’: ‘deconcentration’, where some responsibilities are transferred from the central to sub-national units; ‘delegation’, where local governments act as agents of the central government; and ‘devolution’, where local governments are given decision-making authority. The Chinese case is the closest to the first scenario.
With the trend of fiscal decentralization on the expenditure side for the past two decades, horizontal inter-governmental co-operation has become increasingly important in inter-city public service delivery in China (Xu and Yeh, 2013). Yet, due to the lack of proper formal institutional structures in place for inter-city public services, local governments are obliged to adopt collaborative governance through negotiations and the formation of governance networks (Peters and Pierre, 2001; Bevir, 2003). Collaborative governance is ‘the processes and structures of public policy decision-making and management that engage people constructively across the boundaries of public agencies, levels of government, and/or the public, private and civic spheres in order to carry out a public purpose that could not otherwise be accomplished’ (Emerson et al., 2011, p. 2). There are several important system factors contributing to the formation of networks for collaborative governance including: a multilayered context of political, legal, socioeconomic, environmental, and other conditions; essential drivers, such as leadership, incentives, interdependence; and collaborative dynamics, such as principled engagement, shared motivation, capacity for joint action.
Inter-governmental relations and network theories have been used to study horizontal network management and collaborative public management (Agranoff and McGuire, 1998, 2001; Kamensky and Burlin, 2004). Networks play an important role in co-ordinating activities when issues are cross-jurisdictional and individual governments need to pool scarce resources (Kickert et al., 1997; Nyholm and Haveri, 2009). Agranoff and McGuire (1998) identified three forms of horizontal networks among American local governments: policy- and strategy-making, resource sharing, and project-based. The network form of governance has advantages over both hierarchy and market solutions in simultaneously adapting, coordinating, and safeguarding exchanges (Jones et al., 1997). Torfing (2005) discussed the development of a second generation of governance network research by considering the meta-governance of self-regulating networks, the role of discourse in relation to governance networks, and the democratic problems and potentials of network governance. Newman and Thornley (1997) analysed the impact of network governance on London's urban policy-making and found that multiple actors are involved in policy design and implementation. Ferlie et al. (2011) argued for benign 'post-bureaucratic' leadership with high engagement levels from multiple parties to overcome government fragmentation.
Studying different modes of network governance and its effectiveness in public policy, Provan and Kenis (2008) identify four key structural and relational contingencies: trust, size (number of participants), goal consensus, and the nature of the task. Networks governed by a unique network administrative organization are popular in developing countries for dealing with complex policy issues in rapidly changing socioeconomic environments (Wang and Yin, 2012). In an
inter-governmental context, Wang et al. (2016) identify the financial, technical and managerial capacity of network administrative organizations in implementing collaborative governance.
The following sections analyse the issues identified in this literature to investigate the effectiveness of network governance and related capacity factors in providing two inter-city public services in the PRD region.
**A 2x2 design for studying inter-city services**
Our research had a 2x2 design: two public services (urban transportation infrastructure development and water quality control) in two municipalities (Guangzhou and Foshan). The rationale for this is that it takes at least two jurisdictions to study inter-city relations, and we were interested in comparing ‘hard’ and ‘soft’ public services. Urban transportation infrastructure development received the most attention under the umbrella of Guangzhou–Foshan metropolitan integration plan (Ye, 2014). Water quality control is another important area in national, provincial and municipal plans established to clean the water in this region. Most water quality control plans are implemented across jurisdictional boundaries and require both Guangzhou and Foshan to take action. Transportation infrastructure development is viewed as promoting economic growth, while water quality control may be characterized as a redistributive function, thus providing useful comparisons.
To study the two services, we analysed the contents of documents to learn about the plans and projects. The planning documents examined included the National Master Plan of Urban System 2010–2020; the National Key Functional Area Master Plan; the Outline Plan for the Reform and Development of the PRD (2008–2020); the Guangzhou Urban Development Master Plan 2010–2020; the Foshan Urban Development Master Plan 2010–2020; the Guangzhou and Foshan Metropolitan Integration Development Plan 2009–2020; the 13th Five Year Plan of Guangzhou and Foshan Integration; the Working Plan of Guangdong Province Water Quality Control Act 2013–2020; the Guangzhou Cleaner Water Act 2013–2020, and the minutes of the annual joint mayoral meeting between Guangzhou and Foshan, as well as other related documents.
We collected additional qualitative evidence through interviews with government officials in the relevant departments in these two cities, including the development and reform commission, land and planning commission, transportation, environmental protection and water management. The dates for cited interviews in this paper are listed in each quotation. All interviews were recorded and transcribed. A thematic analysis of the interview transcripts resulted in a number of themes and sub-themes. This paper includes only the relevant subthemes and quotations from the materials collected for a larger research project.
**Inter-city subway**
One of the most significant patterns of China's urban development in the past three decades was the formation and growth of metropolitan regions (see, for example, Wu and Zhang, 2007; Xu and Yeh, 2010, 2013; Ye, 2013, 2014). The growth of these regions has been described as 'an outcome of carefully planned economic and administrative policies' (Ye, 2013, p. 292). According to China's regional economic development report, three metropolitan regions accounted for more than 40% of the total national GDP, with about 18% of the national total population (Liang, 2015). Among these regions, the PRD region is unique in that it lies entirely within the province of Guangdong. This makes regional policies and inter-city agreements relatively easier to reach, due to the strong directive authority of the provincial government of Guangdong in regional issues (Lin, 2001; Xu and Yeh, 2013; Ye, 2013, 2014).
Guangzhou and Foshan are adjacent cities in the PRD region. Guangzhou, the third largest city in China, has a total population of over 14 million, a land area of over 7,400 km², and a per capita annual GDP of over 120,000 RMB. Foshan is an average-sized city in China, with a total population of over 8 million, a land area of over 3,800 km², and a per capita annual GDP of around 100,000 RMB. The development of the Guangzhou–Foshan metropolitan region has been a flagship case for inter-city co-operation in China (Xu and Yeh, 2005; Ye, 2014). Such a vision and ambition for metropolitan development can be found in the two cities' strategic plans. In 2000, the Guangzhou municipal government launched the Guangzhou Urban Development Strategic Concept Plan and announced the strategy of 'exploring to the south, optimizing the north, moving to the east, connecting to the west', in which 'connecting to the west' refers to linked development between Guangzhou and Foshan. In 2011, the metropolitan-wide development vision was reinforced in the
Guangzhou urban development master plan 2010–2020. The Foshan urban master plan 2011–2020 emphasized utilizing the integrated development of Guangzhou and Foshan to promote a linked metropolis that would benefit Foshan’s growth and development.
In 2009, Guangzhou and Foshan established a biannual joint mayoral meeting and signed a metropolitan integration agreement (MIA) between the two cities. Under the framework of the MIA, four policy documents were put in place in the areas of economic co-operation, urban planning, environmental protection, and transportation infrastructure. The joint mayoral meeting set up an annual key work plan for Guangzhou–Foshan. Each project in the annual key work plan was implemented with a specific deadline and corresponding agencies in the two cities (Ye, 2014). Among these projects, the Guangzhou–Foshan inter-city subway has been the most significant. As early as 2001, Guangdong province, in its 10th five year plan (2001–2005), passed the PRD inter-city rail rapid transit development plan, of which the Guangzhou–Foshan inter-city subway (GFIS) was the first project delivered. One subway station site was put into test construction in 2002. In 2003, the GFIS project construction plan was prepared by the China International Project Consulting Corporation and submitted to the National Development and Reform Commission (NDRC). This plan was approved in 2005, making the GFIS the first inter-city subway in China. Construction of the GFIS started in 2007. The first section of the GFIS opened in 2010, with 14 stations and a total length of 20.47 km (14.8 km in Guangzhou and the rest in Foshan). The second section was completed in December 2016. By 2016, the GFIS had a total length of 53.63 km (32.16 km in Guangzhou and 21.47 km in Foshan).
The construction and operation of the GFIS is worth close examination. In order to carry out the GFIS project, Guangzhou and Foshan established the Guangdong Guang-Fo Rail Transit Company Ltd (referred to as 'the joint company' hereafter). The initial funding for this state-owned company was provided by the two municipal governments, together with fiscal subsidies from the Guangdong provincial government. Of the 14.5 billion RMB initial funding, 55% was contributed capital and the remaining 45% was raised through loans taken out by the two governments. The provincial government provided 10% of the total funding as capital, with the remaining 45% of capital covered by the Guangzhou and Foshan local governments on a 51:49 basis. All funding responsibilities were proposed during the joint mayoral meetings between the two cities, agreed by the two governments, and approved by the provincial government.
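A back-of-the-envelope check of the funding split can be written out as follows. This assumes one consistent reading of the figures above: the province's 10% and the cities' 45% together make up the 55% contributed capital, with the cities' portion shared 51:49.

```python
# Back-of-the-envelope check of the GFIS funding split (billion RMB),
# under the stated reading of the percentages in the text.
total = 14.5
capital = 0.55 * total          # contributed capital
loans = 0.45 * total            # raised as loans by the two governments
provincial = 0.10 * total       # Guangdong province's capital contribution
cities = 0.45 * total           # remaining capital, borne by the two cities
guangzhou = 0.51 * cities       # Guangzhou's share of the cities' capital
foshan = 0.49 * cities          # Foshan's share

print(round(capital, 3), round(guangzhou, 3), round(foshan, 3))
```

The check confirms the percentages are internally consistent: 10% + 45% of the total equals the 55% capital share.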
Such a shared financing arrangement was apparently intended to give Guangzhou control over the construction and operation of the GFIS. Before the GFIS was built, the city of Foshan did not have a subway. The construction of the GFIS provided Foshan with an unprecedented opportunity to develop its underground transit system. As the ‘big brother’, Guangzhou usually maintains a leading position in inter-city issues. The GFIS project was no exception because Guangzhou had the construction technology and operational skills for a modern subway system. Without Guangzhou’s contribution, the construction and operation of GFIS would have been virtually impossible. In order to carry out the construction of the GFIS, the Foshan municipal government established a fully state-owned Foshan Railway Investment Group Company Ltd for the construction and management. Its counterpart in Guangzhou is the Guangzhou Subway Group Company Ltd, established in 1992. Thus, the two fully state-owned enterprises of Foshan Railway Investment Group Company and Guangzhou Subway Group Company acted on behalf of the two local governments in the GFIS projects.
Since the municipal government of Guangzhou provided a larger share of funding, the chairman and the general manager of the joint company were both appointed by the Guangzhou municipal government and the Guangzhou Subway Group Company. Subsequently, the Guangzhou Subway Group Company appointed its vice general manager to the chairmanship of the joint company. Moreover, since the Foshan Railway Investment Group Company did not have any prior experience in subway construction and operation, the joint company signed a management responsibility contract with the Guangzhou Subway Group Company. Figure 1 shows the establishment, organization, and management of the GFIS.
Since 2009, the GFIS construction has been a top agenda item at every joint mayoral meeting and in the annual key work plans. Inter-city infrastructure projects like the GFIS have been well received by both the provincial and local governments. The provincial government approved the GFIS plan, submitted it to the NDRC in the central government and secured its approval, and then provided funds to fill the funding gap. The neighbouring local governments contributed to the project as their capacities allowed.
The water quality control programme
Not all inter-city projects have been as successful as urban transportation. Around two-thirds of the annual key work plan projects implemented between 2009 and 2013 were infrastructure building and economic co-operation programmes (Ye, 2014). These projects are favoured because they generate economic outputs and stimulate local growth—the most important objectives of local governments in China. Other programmes, such as environmental programmes, are much less popular.
Co-operation in water quality control between Guangzhou and Foshan was one of the major projects in the MTA signed by the two governments during the first joint mayoral meeting in 2009. In 2013, the Guangdong provincial government approved the Guangdong Cleaner Water Action Plan 2013–2020. In it the Guangzhou and Foshan Cross-Jurisdictional Water Pollution Comprehensive Control Special Plan (referred to as ‘the special plan’ hereafter) was announced as a pilot programme to tackle inter-city water pollution problems in the Guangdong province. The special plan set clear targets and tasks for the governments of Guangzhou and Foshan from 2013 to 2020. For example, Guangzhou was required to maintain its water quality above category IV and Foshan above category V by the end of 2013 (category I being the best). By 2015, Foshan had to achieve the same water quality level (category IV) as in Guangzhou. The water quality in the designated sections of the Pearl River, which runs through the two cities, should reach category III. By 2020, water quality in both cities should reach the respective required standards.
In order to reach these goals, the provincial government mandated the two municipal governments to set out specific plans in 2013 by surveying bodies of water and existing pollutants. The two cities needed to adjust their industrial structures according to the Guangdong Industry Structure Adjustment Guiding Catalogue. The special plan required the city of Guangzhou to build nine water treatment plants by 2015, increase the city’s daily treatment capacity by 70 tons, and extend the city’s sewage pipeline by 505.4 km. Foshan needed to increase its daily treatment capacity by nine tons and extend its sewage pipeline by 83 km. Both cities were required to enhance their treatment capacity for sewage from 90% in 1995 to 95% in 2020. Sixteen rivers were identified as the inter-city water quality control areas by the special plan.
The city of Guangzhou passed cleaner water legislation and treatment plans for cross-city rivers between Guangzhou and Foshan in 2014. Around 14 billion RMB was set aside to implement these plans from 2013 to 2016, according to the Guangzhou Water Management Bureau (interviewed in July 2017). The city of Foshan established their water quality control comprehensive plan 2015–2020, with 84 projects and a total budget of 3.08 billion RMB. In 2016, among the 258 inter-city water quality control projects sponsored by Guangzhou and Foshan, 151 had been completed, 66 projects were underway and 41 projects were in preparation. All of the funds necessary for these water clean-up efforts had to be raised by the two cities themselves. For the 16 cross-city water body treatment projects, the two municipal governments adopted a river master policy requiring the districts through which water flowed to be financially responsible for clean-up. Joint enforcement projects between the two cities were arranged on an *ad hoc* basis, according to the Guangzhou municipal government office (interviewed in April 2017).
There was confusion in decision-making and operation because multiple agencies were involved in the policy design and implementation process. These included the water management bureau, the environmental protection department, and a number of offices in the municipal development and reform commission. For instance, while the water management bureau was the lead agency, at least three offices (regional affairs, environment and resources, and municipal development) under the municipal development and reform commission participated in overall planning, cross-border negotiation, and urban economy issues related to water quality control. The water management bureau was responsible for cleaning up pollutants in the water, but did not have any authority to punish the polluters, which included several major industrial plants sited along the rivers. The environmental protection department was in charge of regulating and levying fines on polluters. Co-ordinating across multiple departments has been difficult due to limited financial resources, unclear organizational functions, and a poorly co-ordinated network. ‘We do not know which department is responsible for what functions. It is hard to see how the fiscal resources are allocated’, according to an official in the environment and resource office in the Guangzhou municipal development and reform commission (interviewed 1 September 2017).
**Discussion**
The division of duties and allocation of resources were significantly less effective in the water quality control programme than in the transportation infrastructure projects. Table 1 summarizes the differences between the two cases, using the key factors identified in the literature cited earlier.
As Table 1 shows, the transportation project had higher-density trust, fewer agencies (and therefore clearer duties), higher goal consensus, and a moderate need for network-level competencies, in comparison to the water quality control actions. A ‘shared governance’ model was in place because of the willingness of the two cities to establish the GFIS with active involvement by the provincial government. A highly effective network governance model was available to stimulate the stable expansion of the inter-city subway. Such development drove another round of financial investment and economic growth in both cities, particularly in Foshan. Growth in real estate development in Foshan was attributed to the expansion of the GFIS (Wangyi Real Estate News, 2017). The construction of the GFIS also enabled the residents of Guangzhou to enjoy a 30-minute commute to amenities in Foshan, and won the inaugural Huace Subway Award from the Chinese Real Estate Development Association in 2015.
Effective co-operation in water quality control would require a lead organization or a network administrative office as described in Provan and Kenis (2008). In reality, however, it is not easy to have a single agency take the lead in environmental regulations that have wide impact on economic growth and local development. The involvement of the senior government, namely the Guangdong provincial government, would be ideal. However, this did not happen in the Guangzhou–Foshan case.
Xu and Yeh (2013, p. 149) developed the model of bargaining and negotiation, where once a partnership is formed and adopted, the implementation process is characterized by negotiation among and between levels of the hierarchy. Each agent aims to steer the plan in a direction favourable to its own interests. The outcome may not be as originally intended, but it is worked out through extensive bargaining. This paper argues that sub-national governments in China developed selective mechanisms in dealing with inter-city public policies and services. As shown in the GFIS case, governments at the central and provincial level provided institutional support for the development plan while funding was agreed by the two governments of Guangzhou and Foshan, with the provincial government providing some of the investment funding.
**Table 1. Comparison of two cases by key predictors of effectiveness of network governance forms.**
| Case | Trust | Size | Goal consensus | Nature of task |
|-----------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------|--------------------------------------------------------------------------------|-------------------------------------------------------------------------------|
| Transportation infrastructure (more effective) | High density—delegating GFIS’s operational responsibility to the Guangzhou Subway Group Company Ltd | Few—primarily the subway companies in the two cities | Moderately high—stimulating economic growth and local development | Moderate—relatively straightforward, with senior government’s active attention and effective co-ordination |
| | High financial capacity | High managerial capacity | Shared motivation | High technical capacity |
| Water quality control (less effective) | Low density—multiple agencies of water management, environmental protection and development | Moderate—multiple agencies without a leading authority | Moderately low—potential conflict between environmental protection and economic growth | High—more complicated, lack of senior government attention or effective co-ordination |
| | Low financial capacity | Low managerial capacity | Mixed motivation | Low managerial capacity |
making the inter-city subway project financially sound. Since Guangzhou is a much stronger city than Foshan, the two municipal governments took advantage of Guangzhou’s subway management capacity by fully contracting out the GFIS operation to the Guangzhou Subway Group, enhancing the managerial capacity of the project. Fiscal and administrative agreements were not difficult to reach in this case since the GFIS would clearly bring economic benefits to both cities, the province and the overall development of the PRD region. Multiple levels of government acted proactively to create an inter-governmental network to advance the project.
For environmental protection projects, such as the water quality control programme between the two cities, few flexible fiscal arrangements were reached because such projects tend to produce negative externalities with few economic benefits. In this case, the provincial government had to assume a co-ordinator’s role and direct inter-city efforts by establishing strict regional policies and enforcing implementation guidelines. Vertical inter-governmental guidance and direction is more important in this regard. In terms of the analytical framework proposed by Wang et al. (2016), the project lacked financial capacity due to the difficulties of obtaining and enforcing financial commitments from the two cities involved. Technically, the clean water standard was set up but, managerially speaking, duties were unclear and there was a low level of shared motivation among overlapping agencies, which made inter-city water quality control difficult to implement.
**Conclusion**
This paper has described the institutional context for China’s inter-governmental relations and explained the financial and managerial requirements of two horizontal network arrangements. It takes into account the nature of economic activities and types of services and argues that, under the current inter-governmental structure in China, economic principles and management practices need to be adapted to deal with the issues of inter-city services. Local governments in China are making selective inter-governmental arrangements to accommodate regional collaborations. The findings in this study help to clarify transformational inter-governmental relations, particularly at the sub-national level, and how they influence inter-governmental fiscal management in metropolitan public service delivery in China. Future research is needed to compare the delivery of metropolitan public services in other national contexts in order to offer more generalized evidence and recommendations for developing countries.
**Acknowledgements**
This research was supported by the Ministry of Education Key Research Center (Major Project 16JJD630013), National Social Science (Key Project Fund 13&ZD041), National Science Foundation (Project 71673111), and Guangdong Science Foundation (Project 2015A030313323). QIU Mengzhen, TU Siming and GONG Rui assisted in data collection. Earlier versions of this paper were presented at the 78th annual conference of the American Society for Public Administration Annual Conference (Atlanta, USA) and a workshop at the Education University of Hong Kong. The authors are grateful for the comments from the panelists, anonymous reviewers and the editors of this PMM theme. The authors are solely responsible for any errors or omissions.
**IMPACT**
How to finance and deliver public services across jurisdictions is a significant challenge to public officials and academic researchers. This paper offers a new way of tackling this issue with a collaborative governance approach in the Pearl River Delta region in southern China. Local governments tended to favour projects with positive economic benefits and visible outputs over less productive tasks such as environmental protection. Effective networks will enhance co-ordination and collaboration in delivering cross-jurisdictional public services. The lessons in this paper will be helpful for countries that are undergoing rapid urbanization.
**References**
Agranoff, R. and McGuire, M. (1998), Multi-network management: collaboration and the hollow state in local economic policy, *Journal of Public Administration Research and Theory*, 8, pp. 67–91.
Agranoff, R. and McGuire, M. (2001), Big questions in public network management research, *Journal of Public Administration Research & Theory*, 11, 3, pp. 295–326.
Ahmad, E. and Brosio, G. (2015), *Handbook of Multilevel Finance* (Elgar).
Andrew, S. A. and Feiock, R. C. (2010), Core-peripheral structure and regional governance: implications of Paul Krugman’s new economic geography for public administration, *Public Administration Review*, 70, 3, pp. 494–499.
Bevir, M., Rhodes, R. A. W. and Weller, P. (2003), Traditions of governance: interpreting the changing role of the public sector. *Public Administration*, 81, pp. 1–17.
Bird, R. M. and Chen, D. J. (1998), Intergovernmental fiscal relations in China in international perspective. In Brean, D. (Ed.), *Taxation in Modern China* (Routledge), pp. 151–186.
Emerson, K., Nabatchi, T. and Balogh, S. (2011), An integrative framework for collaborative governance. *Journal of Public Administration Research and Theory*, 22, pp. 1–29.
Ferlie, E. et al. (2011), Public policy networks and ‘wicked problems’: a nascent solution? *Public Administration*, 89, 2, pp. 307–324.
Hawkins, C. V. (2010), Competition and cooperation: local government joint ventures for economic development. *Journal of Urban Affairs*, 32, 2, pp. 253–275.
Hayek, F. A. (1945), The use of knowledge in society. *American Economic Review*, 35, pp. 519–530.
Jones, C., Hesterly, W. S. and Borgatti, S. P. (1997), A general theory of network governance: exchange conditions and social mechanisms. *Academy of Management Review*, 22, 4, pp. 911–945.
Kamensky, J. and Burlin, T. (2004), *Collaboration: Using Networks and Partnerships* (Rowman & Littlefield).
Kickert, W. J., Klijn, E. and Koppenjan, J. F. (Eds), (1997), *Managing Complex Networks* (Sage).
Liang, H. G. (Ed), (2015), *China’s Regional Economic Development Report* (Social Science Academic Press).
Lin, G. C. S. (2001), Metropolitan development in a transitional socialist economy. *Urban Studies*, 38, 3, pp. 383–406.
Mao, Z. (1956), *The Thesis on the Ten Major Relationships*.
Musgrave, R. (1959), *Theory of Public Finance* (McGraw Hill).
Newman, J. (2001), *Modernising Governance* (Sage).
Newman, P. and Thornley, A. (1997), Fragmentation and centralisation in the governance of London: influencing the urban policy and planning agenda. *Urban Studies*, 34, pp. 967–988.
Nyholm, I. and Haveri, A. (2009), Between government and governance—local solutions for reconciling representative government and network governance. *Local Government Studies*, 35, pp. 109–124.
Oates, W. (1972), *Fiscal Federalism* (Harcourt).
Peters, G. B. and Pierre, J. (2001), Developments in inter-governmental relations: towards multi-level governance. *Policy & Politics*, 29, pp. 131–135.
Provan, K. G. and Kenis, P. (2008), Modes of network governance: structure, management, and effectiveness. *Journal of Public Administration Research and Theory*, 18, 2, pp. 229–252.
State Council (2016), *The Guidance on Advancing the Reform of Divisions of Revenues and Expenditure Responsibilities between the Central and Local Governments*.
Tiebout, C. (1956), A pure theory of local expenditures. *Journal of Political Economy*, 64, pp. 416–424.
Torfing, J. (2005), Governance network theory: towards a second generation. *European Political Science*, 4, 3, pp. 305–315.
Tsai, K. S. (2004), Off balance: the unintended consequences of fiscal federalism in China. *Journal of Chinese Political Science*, 9, 2, pp. 1–26.
Wang, F. and Yin, H. T. (2012), A new form of governance or the reunion of the government and business sector? *International Public Management Journal*, 15, 4, pp. 429–453.
Wang, X. H., Chen, K. and Berman, E. M. (2016), Building network implementation capacity. *International Public Management Journal*, 19, 2, pp. 264–291.
Wangyi Real Estate News (2017), A new era for real estate development in Foshan: the impact of subway (3 January).
Wu, A. M. and Wang, W. (2013), Determinants of expenditure decentralization. *World Development*, 46, pp. 176–184.
Wu, F. L. and Zhang, J. X. (2007), Planning the competitive city-region: the emergence of strategic development plans in China. *Urban Affairs Review*, 42, 5, pp. 714–740.
Xu, J. and Yeh, A. G. O. (2005), City repositioning and competitiveness building in regional development: new development strategies in Guangzhou, China. *International Journal of Urban and Regional Research*, 29, 2, pp. 283–308.
Xu, J. and Yeh, A. G. O. (2010), Planning mega-city regions in China. *Progress in Planning*, 73, pp. 17–22.
Xu, J. and Yeh, A. G. O. (2013), Inter-jurisdictional co-operation through bargaining: the case of the Guangzhou–Zhuhai railway. *China Quarterly*, 21, 3, pp. 130–151.
Ye, L. (2013), Urban transformation and institutional policies: case study of mega-region development in China’s Pearl River Delta. *Journal of Urban Planning and Development*, 139, 6, pp. 292–300.
Zhang, G. (2018), The revolutions in China’s inter-governmental fiscal system. *Public Money & Management*, 38, 6, p. 419.
ABSTRACT
Genomics is playing an important role in transforming healthcare. Genetic data, however, is being produced at a rate that far outpaces Moore’s Law. Many efforts have been made to accelerate genomics kernels on modern commodity hardware such as CPUs and GPUs, as well as custom accelerators (ASICs) for specific genomics kernels. While ASICs provide higher performance and energy efficiency than general-purpose hardware, they incur a high hardware design cost. Moreover, in order to extract the best performance, ASICs tend to have significantly different architectures for different kernels. The divergence of ASIC designs makes it difficult to run commonly used modern sequencing analysis pipelines due to software integration and programming challenges.
With the observation that many genomics kernels are dominated by dynamic programming (DP) algorithms, this paper presents GenDP, a framework of dynamic programming acceleration including DPAX, a DP accelerator, and DPMAP, a graph partitioning algorithm that maps DP objective functions to the accelerator. DPAX supports DP kernels with various dependency patterns, such as 1D and 2D DP tables and long-range dependencies in the graph structure. DPAX also supports different DP objective functions and precisions required for genomics applications. GenDP is evaluated on genomics kernels in both short-read and long-read analysis pipelines, achieving 157.8× throughput/mm² over GPU baselines and 132.0× throughput/mm² over CPU baselines.
CCS CONCEPTS
• Computer systems organization → Special purpose systems;
• Applied computing → Genomics.
KEYWORDS
Computer Architecture, Hardware accelerators, Reconfigurable architectures, Genomics, Bioinformatics
ACM Reference Format:
Yufeng Gu, Arun Subramaniyan, Tim Dunn, Alireza Khadem, Kuan-Yu Chen, Somnath Paul, Md Vasimuddin, Sanchit Misra, David Blaauw, Satish Narayanasamy, and Reetuparna Das. 2023. GenDP: A Framework of Dynamic Programming Acceleration for Genome Sequencing Analysis. In Proceedings of the 50th Annual International Symposium on Computer Architecture (ISCA ’23), June 17–21, 2023, Orlando, FL, USA. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3579371.3589060
1 INTRODUCTION
Genome sequencing, a key component of precision health, is necessary for early detection of cancer [71], autism [40], infectious diseases (such as COVID-19 [2]) and genetic diseases [78]. Genomics is a wide space and there are diverse applications within it, such as whole genome sequencing [43] and pathogen detection [50]. With the innovation in genome sequencing technologies over the past decade, sequencing data is being produced cheaper and faster, increasing at a rate that far outpaces Moore’s Law. The cost to sequence a human genome has dropped from $100 million at the beginning of this century to less than $1000 today [74]. The total amount of sequencing data has been doubling approximately every seven months, and projections indicate that 100 million genomes will be sequenced by 2030 [7].
This large volume of sequencing data poses significant computational challenges and requires novel computing solutions which can keep pace. Recent works explore architecture-aware optimizations on commodity hardware such as leveraging SIMD hardware on CPUs [34, 73] and thread-level parallelism on GPUs [9, 10, 26, 28, 57, 61–63]. Custom accelerators, however, achieve much better performance and are more area and power efficient than CPUs and GPUs [14, 23–25, 32, 70, 77]. These accelerators gain orders of magnitude speedups over general-purpose hardware, but at a high cost of hardware design. Specific genomics “kernels” (algorithms) do not have a market large enough to justify a custom chip. This makes it difficult to design an accelerator for the particular implementation of a single kernel, since the state-of-the-art implementation may change significantly over the next few years. For example, Smith-Waterman (SW), the basic approximate string matching algorithm, was optimized from the original SW [66] to a banded SW [17], and further to an adaptive banded SW [44] and a wavefront version [48]. Therefore, the high cost for designing custom accelerators and frequent kernel developments motivate a generic domain-specific accelerator for genome sequencing analysis.
In commonly used genome sequencing pipelines, dynamic programming (DP) algorithms are widely used, including read alignment and variant calling in reference-guided alignment, layout and polishing in de-novo assembly, as well as abundance estimation in metagenomics classification [68]. Matrix multiplications are at the heart of machine learning applications, which motivated the design of Tensor Processing Units (TPUs) [33]. Similarly, DP algorithms are adopted by many genomics kernels and account for large amounts of time in mainstream sequencing pipelines, which provides the opportunity for a dynamic programming accelerator that supports both existing and future DP kernels.
Dynamic programming simplifies a complicated problem by breaking it down into sub-problems which can be solved recursively. However, accelerating a general-purpose DP algorithm comes with several challenges. First, DP kernels in common sequencing pipelines have different dependency patterns, including both 1-Dimension and 2-Dimension DP tables. Some kernels have long-range dependencies in the graph structure, where cells in the DP table not only depend on the neighboring cells, but also depend on cells far away. Second, DP kernels have different objective functions which include multiple operators. For instance, approximate string matching, an algorithm applied in DNA, RNA, and protein sequence alignment, has three modes: local (Smith-Waterman), global (Needleman-Wunsch) and semi-global alignment (overlap), as well as three methods for scoring insertions and deletions: linear, affine, and convex [72]. Each mode or method above requires a unique objective function. Third, DP kernels have different precision requirements. It is challenging to support multiple precision arithmetic while neither losing efficiency for low-precision computation nor compromising accuracy for high-precision computation.
To address these challenges, we propose GenDP, a framework of dynamic programming acceleration for genome sequencing analysis, which supports multiple DP kernels. First, we present DPAX, a DP accelerator capable of solving multiple dependency patterns by providing flexible interconnections between processing elements (PEs) in the systolic array. The systolic array helps exploit the wavefront parallelism in the DP table and provides better spatial locality for DP dataflow. DPAX decouples the control and compute instructions in the systolic array. Second, we present DPMAP, a graph partitioning algorithm which maps the data-flow graph of the objective function to compute units in the DPAX accelerator. DPAX supports different objective functions and multiple precision arithmetic by programmable compute units.
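To make the wavefront parallelism concrete, the following Python sketch (our illustration, not DPAX code) fills a DP table along anti-diagonals, using the classic longest-common-subsequence recurrence as a stand-in for a genomics kernel. All cells on one anti-diagonal are mutually independent, which is exactly the parallelism a systolic array exploits:

```python
def lcs_rowmajor(x, y):
    # Conventional row-major fill of the LCS DP table.
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    return c

def lcs_wavefront(x, y):
    # Same recurrence, but cells are visited anti-diagonal by
    # anti-diagonal (constant i + j = d). Each cell depends only on the
    # two previous diagonals, so all cells of one diagonal could be
    # computed simultaneously by a row of processing elements.
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for d in range(2, m + n + 1):
        for i in range(max(1, d - n), min(m, d - 1) + 1):
            j = d - i
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    return c
```

Both traversals produce identical tables; only the visiting order, and hence the available parallelism, differs.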
We evaluate the GenDP framework on four DP kernels: Banded Smith-Waterman (BSW) [73], Chain [38, 39], Pairwise Hidden Markov Model (PairHMM) [58] and Partial Order Alignment (POA) [72]. We also demonstrate the generality of the proposed framework by extending it to other dynamic programming algorithms, such as Dynamic Time Warping (DTW), which is commonly used for speech recognition [12], as well as the Bellman-Ford (BF) algorithm for shortest-path search in robotic motion planning tasks [51].
In summary, this paper makes the following contributions:
- We propose GenDP, a general-purpose acceleration framework for dynamic programming algorithms.
- We design DPAX, a DP accelerator with programmable compute units, specialized dataflow, and flexible PE interconnections. DPAX supports multiple dependency patterns, objective functions, and multi-precision arithmetic.
- We describe DPMAP, a graph partitioning algorithm, to map the data-flow graph of DP objective functions to the compute units in DPAX.
- We synthesize the design of DPAX in a TSMC 28nm process. DPAX achieves 157.8× throughput per unit area and 15.1× throughput/Watt compared to GPU, and 132.0× throughput per unit area over CPU baselines.
## 2 BACKGROUND
### 2.1 Common Genomics Pipelines
Genome sequencing starts with raw data from the sequencer. The raw signals are interpreted to derive reads (short sequences of base pairs), a process known as basecalling. Next-generation sequencing (NGS) technologies produce short reads of ~100–150 base pairs (bp) [49], while third-generation technologies produce much longer reads (>10,000 bp) [13]. After obtaining reads from raw data, there are two important analysis pipelines: reference-guided assembly and "de novo" assembly (without using a reference genome).
In reference-guided assembly, the sample genome is reconstructed by aligning reads to an existing reference genome. Read alignment can be abstracted to an approximate string matching problem, where dynamic programming algorithms [42] are used to estimate the pairwise similarity between the read and the reference sequence. After the alignment, small variants (mutations) still exist in aligned reads. A Hidden Markov Model (HMM) [58] or machine learning model [46] is then applied to detect such mutations, in a step known as variant calling.
If there is no reference genome available for alignment, the genome sequence needs to be constructed with reads from scratch, which is referred to as "de novo" assembly. Reads with overlapping regions can be chained to build an overlap graph and are then further extended into larger contiguous regions. Finally, assembly errors are corrected in a graph-based dynamic programming polishing step [72].
In addition to the two analysis pipelines above, metagenomics classification, another pipeline, is used for real-time pathogen detection [60] and microbial abundance estimation [41]. Metagenomics classification aligns input microbial reads to a reference pan-genome (consisting of different species) and then estimates the proportion of different microbes in the sample.
### 2.2 Dynamic Programming
Dynamic programming [11] simplifies a problem by breaking it down to subproblems. Following the Bellman equation [37] which describes the objective function, the subproblems can be solved recursively from the initial conditions. Longest common subsequence (LCS) [31] is a classic DP algorithm that involves looking for the LCS of two known sequences $X_m = \{x_0, x_1...x_{m-1}\}$ and $Y_n = \{y_0, y_1...y_{n-1}\}$. First, looking for the LCS between $X_m$ and $Y_n$ can be simplified by looking for LCSs between $X_m$ and $Y_{n-1}$, as well as $X_{m-1}$ and $Y_n$. Each of these two subproblems can be further broken down into computing the results for LCSs between $X_{m-1}$ and $Y_{n-1}$. If we define $c[i, j]$ to be the length of an LCS between the sequence $X_i$ and $Y_j$, the objective function can be represented as shown in Equation 1:
$$
c[i, j] =
\begin{cases}
0 & \text{if } i = 0 \text{ or } j = 0 \\
c[i-1, j-1] + 1 & \text{if } i, j > 0 \text{ and } x_{i-1} = y_{j-1} \\
\max(c[i, j-1], c[i-1, j]) & \text{if } i, j > 0 \text{ and } x_{i-1} \neq y_{j-1}
\end{cases}
\tag{1}
$$
Second, a DP table can be constructed based on the sequences $X_m$ and $Y_n$ to memorize the subproblem results, as shown in Figure 1. $c[i, j]$ is calculated based on its upper, left, and diagonal neighbors $c[i - 1, j]$, $c[i, j - 1]$, and $c[i - 1, j - 1]$. The first row and first column of the DP table are filled with 0. Based on this initial condition, the cells in the whole DP table can be filled out. Finally, the largest value in the table is the length of the longest common subsequence, and the corresponding subsequence can be found by the traceback step, as shown in the orange block chain in Figure 1.
| $x_i \backslash y_j$ | | E | A | B | D | B | D |
|---|---|---|---|---|---|---|---|
| | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| A | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
| B | 0 | 0 | 1 | 2 | 2 | 2 | 2 |
| D | 0 | 0 | 1 | 2 | 3 | 3 | 3 |
| D | 0 | 0 | 1 | 2 | 3 | 3 | 4 |
**Figure 1:** DP Table for Longest Common Subsequence
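The fill-and-traceback procedure of Figure 1 can be sketched in a few lines of Python (a minimal illustration of Equation 1, not code from any production tool):

```python
def lcs(x, y):
    """Fill the DP table of Equation 1, then trace back one LCS."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    # Traceback from the bottom-right corner: follow matches diagonally,
    # otherwise move toward the neighbor holding the larger value.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))
```

For the sequences of Figure 1, `lcs("ABDD", "EABDBD")` returns `(4, "ABDD")`.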
### 2.3 DP Kernels in Genomics Pipelines
We introduce four important and time-consuming DP kernels from commonly used genomics pipelines, as shown in Figure 2. Banded Smith-Waterman (BSW) is applied in read alignment, and variants of BSW are also used for RNA and protein alignment. Pairwise Hidden Markov Model (PairHMM) is used in post-alignment variant calling. Partial Order Alignment (POA) is applied in the polishing step of assembly. Chain is used in both alignment and assembly of long-read sequencing, as well as metagenomics classification. These four kernels spend 31%, 70%, 47% and 75% of the time in their corresponding sequencing pipeline stages, respectively [68]. The details of these algorithms are explained as follows:
**Banded Smith-Waterman (BSW)** is the banded version of the Smith-Waterman [66] algorithm, which estimates the pairwise similarity between the query and reference sequences. The similarity score for a given DNA sequence is typically computed with affine-gap [52] penalties, identifying short insertions and deletions in pairwise alignments. The objective function is shown in Figure 2a, which computes three matrices H, E and F, corresponding to three edit types: match, insertion and deletion. S is a similarity score between the bases $X(i)$ and $Y(j)$. $H(i, j)$ refers to the similarity score for the substrings $X(0, i)$ and $Y(0, j)$. The banded version of Smith-Waterman is applied with a maximum of w insertions or deletions, illustrated as the region between black cells in Figure 2a. BSW can be computed using 8-bit or 16-bit integer arithmetic depending on the sequence length [24].
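As a concrete (if unoptimized) reading of this objective function, the Python sketch below computes the banded local-alignment score with affine gaps; the scoring parameters are illustrative defaults, not the values used by any particular aligner:

```python
def banded_sw_affine(x, y, w, match=2, mismatch=-1, gap_open=2, gap_ext=1):
    """Local alignment score with affine gaps, restricted to a band of
    half-width w around the main diagonal (|i - j| <= w)."""
    NEG = float("-inf")
    m, n = len(x), len(y)
    H = [[0] * (n + 1) for _ in range(m + 1)]
    E = [[NEG] * (n + 1) for _ in range(m + 1)]  # gap in x (insertion)
    F = [[NEG] * (n + 1) for _ in range(m + 1)]  # gap in y (deletion)
    best = 0
    for i in range(1, m + 1):
        # Only cells inside the band are computed.
        for j in range(max(1, i - w), min(n, i + w) + 1):
            s = match if x[i - 1] == y[j - 1] else mismatch
            E[i][j] = max(H[i][j - 1] - gap_open, E[i][j - 1] - gap_ext)
            F[i][j] = max(H[i - 1][j] - gap_open, F[i - 1][j] - gap_ext)
            H[i][j] = max(0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best
```

With identical 4-base sequences and a match score of 2, the best score is 8; a single substitution reduces it by the mismatch penalty.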
**Pairwise Hidden Markov Model (PairHMM)** aligns reads to candidate haplotypes identified by the de Bruijn graph traversal. The most likely haplotype supported by the reads is identified from the pairwise alignment, which is performed by a Hidden Markov Model (HMM). The likelihood score is computed by the formula shown in Figure 2b, where $f^M$, $f^I$ and $f^D$ represent the match, insertion and deletion probabilities for aligning the read substring $X(0, i)$ to the haplotype substring $Y(0, j)$. The weights $\alpha$ are the different transition and emission parameters of the HMM. $\rho$ is the prior probability of emitting bases $X(i)$ and $Y(j)$. The computation in PairHMM uses floating-point arithmetic [77].
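The forward recurrence can be sketched as follows. The transition weights and emission prior used here are made-up placeholders (in GATK they are derived from per-base qualities), so only the structure of the $f^M$, $f^I$, $f^D$ updates reflects the kernel:

```python
def pairhmm_likelihood(read, hap, aMM=0.9, aXM=0.1, aMX=0.05, aXX=0.45,
                       p_match=0.99):
    """Forward pass of a pairwise HMM with match/insert/delete states.

    fM, fI, fD mirror f^M, f^I, f^D in the text; the alpha weights and
    the emission prior (rho) are illustrative placeholders, and the
    I/D transitions are treated symmetrically for brevity.
    """
    m, n = len(read), len(hap)
    fM = [[0.0] * (n + 1) for _ in range(m + 1)]
    fI = [[0.0] * (n + 1) for _ in range(m + 1)]
    fD = [[0.0] * (n + 1) for _ in range(m + 1)]
    for j in range(n + 1):
        fD[0][j] = 1.0 / n          # free haplotype prefix, as in GATK
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            rho = p_match if read[i - 1] == hap[j - 1] else (1 - p_match) / 3
            fM[i][j] = rho * (aMM * fM[i - 1][j - 1]
                              + aXM * (fI[i - 1][j - 1] + fD[i - 1][j - 1]))
            fI[i][j] = aMX * fM[i - 1][j] + aXX * fI[i - 1][j]
            fD[i][j] = aMX * fM[i][j - 1] + aXX * fD[i][j - 1]
    # likelihood: sum over the final row, where the read is fully consumed
    return sum(fM[m][j] + fI[m][j] for j in range(1, n + 1))
```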
**Partial Order Alignment (POA):** In the assembly polishing step, multiple read sequences are used to construct a partial-order graph and the consensus sequence is then generated from the graph. Each unaligned sequence is aligned to the existing graph, as shown in Figure 2c. The nodes in the partial-order graph represent bases in the read sequence, and the weighted edges denote the times that the edges appear in different reads. Each cell not only depends on the upper and diagonal cells in the previous row, but also depends on earlier rows if there is an edge connecting that row with the current row in the graph. The objective function is similar to that used in BSW.
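The graph dependency can be sketched as a DP in which a row may depend on any predecessor row, not only the previous one (the long-range edges described above). Scoring values here are illustrative, not the paper's:

```python
def poa_align(preds, bases, seq, match=2, mismatch=-2, gap=-1):
    """Local alignment of seq against a partial-order graph (sketch).

    preds[i]: predecessor node indices of node i (nodes in topological
    order); bases[i]: the base stored at node i. Each DP row corresponds
    to a graph node, and a cell may read any predecessor's row.
    """
    n = len(seq)
    H = [[0] * (n + 1) for _ in range(len(bases))]
    zero_row = [0] * (n + 1)              # virtual source row
    best = 0
    for i, base in enumerate(bases):
        prows = [H[p] for p in preds[i]] or [zero_row]
        for j in range(1, n + 1):
            s = match if seq[j - 1] == base else mismatch
            cand = max(H[i][j - 1] + gap, 0)           # gap in graph / restart
            for pr in prows:                           # graph predecessors
                cand = max(cand, pr[j - 1] + s, pr[j] + gap)
            H[i][j] = cand
            best = max(best, cand)
    return best
```

For a linear four-node graph this degenerates to ordinary local alignment; branching graphs simply contribute more candidate predecessor rows per cell.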
**Chain:** Given a set of seed pairs (anchors) shared between a pair of reads, Chain aims to group a set of collinear seed pairs into a single overlap region, as shown in Figure 2d(i). In the 1-Dimension DP table (Figure 2d(ii)), each anchor is compared with the N previous anchors (default setting N=25) to determine its best parent. However, the dependency between neighboring anchors poses difficulties for parallelism. The reordered Chain algorithm [28] instead compares each anchor with the N subsequent anchors and updates their scores each time (Figure 2d(iii)).
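The original look-back formulation can be sketched as below. The anchor weight is a toy stand-in for Minimap2's scoring (which also charges a log-scaled gap penalty), so only the bounded look-back structure reflects the kernel:

```python
def chain(anchors, N=25, max_gap=5000):
    """1-D chaining DP: each anchor looks back at up to N predecessors.

    anchors: list of (x, y) positions sorted by x. score[i] is the best
    chain ending at anchor i; parent[i] records the chosen predecessor.
    The unit weight per anchor is an illustrative simplification.
    """
    score = [1] * len(anchors)       # each anchor alone scores 1
    parent = [-1] * len(anchors)
    for i, (xi, yi) in enumerate(anchors):
        for j in range(max(0, i - N), i):      # bounded look-back window
            xj, yj = anchors[j]
            if xj < xi and yj < yi and xi - xj <= max_gap:   # collinear
                cand = score[j] + 1            # extend chain j with anchor i
                if cand > score[i]:
                    score[i], parent[i] = cand, j
    return max(score), parent
```

On the collinear anchors (1,1), (2,2), (3,3) plus an off-diagonal outlier (100,1), the best chain has score 3.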
Table 1 summarizes the characteristics of the four DP kernels above, including the dimension and size of the DP tables, the dependency patterns, and the arithmetic precision. BSW and PairHMM are used in short read pipelines, while POA and Chain are used in long read pipelines with larger DP tables. The first three kernels have 2-Dimension DP tables, whereas Chain has a 1-Dimension DP table. Each of the four kernels has a different precision requirement, as shown in the last column.
Figure 2: Black cells in BSW show the bands that limit the computing regions. Grey cells in all figures show previously computed elements and green ones on the wavefront are cells being computed in parallel. White cells show the untouched cells. Long arrows pointing to green cells show the direction that cells are being computed in different processing units in parallel. Short arrows pointing to blue circles show the dependency patterns. Cells in POA may also include long dependencies from rows other than the previous row shown by orange arrows.
Table 1: Characteristics of DP kernels
| Kernels | Dimension | Dependency | Precision |
|---------|-----------|------------|-----------|
| BSW | 2D Table, ~100 × 60 | Last 2 Wavefronts | 8-bit/16-bit Integer |
| PairHMM | 2D Table, ~100 × 60 | Last 2 Wavefronts | Floating-point |
| POA | 2D Table, ~1000 × 500 | Graph structure, long-range dependency | 32-bit Integer |
| Chain | 1D Table, ~20000 | Last N (~25) Anchors | 32-bit Integer / Floating-point |
Figure 3: GenDP Framework
## 3 GENDP FRAMEWORK
Figure 3 demonstrates the structure of the GenDP framework. For each new DP kernel, we analyze the inter-cell dependency pattern and the intra-cell objective function, as shown in the top and bottom blocks respectively in Figure 3. First, the inter-cell dependency patterns are determined by the recursion rule in the DP kernel. Based on the dependency pattern, we configure the processing element (PE) interconnection and generate the control instructions. Second, the intra-cell data-flow graph for the objective function is mapped to compute units based on the DMap algorithm. The compute instructions are generated based on the mapping results.
Figure 4: Overview of the DPAx Architecture
### 3.1 Inter-Cell Dependency Pattern Supports
An overview of the DPAx architecture is shown in Figure 4, including 16 integer PE arrays and one floating-point PE array. Each PE array contains four PEs connected as a 1-Dimension systolic array, in which each PE can receive data from the previous PE and pass data to the next one. The interconnections of the 16 integer PE arrays are configured based on the dependency pattern of the DP kernel. The last PE in a PE array can either be connected to the first PE in the next PE array or directly to the data buffer. The 16 integer PE arrays can thus be concatenated to form a large systolic array of 64 PEs. Figures 5b and 5d show PE arrays of size 4 and 8 respectively.

For kernels with a 2D DP table, cells within a row are executed in the same PE. For example, rows with different colors in Figure 5 (a) are executed in PEs with the corresponding colors in Figure 5 (b). Each element of the target sequence is stored in a PE statically, while elements of the query sequence are streamed through the PEs in the same array. Each cell depends on results from three neighbors: the result of the left cell in the same row is stored inside the PE, while the results of the upper and diagonal cells in the previous row are transferred from the previous PE. First-in-First-out (FIFO) buffers connect the last and the first PEs in the array, through which the first PE (executing the first row of the next 4-row group) acquires the dependent data from the last PE (executing the last row of the current 4-row group). The PE array size determines how many rows can be executed in parallel.
For kernels with a 1D DP Table, cells are mapped to a large PE array as shown in Figure 5 (d). When multiple small PE arrays are connected together to form a large PE array, only the FIFO in the first PE array is utilized. For example, in the first time step, cells #1~8 in Figure 5 (c) are mapped to PEs as shown in Figure 5 (d). Cells #1~8 all depend on cell #0. Therefore, the value of cell #0 is loaded from the FIFO to each PE sequentially. During the next time step, cells #2~8 move forward to their neighboring PEs. Cell #9 is sent from the input buffer to the first PE, while cell #1 is moved out from the last PE. Meanwhile, cell #1 is loaded from the FIFO to each PE because cells #2~9 all depend on cell #1.
Long-range dependencies in the graph structure are supported by scratchpad memories (SPM) inside each PE. In a 2D DP table, usually only the result of the left cell is stored in the PE. However, if a cell depends on other cells in the same row that are far away, all of the candidate dependent cells need to be stored in the PE. In kernels with long-range dependencies, the result of each cell is therefore not only kept in registers for reuse by the next cell, but also stored in the SPM for potential reuse by later cells.
### 3.2 Intra-Cell Objective Function Mapping
In order to support DP objective functions efficiently in the PE, we design a multi-level ALU array in the compute unit. This multi-level ALU array matches common compute patterns in genomics DP kernels and also reduces register access pressure. The computations in the objective function are represented by data-flow graphs. DMap partitions the data-flow graph into multiple subgraphs, which can then be mapped to the ALU arrays in the compute units. The partition rules, constrained by the structure of the compute unit, are illustrated in Section 5.
## 4 DPAX ARCHITECTURE
### 4.1 Processing Element Array
An overview of the DPAx architecture is shown in Figure 4 and discussed in the previous section. Figure 6 shows the architecture of the PE array and PE. The systolic array architecture simplifies control by allowing data-flow between neighboring PEs. However, the systolic data path alone cannot satisfy the requirements of various DP kernels. For example, POA has long-range dependencies and its dependency pattern is determined by the graph structure. The movement for such dependency requires branch instructions. Therefore, DPAx decouples the computation and control architecture to provide flexible data movements similar to decoupled access execute architectures [65]. Meanwhile, the parallelism and the massive reduction tree pattern observed in genomics DP kernels (Section 4.3) motivate the VLIW architecture.
The PE array consists of the input and output data buffers, control instruction buffer, decoder, first-in-first-out (FIFO) buffer and four PEs. PEs are connected as a systolic array and data can be passed from one PE to the next. The FIFO connects the last and first PEs. The first PE is connected to the input data buffer to receive the input sequences. The last PE in the PE array is also equipped with a dedicated port connected to the first PE in the next PE array to build a larger PE group. In Figure 6, blue solid and orange dotted lines show the data and control flow respectively.
### 4.2 Processing Element
Each processing element (PE) is capable of running a control and a compute thread in parallel. Control and compute instructions are stored in two instruction buffers and decoded separately. Each PE contains a register file (RF) and a scratchpad memory (SPM) that store the short and long range dependencies respectively. Load and store ports are connected to the previous and next PEs. The first and last PEs are also connected to the FIFO and Input/Output Data Buffers.
Each PE is a 2-way VLIW compute unit array that can execute two independent compute instructions in parallel to exploit the instruction-level parallelism (ILP). Every PE contains two 32-bit compute units which execute the VLIW instructions. Each compute unit (CU) can either execute operations on 32-bit or four concurrent 8-bit groups of operands as a SIMD unit to make use of data-level parallelism (DLP). The SIMD unit improves the performance of low-precision kernels, e.g., BSW, where four DP tables are mapped to four SIMD lanes. The floating point PE array and PE architecture is similar to the integer one, but only supports 32-bit FP operands.
### 4.3 Compute Unit Design Choice
We observe that genomics DP kernels share a common reduction tree pattern, as shown in Figure 7 (a) and (b). Thus, we propose a reduction tree architecture for the compute unit (CU). The outputs of the first-level ALUs are used as inputs to the next-level ALUs. The CU also contains a multiplication module for the weight calculation in the Chain kernel. Since multiplication lengthens the CU critical path, we design it as a separate unit from the ALU reduction tree.

Figure 7 (c), (d) and (e) show three possible choices for the ALU reduction tree. The more levels the ALU reduction tree has, the fewer times the CUs need to access the register file. Compared to a 1-level reduction tree, the 2-level design requires fewer register file accesses and strikes a better balance between critical path and area.
Table 2 compares ALU reduction trees of 1, 2 and 3 levels, which come with 1, 3, and 7 ALUs respectively. “RF Accesses” shows the number of accesses to each RF in a single cell of the DP table. “CU Utilization” is calculated as the percentage of cycles during which each ALU is utilized in the single-cell computation. The 3-level ALU reduction tree best reduces register file accesses, but lowers the CU utilization as well. It uses more than twice as many ALUs as the 2-level tree, but hardly reduces the number of RF accesses. Thus, we pick a 2-level reduction tree for the CU design.
### 4.4 Execution Model and GenDP ISA
We adopt the following execution model for GenDP. Instructions are preloaded to the accelerator before starting a DP kernel. Each PE array runs one thread of execution, controlling the data movement between data buffers and PEs, as well as the start of the execution for each PE. Upon receiving the start flag from the PE array, each PE runs two threads of execution: control and compute. The control thread manages data movement between the SPM, register file, and the systolic data-flow between neighboring PEs. This thread also controls the start of the compute thread. The compute thread executes a compute instruction by decoding instructions, loading the data from the register file, executing computations in the CU array, and finally writing results back to the register file.
**Table 2: ALU Reduction Trees with Different Levels**
| Kernel | Level of ALU Reduction Tree | RF Accesses | CU Utilization |
|----------|-----------------------------|-------------|----------------|
| BSW | 1 | 20 | 100% |
| | 2 | 11 | 60.6% |
| | 3 | 10 | 28.6% |
| PairHMM | 1 | 32 | 96.9% |
| | 2 | 16 | 64.6% |
| | 3 | 11 | 40.3% |
| POA | 1 | 56 | 85.7% |
| | 2 | 56 | 28.5% |
| | 3 | 54 | 12.7% |
| Chain | 1 | 24 | 95.8% |
| | 2 | 20 | 38.3% |
| | 3 | 20 | 16.4% |
The control ISA is shown in Table 3 and applies to the control instructions in both the PE array and the PEs. Arithmetic instructions manipulate the *address registers* within the decoders. `mv` instructions either load immediate values or move data between memory components. The `branch` instruction enables loop iteration. `set` instructions start the execution of a subsidiary component (e.g., PE arrays control PEs, and PEs control CUs). Figure 8 shows an example of data movement in which two control instructions (one in each PE) are executed. PE[i-1] moves data from 0x00ff in the register file to its out_port, while PE[i] moves data from its in_port to 0x00ff in the scratchpad memory. The delays of the movements in both PEs are accounted for in the critical path, and the movement completes in one cycle. The control instructions are generated manually in this work.

The 2-way VLIW compute instructions are executed by two compute units, each instruction containing 3 operations (for the 3 ALUs of the 2-level reduction tree in Figure 7 (d)) and 6 operands (4 for the top-left ALU and 2 for the right one). The operations available in compute instructions are listed in Table 4. The compute instructions are generated by the DMap algorithm described in Section 5.
**Table 4: Compute instruction operation**
| Operation | Action |
|-----------------|------------------------------------------------------------------------|
| Addition | \( out = in[0] + in[1] \) |
| Subtraction | \( out = in[0] - in[1] \) |
| Multiplication | \( out = in[0] \times in[1] \) |
| Carry | \( out = carry(in[0], in[1]) \) |
| Borrow | \( out = in[0] < in[1] ? 1 : 0 \) |
| Maximum | \( out = max(in[0], in[1]) \) |
| Minimum | \( out = min(in[0], in[1]) \) |
| Left-shift 16-bit| \( out = in[0] \ll 16 \) |
| Right-shift 16-bit| \( out = in[0] \gg 16 \) |
| Copy | \( out = in[0] \) |
| Match Score | \( out = scoretable(in[0], in[1]) \) |
| Log2 LUT | \( out = log2(in[0]) \gg 1 \) |
| Log\_sum LUT | \( out = log\_sum(in[0]) \) |
| Comparison > | \( out = in[0] > in[1] ? in[2] : in[3] \) |
| Comparison == | \( out = in[0] == in[1] ? in[2] : in[3] \) |
| No-op | Invalid |
| Halt | Stop Computation |
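Functionally, one CU slot of such an instruction can be modeled as below. The tree wiring (a 4-input first-level ALU plus a 2-input first-level ALU feeding the second-level ALU) is our reading of Figure 7 (d), and only a few Table 4 opcodes are shown:

```python
# Functional model of one compute unit slot: a 4-input first-level ALU and
# a 2-input first-level ALU feed one second-level ALU. Opcode names follow
# Table 4; the wiring is our reading of Figure 7 (d), not a hardware spec.
OPS = {
    "add": lambda a: a[0] + a[1],
    "sub": lambda a: a[0] - a[1],
    "max": lambda a: max(a[0], a[1]),
    "min": lambda a: min(a[0], a[1]),
    "cmp_gt": lambda a: a[2] if a[0] > a[1] else a[3],   # 4-input ALU only
    "copy": lambda a: a[0],
}

def compute_unit(op4, op2, op_top, operands):
    """Execute one 3-operation CU slot on 6 operands.

    operands[0:4] feed the 4-input ALU, operands[4:6] feed the 2-input
    ALU, and both results feed the top ALU; only the final result goes
    back to the register file, which is the RF-access saving of the tree.
    """
    left = OPS[op4](operands[0:4])
    right = OPS[op2](operands[4:6])
    return OPS[op_top]([left, right])
```

For example, `compute_unit("cmp_gt", "sub", "max", [5, 3, 9, 0, 7, 2])` selects 9 on the left (5 > 3), computes 7 - 2 = 5 on the right, and returns their maximum, 9.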
## 5 DMAP ALGORITHM
The DP objective function is represented as a data-flow graph (DFG). The DMap algorithm generates compute instructions by mapping the DFG to compute units in the PE. In the DFG, a node represents an operator, while an edge shows the dependency between operators. The DFG has \(|V|\) nodes \(V = \{v_0...v_{|V|-1}\}\) and \(|E|\) edges \(E = \{e_0...e_{|E|-1}\}\). In edge \(e_i = (v_m, v_n)\), the operator in node \(v_n\) takes the result of \(v_m\) as an operand. We define node \(v_m\) as the parent of node \(v_n\), and \(v_n\) as the child of \(v_m\). Figure 9(a) shows the DFG of the BSW kernel.
DMap breaks the entire graph into subgraphs that contain either one multiplication or three ALU nodes (Figure 7(d)). The edges within the subgraphs represent the data movements within the compute units (CU). The edges between subgraphs represent accesses to the register file and are removed by DMap in three steps. First, **Partitioning** extracts nodes that will be mapped to 4-input ALUs and multipliers, because a CU supports at most one such operation. Second, **Seeding** looks for nodes that could be mapped to the second level of the ALU reduction tree. Nodes with more than one parent or more than one child are selected as seeds. The seed and its parents are mapped to a CU together. After seeding, the remaining nodes have a single parent or a single child. Third, **Refinement** maps every two remaining nodes to the 2-level ALU tree in a CU. Figure 9 shows an example of the DMap algorithm. Four subfigures represent the original graph and the three steps in DMap separately. Dashed blocks represent final subgraphs.
**Algorithm 1 Partitioning**
```plaintext
1: for \(v_i \in V\) do // Traverse the DFG
2:   if \(opcode[v_i] = Multiplication\) then
3:     Remove input and output edges of node \(v_i\)
4:   end if
5:   if \(opcode[v_i] = Comparison/MatchScore\) then
6:     Remove input edges of node \(v_i\)
7:     if node \(v_i\) has more than one child then
8:       for \(v_j \in\) children of node \(v_i\) do
9:         if \(opcode[v_j] = Subtraction\) then
10:          Remove output edge of node \(v_i\)
11:        else
12:          Replicate node \(v_i\)
13:        end if
14:      end for
15:    end if
16:  end if
17: end for
```
**Partitioning**: Algorithm 1 breaks both input and output edges connected to nodes of 4-input ALUs and Multipliers. All parent and child edges of the multiplication nodes are removed (lines 2-4). DMap also removes the parent edges of 4-input operations (line 6). For a 4-input node that has two children, we replicate it if the operations of its children are commutative (except Subtraction) in order to decrease register file accesses (lines 8-14). After partitioning, all nodes have at most two parents.
**Algorithm 2 Seeding**
```plaintext
1: for \(v_i \in V\) do // Traverse the DFG
2:   if node \(v_i\) (seed) has two parent nodes then
3:     Remove output edges of node \(v_i\)
4:     for \(v_j \in\) parents of node \(v_i\) do
5:       Remove input edges of node \(v_j\)
6:     end for
7:   end if
8:   if node \(v_i\) (seed) has more than one child then
9:     Remove output edges of node \(v_i\)
10:  end if
11: end for
```
**Seeding**: In Algorithm 2, we look for nodes that are suitable for the second level of the ALU reduction tree and name them seeds. Nodes that have two parent nodes are located to fit the structure of the ALU reduction tree (line 2). The output edges of seeds are removed because the output of this operator will be stored to the register file (line 3). In addition, since input operands of the seed’s parents must be fetched from the register file, DMap also removes the input edges of the seed’s parent nodes (lines 4-7). Finally, we
remove the output edges of all nodes with more than one child, because their outputs have to be stored in the register file (lines 9-11). At this point, all nodes have at most one parent or one child.
**Refinement:** Algorithm 3 traverses the DFG in reverse order. If a node's parent itself has a parent, i.e., the node has a grandparent (line 3), the edge connecting its parent and grandparent is removed to group every two nodes (line 4). In the end, all subgraphs can be mapped to compute units in the PE.
**Algorithm 3 Refinement**
```
1: for $v_i \in \{v_{|V|-1}...v_0\}$ do // Traverse in reverse order
2: for $v_j \in$ parents of node $v_i$ do
3: if node $v_j$ has a parent node then
4: Remove input edge of node $v_j$
5: end if
6: end for
7: end for
```
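A direct transcription of Algorithm 3, with the DFG held as a plain edge set (a sketch, not GenDP's internal representation):

```python
def refine(num_nodes, edges):
    """Algorithm 3: traverse nodes in reverse order and cut the edge
    between a node's parent and grandparent, so each remaining connected
    pair of nodes maps to the 2-level ALU tree of one compute unit.

    edges: set of (m, n) pairs, meaning node n consumes node m's result.
    Nodes are integers 0..num_nodes-1 in topological order.
    """
    edges = set(edges)

    def parents(v):
        return [m for (m, n) in edges if n == v]

    for v_i in range(num_nodes - 1, -1, -1):     # reverse traversal
        for v_j in parents(v_i):
            for g in parents(v_j):               # v_j has a parent
                edges.discard((g, v_j))          # remove input edge of v_j
    return edges
```

On a four-node chain 0→1→2→3, refinement cuts the middle edge, leaving the pairs {0, 1} and {2, 3}, each of which fits one compute unit.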
## 6 EVALUATION METHODOLOGY
We synthesize the DPAx accelerator using the Synopsys Design Compiler in a TSMC 28nm process. We use a cycle-accurate simulator to measure the throughput of the DPAx accelerator on the 4 DP kernels introduced in Section 2.3. The BSW, PairHMM and POA simulations produce the same results as the CPU baselines. The Chain simulation implements the reordered algorithm, and its accuracy is compared with the CPU baseline in Table 6. We use Ramulator [36] to generate DRAM configurations and DRAMPower [4] to measure power from DRAM access traces. The baseline CPU and GPU configurations are shown in Table 5. All CPU baselines utilize SIMD optimizations with AVX512. The CPU die area is estimated at around 600 mm² [6]. We evaluate the GPU baselines on the Google Cloud Platform. The benchmark configuration for each DP kernel is detailed as follows.
**Banded Smith-Waterman (BSW):** BSW is evaluated on two million seed extension pairs with four 8-bit SIMD lanes on the DPAx accelerator. The dataset is obtained from the inputs to the Smith-Waterman function in BWA-MEM2 [73] using reads from the NA12878 human genome sample ERR194147, an Illumina genomics dataset consisting of short reads of 101 bp. We choose the 8-bit optimized SIMD implementation in BWA-MEM2 as the CPU baseline, and BSW implementation in GASAL2 [9] as the GPU baseline. We also compare GenDP with GenAx [24], an ASIC baseline.
**Pairwise Hidden Markov Model (PairHMM):** We evaluate read-haplotype pair inputs obtained from the calcLikelihoodScore function in GATK Haplotype Caller, with BWA-MEM aligned reads for human chromosome 22 as inputs. The CPU baseline is the optimized SIMD implementation in GATK Haplotype Caller [58]. We choose the implementation in [16] as the GPU baseline. A pruning-based implementation [77] is used for both the ASIC baseline and GenDP. GenDP evaluates the scan phase in a pruning-based implementation which accounts for 97.7% of the workload. The other 2.3% of the workload is a re-computation step which is performed on the CPU. The measured performance results include time spent in re-computation on the CPU host.
**Partial Order Alignment (POA):** POA is evaluated by 6217 consensus tasks obtained when polishing the Flye-assembled Staphylococcus aureus genome with Minimap2-aligned ONT long reads [75]. The CPU baseline is the SIMD accelerated implementation in Racon [72] and the GPU baseline is in [5]. GenDP supports long-range dependencies of at most 128 cells away in each row of the DP table. A few ultra-long dependency (>128) cases (caused by a very long
deletion in the last few input reads in a read group) account for 2.4% of the workload, and are performed on the host CPU.
**Chain:** We evaluate the Chain kernel using 10K reads from PacBio SMRT sequencing data of the *C. elegans* worm [3, 28] when computing overlaps with itself. We choose the SIMD optimized implementation in [35] as the CPU baseline and implementation in [28] as the GPU baseline. The GPU baseline and GenDP both apply the reordered Chain (Section 2.3) with N=64 in order to best utilize the parallelism and avoid large overhead of branches, thus computing 3.72× more cells than the CPU baseline. We penalize the measured GPU and GenDP throughput results by 3.72× to normalize them with the original CPU implementation. Our profiling results show that the reordered Chain has comparable accuracy with original Minimap2 when mapping PBSIM2 [55] simulated long reads to human genome reference T2T-CHM13 [54], as shown in Table 6.
**Table 6: Chain Accuracy Comparison**
| | Minimap2 | Reordered Chain (N=64) |
|------------------------|----------|------------------------|
| Map failure or error | 0.2476% | 0.2479% |
| Phred quality score of low-quality maps ($Q < 10$) | 54.36 | 54.14 |
## 7 RESULTS
### 7.1 DPAx Area and Power
Table 7 shows the breakdown of area and power for the DPAx ASIC under a TSMC 28nm process. DPAx consumes 5.4mm² in area. Within a PE, 30% of the area is taken by the register file, 22% is taken by the compute unit array, and 16% is taken by the two decoders. The other 32% of total area is consumed by SRAM, including instruction buffers and SPM. Table 8 shows the power breakdown of DPAx and DRAM in 28nm. DRAM power is averaged across the 4 kernels and DPAx power is the peak power of the ASIC.
**Table 7: Breakdown of Area and Power of DPAx ASIC**
| Components | Area (mm²) | Power (W) |
|-----------------------------|------------|-----------|
| **Logic** | | |
| Compute Unit Array | 0.012 | 0.007 |
| Decoder | 0.008 | 0.004 |
| Register File | 0.015 | 0.009 |
| Integer PE | 0.035 | 0.020 |
| 1×4 Integer PE Array | 0.149 | 0.081 |
| 16×4 Integer PE Array | 2.381 | 1.307 |
| Floating Point (FP) PE | 0.047 | 0.019 |
| 1×4 FP PE Array | 0.196 | 0.080 |
| **Sub Total** | 2.577 | 1.387 |
| **Memory** | | |
| Data Buffer (200KB) | 0.424 | 0.273 |
| Instruction Buffer (208KB) | 1.222 | 1.385 |
| Scratchpad (136KB) | 0.351 | 0.217 |
| FIFO (276KB) | 0.819 | 0.306 |
| **Sub Total** | 2.845 | 2.182 |
| **Total** | 5.391 | 3.569 |
**Table 8: Breakdown of DPAx Power**
| | Static (W) | Dynamic (W) | Total (W) |
|----------------------|------------|-------------|-----------|
| DPAx | 1.456 | 2.113 | 3.569 |
| DRAM | 0.446 | 0.645 | 1.091 |
| **Total** | 1.902 | 2.758 | 4.660 |
### 7.2 GenDP Performance
We use throughput per unit area, measured in Million Cell Updates per Second per mm² (MCUPS/mm²), as the performance metric. The area and power of the CPU, GenDP and custom accelerators are scaled [67] to a 7nm process for a fair comparison with the GPU. GenDP is expected to run at 2 GHz. Figure 10(a) shows the throughput/mm² comparison across the four DP benchmarks. Overall, GenDP achieves a 132.0× speedup over the CPU and 157.8× over the GPU. Figure 10(b) shows the throughput/Watt comparison between GenDP and the GPU. The large speedup can be attributed to the GenDP ISA, the specialized dataflow and the on-chip memory hierarchy tailored for dynamic programming.
Both the BWA-MEM2 CPU baseline and GenDP benefit from 8-bit SIMD optimizations for the BSW kernel. With AVX512, BWA-MEM2 has 64 SIMD lanes, and GenDP has 4 SIMD lanes. The PairHMM baseline uses floating point, whereas GenDP applies the pruning-based implementation, which uses logarithms and fixed-point numbers to approximate the computation and reduce complexity. The bottleneck of POA performance on GenDP is memory access. First, POA has the graph dependency pattern, which is more complex than the other kernels; the dependency information needs to be loaded from the input data buffer into each PE. Second, the downstream trace-back functions in POA need the move directions on the DP table for each cell, which requires 8-byte outputs to be written from each cell to the output data buffer. Both the input of the dependency information and the output of the move directions consume extra data movement instructions that limit POA performance on GenDP. In the Chain kernel, the throughputs of both the GPU and GenDP are penalized by 3.72× for the extra cell computation.
### 7.3 Comparison with Accelerators
The GenDP framework’s goal is to build a versatile dynamic programming accelerator that supports a wide range of genomics applications. It thus sacrifices some performance for programmability and support for a broader set of kernels. A key research question is how much performance is sacrificed for this generality. Figure 10(c) shows the performance of GenDP compared to available custom genomics ASIC accelerators: the GenAx [24] accelerator for BSW and the pruning-based PairHMM accelerator [77]. We observe a geometric mean of 2.8× slowdown. This can be attributed largely to area overheads, custom datapaths for cell score computation, custom data-flow, and custom precision. For example, 37.5% of the register file and 40% of the SRAM are utilized only by POA and are idle in the other kernels, because POA is significantly more complex than the other three. A custom data-flow could fix the data bus width between neighboring PEs and propagate all the data in a single cycle, whereas GenDP needs control instructions to move data between neighboring PEs because of its varied data movement requirements. An accelerator for one specific kernel can implement a single appropriate precision to save area. For instance, the pruning-based PairHMM ASIC utilizes 20-bit fixed-point data, which satisfies the compute requirements, but GenDP has no such custom precision choice.
In addition to custom genomics ASIC accelerators, we also compare GenDP with other data-flow and spatial architectures. SoftBrain [53] is a stream data-flow accelerator, which utilizes a data-flow graph for repeated and pipelined computation, as well as
stream-based commands for efficient data movements. For the 4 DP kernels discussed in Section 6, GenDP is more area-efficient and has 2.12× area-normalized speedup over SoftBrain. Table 9 shows the padding overhead and SIMD utilization that limit its performance. In SoftBrain, we introduce padding to remove data hazards between pipeline stages in kernels that use 2D DP tables. SIMD utilization in DP kernels depends on both the number of SIMD lanes as well as the length of input sequences. In addition, kernels with a graph structure like POA gain little from the SIMD parallelism because the number of edges connected to each node varies in the graph. Intra-cell pipelining is not well suited for POA because the intra-cell iterative blocks are sequentially computed and dependent on each other, leading to intra-cell pipeline hazards. Inter-cell pipelining also provides limited benefits because there is a variable number of block iterations within each cell (determined by the number of edges connected to the current node).
**Table 9: Benchmark implementation on SoftBrain**
| Kernel | Dimension | Pipe. Stages | Padding Overhead | SIMD Lanes(Util.) | GenDP Speedup |
|------------|-----------|--------------|------------------|-------------------|---------------|
| BSW | 2D | 3 | 9.9% | 8(42.2%) | 2.24x |
| PairHMM | 2D | 4 | 15.7% | 2(95.9%) | 1.13x |
| POA | Graph | 1 | 0 | 1(100%) | 10.74x |
| Chain | 1D | 10 | 0 | 2(73%) | 0.75x |
Triggered instruction architecture (TIA) [56] eliminates the program counter and branch instructions by predication, and exploits the locality in the spatial algorithms by using a PE array with mesh topology. However, the area and timing complexity of the trigger scheduler imposes a restriction on the number of triggered instructions (TI). For example, an implementation of edit distance scoring-based DP on top of TIA requires two PEs for 11 TIs [69]. A similar mapping strategy used on the DP kernels in Section 6 requires multiple PEs to compute the objective function in a single DP cell, limiting the benefits obtained from a spatial architecture like TIA. The number of TIs and PEs required by objective functions in each DP kernel is listed in Table 10.
**Table 10: Triggered Instruction (TI) Required on TIA**
| Kernel | BSW | PairHMM | POA | Chain |
|----------|-----|---------|-----|-------|
| **Number of TIs required** | 30 | 45 | 90 | 47 |
| **Number of PEs required** | 5 | 8 | 16 | 8 |
In summary, GenDP balances well the specialization and generality trade-off for dynamic programming acceleration. Custom ASIC accelerators achieve better performance but are less programmable. SoftBrain, with reconfigurable networks, cannot efficiently support the DP kernels described in this work because of parallelism overhead and pipeline hazards. TIA reduces control instructions and exploits the spatial locality but is not well suited for compute-intensive DP kernels with complex scoring schemes.
### 7.4 ISA Analysis
GenDP has a more efficient ISA for DP algorithms than general-purpose processors. We compare the number of compute instructions required per cell update in the GenDP ISA to the riscv64 and x86-64 ISAs. The riscv64 and x86-64 instruction counts are obtained using the `riscv64-unknown-elf-g++` and `g++` compilers respectively. Across the four kernels, the instruction counts on GenDP are reduced by 8.1× and 4.0× on average compared with riscv64 and x86-64, as shown in Figure 10(d). The efficiency of GenDP instructions is affected by compute unit utilization, as shown in Table 2.
The GenDP ISA has several advantages: *First*, GenDP applies the VLIW architecture, where one instruction contains opcodes for the 6 ALUs in the compute unit (CU) array. The ALU reduction tree in the CU fits the compute characteristics of DP kernels well. GenDP achieves an average 48% VLIW utilization across the 4 kernels, shown in Table 11. The multiplication and conditional operations in Chain and POA can only be mapped to the 4-input ALUs in DPAx, which limits the VLIW utilization. In addition, during POA execution, 14.3% of the CUs are idle because of POA's complex dependency pattern. *Second*, GenDP's ISA includes several custom operations such as Comparison, Max/Min and Lookup Table (LUT). In Chain, GenDP uses a single special instruction for the LUT; in comparison, riscv64 and x86-64 need 14 and 7 instructions respectively for this LUT implementation. *Third*, the systolic array architecture provides spatial locality and saves many register-to-register operations. Meanwhile, DPAx has a memory hierarchy design with FIFO and scratchpad memory, which caches intermediate values for inter-cell communication and reduces memory accesses to DRAM.
**Table 11: VLIW Utilization**
| Kernel | BSW | PairHMM | Chain | POA |
|----------|-----|---------|-------|-----|
| **Utilization** | 60.6% | 64.6% | 38.3% | 28.5% |
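As a concrete illustration of the VLIW idea above, the toy model below bundles one opcode per ALU into a single instruction over a small reduction tree. The 3-2-1 tree shape, operation set, and operand wiring are illustrative assumptions, not the exact DPax datapath.

```python
# Toy model of a VLIW compute unit built as an ALU reduction tree.
# One "instruction" carries one opcode per ALU (6 ALUs: 3 + 2 + 1).
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "max": max,
    "min": min,
}

def execute_vliw(ops, x):
    """ops: 6 opcodes, one per ALU; x: 7 input operands."""
    # Level 1: three 2-input ALUs.
    a = OPS[ops[0]](x[0], x[1])
    b = OPS[ops[1]](x[2], x[3])
    c = OPS[ops[2]](x[4], x[5])
    # Level 2: two ALUs reduce the intermediates (one takes a 7th operand).
    d = OPS[ops[3]](a, b)
    e = OPS[ops[4]](c, x[6])
    # Level 3: the final ALU produces the cell value.
    return OPS[ops[5]](d, e)

# One instruction evaluates a Smith-Waterman-style cell:
# H = max(H_diag + s, H_left + g, H_up + g, 0)
sw_cell = ("add", "add", "add", "max", "max", "max")
assert execute_vliw(sw_cell, (5, 2, 3, -1, 4, -1, 0)) == 7
```

A scalar ISA would spend a separate instruction on each add and max here, which is the gap the per-cell-update instruction counts in Figure 10(d) measure.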
### 7.5 Scalability
With 8-channel DDR4-2400 DRAM (153.2 GB/s peak bandwidth), GenDP can scale up to 64 DPax tiles and achieve a 6.17× raw performance speedup over the GPU baseline, as shown in Table 12. The area of GenDP is scaled [67] to 7nm for a fair comparison with the GPU baseline.
### 7.6 Generality and Limitation
Table 12: GenDP and GPU Raw Performance Comparison

| | Area($mm^2$) | Raw Perf.(GCUPS) | Speedup |
|---------------------|--------------|------------------|---------|
| NVIDIA A100 GPU | 826.0 | 48.3 | 1 |
| GenDP (64 tiles) | 44.3 | 297.5 | 6.17x |

In addition to the four DP kernels from commonly used sequencing pipelines, the GenDP framework also supports other DP algorithms, both in genomics and in broader fields. This section discusses the generality and limitations of GenDP.
7.6.1 Dependency range. DP algorithms can be categorized into near-range (e.g., neighboring dependency patterns), limited long-range (e.g., dependency distance within 128), and ultra long-range (e.g., dependency distance longer than 128). GenDP can efficiently support near-range and limited long-range dependencies through its fine-grained spatial locality design, such as the systolic array and the scratchpad memory in the PE. GenDP also supports ultra long-range dependencies, but must access those data through DRAM because the on-chip buffer is not large enough. However, ultra long-range dependencies are usually rare; for example, only 2.4% of the POA workload has dependency distances longer than 128, and this portion is performed on the host CPU in simulation.
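The routing decision described above can be sketched as a small dispatch function. The threshold of 128 comes from the text; the tier names and the cell-index distance metric are illustrative assumptions.

```python
# Route a DP dependency to a storage tier based on its distance,
# mirroring the near / limited long-range / ultra long-range split.
NEAR, ON_CHIP, DRAM = "systolic-neighbor", "scratchpad", "dram"

def route_dependency(producer, consumer, on_chip_limit=128):
    distance = abs(consumer - producer)
    if distance <= 1:
        return NEAR      # neighboring cells flow through the PE array
    if distance <= on_chip_limit:
        return ON_CHIP   # limited long-range: held in scratchpad/FIFO
    return DRAM          # ultra long-range: spilled to DRAM (rare)

assert route_dependency(10, 11) == NEAR
assert route_dependency(0, 100) == ON_CHIP
assert route_dependency(0, 500) == DRAM
```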
7.6.2 Active region. GenDP requires the active regions in the DP table to be specified before computation starts. For example, GenDP supports a static band choice in the DP table but does not support adaptive or dynamic band choices. In these cases, GenDP can choose a larger tiled static region that covers the adaptive bands, at the cost of some performance.
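To make the trade-off concrete, the sketch below picks a static diagonal band that covers every cell an adaptive-banded run would touch. The helper and its diagonal-offset representation are illustrative assumptions, not GenDP's actual region specification.

```python
def static_band_cover(active_cells):
    """Return (lo, hi) so that every cell with lo <= col - row <= hi
    lies inside a static diagonal band covering active_cells."""
    offsets = [c - r for r, c in active_cells]
    return min(offsets), max(offsets)

# An adaptive band drifting off the main diagonal:
cells = [(0, 0), (1, 2), (2, 4), (3, 5)]
assert static_band_cover(cells) == (0, 2)
# The static band is wider than the adaptive band at any single row;
# that extra width is the performance cost mentioned above.
```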
7.6.3 Objective function. The GenDP ISA supports most computations in commonly used genomics pipelines, including local, global, and semi-global approximate string matching as well as the linear, affine, and convex scoring modes mentioned in Section 1. It also supports DP algorithms in other fields such as speech detection and robot motion planning.
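For reference, the affine scoring mode named above corresponds to the classic Smith-Waterman recurrence with Gotoh's separate gap matrices. The plain-Python sketch below uses illustrative score values and is not GenDP code.

```python
# Local alignment with affine gap penalties (Smith-Waterman-Gotoh).
def sw_affine(a, b, match=2, mismatch=-1, gap_open=-3, gap_extend=-1):
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]  # best score ending at (i, j)
    E = [[0] * (m + 1) for _ in range(n + 1)]  # gap in a (horizontal)
    F = [[0] * (m + 1) for _ in range(n + 1)]  # gap in b (vertical)
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(E[i][j-1] + gap_extend, H[i][j-1] + gap_open)
            F[i][j] = max(F[i-1][j] + gap_extend, H[i-1][j] + gap_open)
            s = match if a[i-1] == b[j-1] else mismatch
            H[i][j] = max(0, H[i-1][j-1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best

assert sw_affine("ACGT", "ACGT") == 8  # 4 matches * 2
assert sw_affine("ACGT", "AGT") == 4   # best local hit is "GT" ~ "GT"
```

The linear mode drops the separate E/F matrices, and the convex mode replaces the per-step gap extension with a concave gap-length function; all three share the same cell-update structure.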
7.6.4 Multi-precision arithmetic. DP kernels use computations of different precisions. For example, BSW can be computed in 8-bit or 16-bit precision depending on the sequence length, computations in POA and Chain use 32-bit integers, and PairHMM requires both integer and floating-point computation. DPax has both integer and floating-point PEs. The integer PEs support 32-bit and 8-bit integer arithmetic, and also support 64-bit and 16-bit basic operations such as addition, subtraction, and multiplication by using two parallel compute units.
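The "two parallel compute units" idea can be illustrated with a 64-bit addition built from two 32-bit additions plus carry propagation; this shows only the principle, not the actual DPax carry network.

```python
MASK32 = (1 << 32) - 1

def add64_from_32(a, b):
    lo = (a & MASK32) + (b & MASK32)        # low 32-bit compute unit
    carry = lo >> 32                        # carry handed to the high unit
    hi = (a >> 32) + (b >> 32) + carry      # high 32-bit compute unit
    return ((hi << 32) | (lo & MASK32)) & ((1 << 64) - 1)

assert add64_from_32(0xFFFFFFFF, 1) == 0x100000000  # carry crosses units
assert add64_from_32(2**63, 2**63) == 0             # wraps modulo 2**64
```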
**Figure 11: GenDP Instruction and Performance on DTW and BF Benchmarks**
7.6.5 Broader fields. Dynamic Time Warping (DTW) measures the similarity between two temporal sequences and is used for nanopore signal basecalling [23] and speech detection [12]. DTW has a near-range dependency pattern similar to that of Smith-Waterman. Bellman-Ford (BF), a shortest-path search algorithm, is commonly used in robot motion planning applications. BF has a graph-based dependency pattern in which long-range dependencies within a certain distance can be efficiently supported by GenDP, while ultra long-range dependencies require accesses through DRAM. GenDP supports the objective functions of both DTW and BF. Their performance and instruction comparisons with GPU implementations [1, 64] are shown in Figure 11.
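As a small illustration of why DTW fits the same hardware, its recurrence reads only the diagonal, left, and up neighbors, just like Smith-Waterman. The sketch below is textbook DTW, not GenDP's mapped kernel.

```python
# Dynamic Time Warping: each cell depends on (i-1,j-1), (i-1,j), (i,j-1),
# the same near-range pattern as sequence alignment.
def dtw(x, y):
    inf = float("inf")
    n, m = len(x), len(y)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i-1] - y[j-1])
            D[i][j] = cost + min(D[i-1][j-1], D[i-1][j], D[i][j-1])
    return D[n][m]

assert dtw([1, 2, 3], [1, 2, 3]) == 0
assert dtw([1, 2, 3], [1, 2, 2, 3]) == 0   # warping absorbs the repeat
```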
## 8 RELATED WORK
**Dynamic Programming Accelerators in Genomics:** Many custom genomics accelerators have been proposed to boost the performance of DP kernels in genomics pipelines, significantly improving performance over commodity hardware. However, each of these accelerators supports only a single genomics kernel, so they must be customized and combined to support the different stages of genomics pipelines, increasing both design cost and complexity. For example, [18–20, 24, 25, 45, 47, 70] are customized for read alignment and SquiggleFilter [23] is optimized for basecalling. GenASM [14] converts the DP objective function into bit-wise operations such as AND, OR, and SHIFT. Although GenASM partially supports the affine gap penalty model [52], bit-wise operations inherently cannot implement all the complex objective functions needed in different stages of genomics pipelines. SeGraM [15] extends GenASM to sequence-to-graph mapping and supports seeding, but likewise supports only limited DP objective functions. Race Logic [47] uses race conditions in the circuit to accelerate the edit distance function in bioinformatics applications such as DNA sequence alignment and protein string comparison. However, other DP kernels such as PairHMM and Chain are not edit distance problems and have more complicated objective functions and higher numeric precision requirements, which makes them challenging to map to the Race Logic accelerator. GenDP aims to fill this gap with a generalized acceleration framework that can accelerate the various flavors of dynamic programming common in genomics pipelines.
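The "bit-wise operations" approach attributed to GenASM above is in the spirit of classic bit-vector approximate matching. The sketch below is a Wu-Manber-style shift-and filter (not GenASM's exact algorithm): it finds positions where the pattern occurs within k edits using only AND, OR, and SHIFT per text character.

```python
def bitap_approx(text, pattern, k):
    """Positions in text where pattern matches with at most k edits."""
    m = len(pattern)
    B = {}
    for i, ch in enumerate(pattern):
        B[ch] = B.get(ch, 0) | (1 << i)
    # R[d] bit i: pattern[0..i] matches a suffix of the scanned text
    # with at most d edits.
    R = [(1 << d) - 1 for d in range(k + 1)]
    hits = []
    for pos, ch in enumerate(text):
        mask = B.get(ch, 0)
        new_R = [((R[0] << 1) | 1) & mask]          # exact-match row
        for d in range(1, k + 1):
            ins = R[d - 1]                # insert the current text char
            sub = R[d - 1] << 1           # substitute it
            dele = new_R[d - 1] << 1      # delete a pattern char
            new_R.append(((((R[d] << 1) | 1) & mask)
                          | ins | sub | dele | ((1 << d) - 1)))
        R = new_R
        if R[k] & (1 << (m - 1)):
            hits.append(pos)              # a match ends at this position
    return hits

assert bitap_approx("xxabcxx", "abc", 0) == [4]
assert 4 in bitap_approx("xxaxcxx", "abc", 1)
```

Note what this formulation cannot express: the bit rows only count unit-cost edits, which is why affine or convex gap penalties and probabilistic scores like PairHMM's fall outside a purely bit-wise scheme, as the paragraph above argues.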
Besides genomics applications, dynamic programming algorithms are also accelerated in other domains, such as shortest path in robot motion planning [51]. There has also been industry interest in DP acceleration. For example, NVIDIA recently announced the dynamic programming instructions (DPX) in the Hopper architecture [8], but the corresponding products and CUDA library have not been released yet.
**Domain Specific Accelerators in Genomics:** Domain specific accelerators have been explored for databases [76], machine learning [21, 33], and graph processing [22, 30], but there has been little work on domain specific accelerators for genomics. Based on the insight that data manipulation operations are also common in genomics pipelines, Genesis [29] proposes a domain-specific acceleration framework customized for data manipulation operations in genome sequencing analysis. Genesis uses an extended SQL as a domain specific language and provides the relevant hardware libraries. The framework is evaluated on AWS cloud FPGAs and achieves up to 19.3× speedup over a 16-thread CPU baseline. However, its hardware modules only cover database operations for data manipulation, and users need to add custom modules for different genomics pipelines. In contrast to Genesis, GenDP focuses on dynamic programming acceleration in genomics pipelines.
**General Accelerators:** Several works have identified common compute and memory access patterns across both regular and irregular workloads and proposed reconfigurable, spatial, and dataflow architectures for these patterns [27, 53, 56, 59]. However, these accelerators are mostly optimized for data-parallel or data-intensive applications and are not well suited to dynamic programming kernels. Plasticine [59] is a spatially reconfigurable architecture for parallel patterns that supports a broad range of applications, but its functional unit utilization on data-dependent applications such as PageRank (3.9%) and BFS (3.1%) is much lower than on data-parallel applications (~50%). SoftBrain [53] and TIA [56] are discussed in Section 7.3.
## 9 CONCLUSION
To support general-purpose acceleration of genomics kernels in commonly used sequencing pipelines, this work presented GenDP, a programmable dynamic programming acceleration framework comprising DPax, a systolic-array-based DP accelerator, and DPMap, a graph partitioning algorithm that maps DP kernels onto the accelerator. DPax supports multiple dependency patterns through flexible PE interconnections and different DP objective functions through programmable compute units. GenDP is evaluated on four important genomics kernels, achieving $157.8 \times$ throughput/mm$^2$ and $5.1 \times$ throughput/Watt compared to GPU, and $132.0 \times$ throughput/mm$^2$ over CPU baselines, and is also extended to DP algorithms in broader fields.
## ACKNOWLEDGMENTS
We thank the anonymous reviewers for their suggestions which helped improve this paper. This work was supported in part by the NSF under the CAREER-1652294 and NSF-1908601 awards, and the Applications Driving Architectures (ADA) Research Center, a JUMP Center co-sponsored by SRC and DARPA.
## A ARTIFACT APPENDIX
### A.1 Abstract
This document briefly describes how to reproduce the main performance results of this paper in Figure 10 (a) and (c). The instructions cover 1) how to download the datasets, 2) how to run the CPU/GPU baselines, and 3) how to run the GenDP simulations. The source code and instructions are accessible from GitHub. The expected results are shown in Tables 13, 14, and 15.
### A.2 Artifact check-list (meta-information)
- **Algorithm:** Banded Smith-Waterman (BSW), Chain, Pairwise Hidden Markov Model (PairHMM), Partial Order Alignment (POA).
- **Program:** C++ and Python
- **Compilation:** g++ 8.3.1 and Intel® oneAPI DPC++/C++ Compiler 2021.8.0
- **Data sets:** Illumina NA12878 human genome sample ERR194147 (BSW), PacBio SMRT sequencing data of the C.elegans worm (Chain), human chromosome 22 (PairHMM), Flye-assembled Staphylococcus aureus genome (POA).
- **Hardware:** Intel CPU with >= 16G memory and >= 40G storage, and NVIDIA GPU.
- **Execution:** Bash script for compilation and execution
- **Metrics:** Throughput: cell updates per second
- **Output:** CPU/GPU runtime and GenDP throughput
- **Experiments:** CPU/GPU baselines and GenDP simulation for 4 benchmarks (BSW, Chain, PairHMM and POA)
- **How much disk space required (approximately)?:** 40G
- **How much time is needed to prepare workflow (approximately)?:** ~ 1 hour
- **How much time is needed to complete experiments (approximately)?:** ~ 24 hours
- **Publicly available?:** Yes.
- **Archived (provide DOI)?:** https://doi.org/10.5281/zenodo.7792246
### A.3 Description
#### A.3.1 How to access.
The artifact can be accessed from GitHub and Zenodo.
#### A.3.2 Hardware dependencies.
(1) Intel CPU and NVIDIA GPU
(2) 16G memory and 40G storage
#### A.3.3 Software dependencies.
(1) Linux OS
(2) gcc >= 8.3.1
(3) cmake >= 3.16.0
(4) OpenMP >= 201511
(5) Intel® DPC++/C++ Compiler >= 2021.8.0
(6) ZLIB >= 1.2.8
(7) CUDA >= 10.0
(8) Python >= 3.7.9
(9) numactl >= 2.0.0
#### A.3.4 Data sets.
The list below shows the details of the datasets, and Table 16 shows the approximate simulation time for each input size. BSW simulation is fast, so the default setting uses the entire dataset.
- BSW: Illumina NA12878 human genome sample ERR194147 (1932254 short reads with length <= 128)
- Chain: PacBio SMRT sequencing data of the C.elegans worm (10000 long reads)
- PairHMM: Human chromosome 22 (1420266 short reads)
- POA: Flye-assembled Staphylococcus aureus genome (6216 consensuses, each including 10 ~ 100 long reads)
### A.4 Installation
Download the code base from GitHub and install Intel DPC++/C++ Compiler (ICX).
### A.5 Experiment workflow
Please follow the instructions on GitHub.
**Step 1:** Check System Requirements
**Step 2:** Download Repository and Data sets (~ 10 min)
**Step 3:** Run CPU Baselines (~ 10 min)
**Step 4:** Run GPU Baselines (~ 10 min)
**Step 5:** Run GenDP Simulation (~ 24 hours)
Table 16 shows the relationship between dataset size and simulation time. We recommend running the scripts for ~ 6 hours or ~ 24 hours.
### Table 13: CPU Baselines
| CPU | Operating System | SIMD Flag | Threads | BSW | Chain | PairHMM | POA |
|----------------------------|------------------------|-----------|---------|-------|-------|---------|-----|
| Intel® Xeon® Platinum 8380 | CentOS Linux 7 (CORE) | AVX512 | 80 | 0.0504| 0.306 | 0.587 | 16.6|
| Intel® Xeon® Gold 6326 | Ubuntu 20.04.5 LTS | AVX512 | 32 | 0.0984| 0.473 | 0.792 | 34.3|
| Intel® Xeon® E5-2697 v3 | CentOS Linux 7 (CORE) | AVX2 | 28 | 0.196 | 2.35 | 2.13 | 41.7|
| 12th Gen Intel® Core™ i5-12600 | Ubuntu 22.04.2 LTS | AVX2 | 12 | 0.140 | 2.21 | 1.71 | 36.6|
| Intel® Core™ i7-7700 | Ubuntu 20.04.5 LTS | AVX2 | 8 | 0.29 | 4.79 | 4.51 | 98.5|
### Table 14: GPU Baselines
| GPU | Arch Code | CUDA Version | BSW | Chain | PairHMM | POA |
|----------------|-----------|--------------|-------|-------|---------|-----|
| NVIDIA A100 | sm_80 | 11.2 | 0.012 | 0.155 | 0.597 | 2.53|
| NVIDIA RTX A6000 | sm_86 | 12.0 | 0.012 | 0.339 | 0.572 | 3.70|
| NVIDIA TITAN Xp | sm_61 | 10.2 | 0.020 | 0.747 | 0.915 | 11.2|
### Table 15: GenDP Speedup over CPU and GPU Baselines
| | BSW | Chain | PairHMM | POA |
|--------------------------------|--------------|--------------|-------------|--------------|
| Total Cell Updates | 2,431,855,834| 20,736,142,007| 258,363,282,803| 6,448,581,509|
| CPU Runtime (seconds) | 0.0504 | 0.306 | 0.587 | 16.6 |
| CPU GCUPS | 44.91 | 19.61 | 32.88 | 14.51 |
| CPU Normalized $MCUPS/mm^2$ | 130.29 | 56.89 | 95.41 | 42.11 |
| GPU Runtime (seconds) | 0.012 | 0.155 | 0.597 | 2.53 |
| GPU GCUPS | 192.92 | 10.40 | 32.35 | 95.13 |
| GPU $MCUPS/mm^2$ | 239.16 | 12.89 | 40.11 | 117.94 |
| ASIC Normalized $MCUPS/mm^2$ | 118,950 | - | 51,867 | - |
| GenDP Normalized $MCUPS/mm^2$ | 47,574 | 3,626 | 17,681 | 2,965 |
| GenDP Speedup over CPU | 365.1x | 63.7x | 185.3x | 70.4x |
| GenDP Speedup over GPU | 198.9x | 281.4x | 440.8x | 25.1x |
### Table 16: Data Sets Size and Approximate Simulation Time
| Simulation time | BSW | Chain | PairHMM | POA |
|-----------------|-------|-------|---------|-----|
| ~ 6 hours | 1,932,254 | 100 | 100,000 | 100 |
| ~ 24 hours | 1,932,254 | 1,000 | 500,000 | 200 |
| ~ 250 hours | 1,932,254 | 10,000| 1,420,266| 6,216|
### A.6 Evaluation and expected results
- The CPU and GPU baselines are machine-dependent. Some reference results on different platforms are listed in Table 13 and Table 14.
- GenDP normalized throughputs are comparable to the reported results; see Row 9 in Table 15. The CPU and GPU baselines shown in the table above are obtained from the Xeon Platinum 8380 and the NVIDIA A100 respectively. Simulating the entire datasets can reproduce the results, but may take ~ 250 hours and require ~ 2 TB of storage; we recommend running the scripts for ~ 6 hours or ~ 24 hours instead. Simulation results with limited input sizes may differ slightly from, but remain comparable to, the reported table above.
### REFERENCES
[1] Accelerating Bellman-Ford Single Source Shortest Path Algorithm on GPU using CUDA. https://github.com/sengorajkumar/gpu_graph_algorithms/
[2] Artic Network: real-time molecular epidemiology for outbreak response. https://artic.network/
[3] Caenorhabditis Elegans 40x Coverage Dataset, Pacific Biosciences. http://datasets.pacb.com.s3.amazonaws.com/2014/c_elegans/list.html.
[4] DRAMPower: Open-source DRAM Power and Energy Estimation Tool. https://github.com/tukl-msd/DRAMPower.
[5] A GPU-accelerated implementation of the Partial Order Alignment algorithm. https://github.com/clara-parabricks/GenomeWorks/blob/dev/cudapoa.
[6] Intel Ice Lake Xeon Platinum 8380 Review. https://www.tomshardware.com/news/intel-ice-lake-xeon-platinum-8380-review-10nm-debuts-for-the-data-center/.
[7] National Genomic Data Initiatives Review. https://www.ga4gh.org/news/ga4gh-publishes-review-of-national-genomic-data-initiatives/
[8] NVIDIA Hopper GPU Architecture Accelerates Dynamic Programming Up to 40x Using New DPX Instructions. https://blogs.nvidia.com/blog/2022/03/22/nvidia-hopper-accelerates-dynamic-programming-using-dpx-instructions/
[9] Nauman Ahmed, Jonathan Lévy, Shanshan Ren, Hamid Mushfaq, Koen Bertels, and Zaid Al-Ars. GASAL2: a GPU accelerated sequence alignment library for high-throughput NGS data. BMC bioinformatics 20, 1 (2019), 1–20. https://link.springer.com/article/10.1186/s12859-019-3086-z
[10] Ruedi Bannert. A Review of the Smith-Waterman GPU Landscape. Electrical Engineering and Computer Sciences University of California at Berkeley. Retrieved from https://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/ECE-2020-152.html (2020).
[11] Richard Bellman. Dynamic programming. Science 153, 3731 (1966), 34–37. https://doi.org/10.1126/science.153.3731.34
[12] Donald J. Berndt and James Clifford. Using Dynamic Time Warping to Find Patterns in Time Series. In Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining (Seattle, WA) (AAAI'S'94). AAAI Press, 359–370. https://doi.org/10.5555/3000850.3000887
[13] Christoph Bleidorn. Third generation sequencing technology and its potential impact on evolutionary biodiversity research. Systematics and biodiversity 14, 1 (2016), 1–8. https://www.tandfonline.com/doi/abs/10.1080/14772000.2015.1099575
[14] Damla Senol Cali, Gurpreet S. Kalsi, Zülal Bingöl, Can Firtina, Lavanya Subramanian, Jeremie S. Kim, Rachata Ausavarungnirun, Mohammed Alser, Juan Gomez-Luna, Amirali Boroumand, Anant Nori, Allison Scibisz, Sreenivas Subramoney, Can Alkan, Saugata Ghose, and Onur Mutlu. GenASM: A High-Performance, Low-Power Approximate String Matching Acceleration Framework for Genome Sequence Analysis. In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 951–966. https://doi.org/10.1109/MICRO50266.2020.00081
[15] Damla Senol Cali, Konstantinos Kanellopoulos, Joël Lindegger, Zülal Bingöl, Gurpreet S. Kalsi, Ziyi Zuo, Can Firtina, Meryem Banu Cavlak, Jeremie Kim, Nika Mansouri Ghiasi, Gagandeep Singh, Juan Gomez-Luna, Nour Almadhoun Alserr, Mohammad Sadrosadati, Sreenivas Subramoney, Can Alkan, Saugata Ghose, and Onur Mutlu. SeGraM: A Universal Hardware Accelerator for Genomic Sequence-to-Graph and Sequence-to-Sequence Mapping. In 2022 ACM/IEEE 49th Annual International Symposium on Computer Architecture (ISCA), 638–655. https://doi.org/10.1145/3470496.3527436
[16] Biagioli E. et al. Carneiro M, Poplin R. Enabling high throughput haplotype analysis through hardware acceleration. https://github.com/MauricioCarneiro/PairHMM/tree/master/doc.
[17] Kun-Mao Chao, William R Pearson, and Webb Miller. Aligning two sequences within a specified diagonal band. Bioinformatics 8, 5 (1992), 481–487. https://academic.oup.com/bioinformatics/article-abstract/8/5/481/213891
[18] Peng Chen, Chao Wang, Xi Li, and Xuehai Zhou. Hardware acceleration for the banded Smith-Waterman algorithm with the cyclic systolic array. In 2013 International Conference on Field Programmable Technology (FPT), 480–481. https://doi.org/10.1109/FPT.2013.6718421
[19] Ruei-Ting Chien, Yi-Lun Liao, Chien-An Wang, Yu-Cheng Li, and Yi-Chang Lu. Three-Dimensional Dynamic Programming Accelerator for Multiple Sequence Alignment. In 2018 IEEE Nordic Circuits and Systems Conference (NORCAS): NORCHIP and International Symposium of System-on-Chip (SoC). 1–5. https://doi.org/10.1109/NORCHIP.2018.8573523
[20] Ruei-Ting Chien, Yi-Lun Liao, Chien-An Wang, Yu-Cheng Li, and Yi-Chang Lu. Three-Dimensional Dynamic Programming Accelerator for Multiple Sequence Alignment. In 2018 IEEE Nordic Circuits and Systems Conference (NORCAS): NORCHIP and International Symposium of System-on-Chip (SoC). 1–5. https://doi.org/10.1109/NORCHIP.2018.8573523
[21] Eric Chung, Jeremy Fowers, Kalin Ovtcharov, Michael Papamichael, Adrian Caulfield, Todd Massengill, Ming Liu, Daniel Lo, Shlomi Alkalay, Michael Haselman, Maleen Abeydeera, Logan Adams, Hari Angepat, Christian Boehn, Derek Chiou, Oren Firestein, Alessandro Forin, Kang Su Gatlin, Mahdi Ghandi, Stephen Heil, Kyle Holohan, Ahmad El Husseinie, Tamas Juhasz, Kari Kagi, Ratna K. Kovuri, Sitaram Lanka, Friedel van Meegen, Dima Mukhortov, Prerak Patel, Brandon Perez, Amanda Rapsang, Steven Reinhardt, Bita Rouhani, Adam Sapek, Raja Seera, Sangeetha Shekar, Balaji Sridharan, Gabriel Weisz, Lisa Woods, Phillip Y. Xiao, Dan Zhang, Ritesh Zhao, and Doug Burger. Serving DNNs in Real Time at Datacenter Scale with Project Brainwave. IEEE Micro 38, 2 (2018), 8–20. https://doi.org/10.1109/MM.2018.02297131
[22] Vidushi Dadu, Sihao Liu, and Tony Nowatzki. PolyGraph: Exposing the Value of Flexibility for Graph Processing Accelerators. In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA). 595–608. https://doi.org/10.1109/ISCA52012.2021.00053
[23] Tim Dunn, Harisankar Sadasivan, Jack Wadden, Kush Gohila, Kuan-Yu Chen, David Blaauw, Reetuparna Das, and Satish Narayanasamy. SquiggleFilter: An Accelerator for Portable Virus Detection. In MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture (Virtual Event, Greece) (MICRO 21). Association for Computing Machinery, New York, NY, USA, 535–549. https://doi.org/10.1145/3466753.3484700
[24] Daichi Fujiki, Arun Subramaniyan, Tianjun Zhang, Yu Zeng, Reetuparna Das, David Blaauw, and Satish Narayanasamy. GenAx: A Genome Sequencing Accelerator. In 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA). 69–82. https://doi.org/10.1109/ISCA.2018.00017
[25] Daichi Fujiki, Shunhao Wu, Nathan Ozog, Kush Gohila, David Blaauw, Satish Narayanasamy, and Reetuparna Das. SeedEx: A Genome Sequencing Accelerator for Optimal Alignments in Subminimal Space. In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). 937–950. https://doi.org/10.1109/MICRO50266.2020.00083
[26] Hamed Gammadi, Jiaxin Chen, Chun Woi Lam, Gihan Jayathilaka, Hiruma Samarakoon, Jared T Simpson, Martin A Smith, and Sri Parameswaran. GPU accelerated adaptive banded event alignment for rapid comparative nanopore signal analysis. BMC bioinformatics 21 (2020), 1–13. https://link.springer.com/article/10.1186/s12859-020-03697-x
[27] Venkataraman Govindaraju, Chen-Han Ho, Tony Nowatzki, Jatin Chhugani, Nadadhur Satish, Karthikeyan Sankaranagaraj, and Changkyu Kim. DySER: Unifying Functionality and Parallelism Specialization for Energy-Efficient Computing. IEEE Micro 32, 5 (2012), 38–51. https://doi.org/10.1109/MM.2012.51
[28] Licheng Guo, Jason Lau, Zhenyuan Ruan, Peng Wang, and Jason Cong. Hardware Acceleration of Long Read Pairwise Overlapping in Genome Sequencing: A Race Between FPGA and GPU. In 2019 IEEE 27th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). 127–135. https://doi.org/10.1109/FCCM.2019.00027
[29] Tae Jun Ham, David Bruns-Smith, Brendan Sweeney, Yejin Lee, Seong Hoon Seo, U Gyong Song, Young H. Oh, Krste Asanovic, Jae W. Lee, and Lisa Wu Wills. Genesis: A Hardware Acceleration Framework for Genomic Data Analysis. In 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA). 254–267. https://doi.org/10.1109/ISCA45697.2020.00031
[30] Tae Jun Ham, Lisa Wu, Narayanan Sundaram, Nadadhur Satish, and Margaret Martonosi. Graphicionado: A high-performance and energy-efficient accelerator for graph analytics. In 2016 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). 1–13. https://doi.org/10.1109/MICRO.2016.7783759
[31] Daniel S. Hirschberg. Algorithms for the Longest Common Subsequence Problem. J. ACM 24, 4 (oct 1977), 664–714. https://doi.org/10.1145/322033.322044
[32] Lei Jiang and Farrokh Zokai. EXMA: A Tool for Automatic Fast Exact Matching. In 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA). 399–411. https://doi.org/10.1109/HPCAS51647.2021.00041
[33] Norman P. Jouppi, Doe Hyun Yoon, Matthew Ashcraft, Matt Gottschke, Thomas B. Jablin, George Kurian, James Laudon, Sheng Li, Peter Ma, Xiaoyu Ma, Thomas Norrie, Nishant Patil, Sushma Prasad, Cliff Young, Zongwei Zhou, and David Patterson. Ten Lessons From Three Generations Shaped Google’s TPU v4: Industrial Product. In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA). 1–14. https://doi.org/10.1109/ISCA52012.2021.00010
[34] Saarabh Kalikar, Chirag Jain, Vasimuddin Md, and Sanchit Misra. Accelerating long read analysis on modern CPUs. bioRxiv (2021). https://www.biorxiv.org/content/10.1101/2021.07.21.453294.abstract
[35] Saarabh Kalikar, Chirag Jain, Md Vasimuddin, and Sanchit Misra. Accelerating minimap2 for long-read sequencing applications on modern CPUs. Nature Computational Science 2, 2 (2022), 78–83. https://www.nature.com/articles/s43588-022-00201-8
[36] Youngki Kim, Weikun Yang, and Omur Mutlu. Ramulator: A Fast and Extensible DRAM Simulator. IEEE Computer Architecture Letters 15, 1 (2016), 45–49. https://doi.org/10.1109/LCA.2015.2414456
[37] Donald A. Kirk. Optimal control theory: an introduction. Courier Corporation.
[38] Mikhail Kolmogorov, Jeffrey Yuan, Mao Lin, and Pavel A Pevzner. Assembly of long, error-prone reads using repeat graphs. Nature biotechnology 37, 5 (2019), 540–546. https://www.nature.com/articles/s41587-019-0072-8
[39] Sergey Koren, Brian P Walenz, Konstantin Berlin, Jason R Miller, Nicholas H Bergman, and Adam M Phillippy. Canu: scalable and accurate long-read assembly via adaptive k-mer weighting and repeat separation. Genome research 27, 5 (2017), 722–736. https://genome.cshlp.org/content/27/5/722.short
[40] Niklas Krumm, Tychele N Turner, Carl Baker, Laura Vives, Kiana Mohajeri, Kali Witherspoon, Archana Raja, Bradley P Coe, Holly A Stetsman, Zong-Xiao He, et al. Excess of rare inherited truncating mutations in autism. Nature genetics 47, 6 (2015), 582–588. https://www.nature.com/articles/ng.3303
[41] Chandra Mouli, Ret Cogill, Esther Jia Hui Boey, Andrew Hui Qi Ng, Andreas Wilm, and Niranjnan Nagarajan. INSeq: accurate single molecule reads using nanopore sequencing. GigaScience 5, 1 (08 2016). https://doi.org/10.1186/s13742-016-0140-z
[42] Heng Li. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. arXiv (2013). https://arxiv.org/abs/1303.3997
[43] Heng Li and Richard Durbin. Inference of human population history from individual whole-genome sequences. Nature 475, 7357 (2011), 493–496. https://www.nature.com/articles/nature10231
[44] Yi-Lun Liao, Yu-Cheng Li, Nan-Chyun Chen, and Yi-Chang Lu. Adaptively Bounded Smith-Waterman Algorithm for Long Reads and Its Hardware Accelerator. In 2018 IEEE 29th International Conference on Application-specific Systems, Architectures, and Processors (ASP). 1–9. https://doi.org/10.1109/ASP.2018.8445105
[45] Mao-Jan Lin, Yu-Cheng Li, and Yi-Chang Lu. Hardware Accelerator Design for Dynamic-Programming-Based Protein Sequence Alignment with Affine Gap Tracebacks. In 2019 IEEE Biomedical Circuits and Systems Conference (BioCAS). 1–4. https://doi.org/10.1109/BIOCAS.2019.8919080
[46] Ruibang Luo, Chak-Lim Wong, Yat-Sing Wong, Chi-Ian Tang, Chi-Man Liu, Chi-Ming Leung, and Tak-Wah Lam. Exploring the limit of using a deep neural network on pupil data for germline variant calling. Nature Machine Intelligence 2, 4 (2020), 220–225. https://www.nature.com/articles/s42256-020-0167-4
[47] Advait Madhavan, Timothy Sherwood, and Dmitri Strukov. Race Logic: A hardware acceleration for dynamic programming algorithms. In 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA). 517–528. https://doi.org/10.1109/ISCA.2014.6853226
[48] Santiago Marco-Sola, Juan Carlos Moure, Miquel Moreto, and Antonio Espinosa. Fast gap-affine pairwise alignment using the wavefront algorithm. Bioinformatics 37, 4 (2021), 456–463. https://academic.oup.com/bioinformatics/article-abstract/37/4/456/5904262
[49] W Richard McCombie, John D McPherson, and Elaine R Mardis. Next-generation sequencing technologies. Cold Spring Harbor perspectives in medicine 9, 4 (2019), a036798. https://perspectivesinmedicine.cshlp.org/content/9/4/a036798.short
[50] Ruth R Miller, Vincent Montoya, Jennifer L Gurdy, David M Patrick, and Patrick Tang. Metagenomics for pathogen detection in public health. Genome medicine 5, 9 (2013), 1–14. https://link.springer.com/article/10.1186/gm485
[51] Sean Murray, Will Floyd-Jones, George Konidaris, and Daniel J Sorin. A Programmable Architecture for Robot Motion Planning Acceleration. In 2019 IEEE 30th International Conference on Application-specific Systems, Architectures and Processors (ASP), Vol. 2160–052X. 185–188. https://doi.org/10.1109/ASPASIA.2019.00040
[52] Eugene W Myers and Webb Miller. Optimal alignments in linear space. Bioinformatics 4, 1 (1988), 11–17. https://academic.oup.com/bioinformatics/article/4/1/11/250088
[53] Tony Nowatzki, Vinay Gangadhar, Newsha Ardalani, and Karthikeyan Sankaralingam. Stream-dataflow acceleration. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA). 416–429. https://doi.org/10.1145/3076905.3080355
[54] Sergey Nurk, Sergey Koren, Arang Rhie, Mikko Rautiainen, Andrey V Bzikadze, Alla Mikheenko, Mitchell R Vollger, Nicolas Altemose, Lev Uralsky, Ariel Gershman, et al. The complete sequence of a human genome. Science 376, 6588 (2022), 44–53. https://www.science.org/doi/abs/10.1126/science.abj6987
[55] Yukiteru Ono, Kiyoshi Asai, and Michiaki Hamada. PBSIM2: a simulator for long-read sequencers with a novel generative model of quality scores. Bioinformatics 37, 5 (2021), 589–595. https://academic.oup.com/bioinformatics/article-abstract/37/5/589/5911629
[56] Angshuman Parashar, Michael Pellauer, Michael Adler, Bushra Ahsan, Neal Crago, David Krieg, Vladimir Pavlov, Antonia Zhai, Mohit Gambhir, Aamer Jaleel, Randy Allmon, Rachel Raynes, Stephen Marche, and Joel Emer. Triggered Instructions: A Control Paradigm for Spatially-Programmed Architectures. In Proceedings of the 40th Annual International Symposium on Computer Architecture (Tel-Aviv, Israel) (ISCA ’13). Association for Computing Machinery, New York, NY, USA, 142–153. https://doi.org/10.1145/2485922.2485953
[57] Francesco Pervecelli, Lorenzo Di Tucci, Marco D. Santambrogio, Nan Ding, Steven Hofmeyr, Aydin Buluç, Leonid Oliker, and Katherine Yelick. GPU accelerated partial order multiple sequence alignment for long reads self-correction. In 2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). 1–9. https://doi.org/10.1109/IPDPSW50202.2020.00013
[58] Ivan Pujol, Valentin Ruprecht, Mark A DePristo, Tim J Fennell, Mauricio O Carretero, Geraldine A Van der Aa, David E Kling, Laura D Gauthier, Ami Levy-Moonshine, David Roazen, et al. Scaling accurate genetic variant discovery to tens of thousands of samples. BioRxiv (2017), 201178. https://www.biorxiv.org/content/10.1101/201178.abstract
[59] Raghu Prabhakar, Yaqi Zhang, David Koeplinger, Matt Feldman, Tian Zhao, Stefan Hadjis, Ardavan Pedram, Christos Kozyrakis, and Kunle Olukotun. Plasticine: A reconfigurable architecture for parallel patterns. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA). 389–402. https://doi.org/10.1145/3079856.3080258
[60] Joshua Quinlan, Ethan D Graubach, Steven T Pullan, Ingra M Claro, Andrew D Smith, Karthik Ganapathi, Glenn Oliveira, Refugio Robles-Sikucua, Thomas F Rogers, Nathan A Beutler, et al. Multiplex PCR method for MiniON and Illumina sequencing of Zika and other virus genomes directly from clinical samples. Nature protocols 12, 6 (2017), 1261–1276. https://www.nature.com/articles/nprot.2017.066
[61] Shanshan Ren, Koen Bertels, and Zaid Al-Ars. Efficient acceleration of the pair-hmm forward algorithm for gatk haplotyperoll on graphics processing units. Evolutionary Bioinformatics 14 (2018), 1176934318760543. https://journals.sagepub.com/doi/pdf/10.1177/1176934318760543
[62] Harisankar Sadasivam, Milos Maric, Eric Dawson, Vishanth Iyer, Johnny Israeeli, and Satish Narayanasamy. Accelerating Minimap2 for accurate long read alignment on GPUs. J Biotechnol Biomed 6, 1 (2023), 13–23. https://doi.org/10.1109/NORCHIP.2013.6571525
[63] Harisankar Sadasivam, Daniel Stiffler, Ajay Tirumala, Johnny Israeili, and Satish Narayanasamy. GPU-accelerated Dynamic Time Warping for Selective Nanopore Sequencing. bioRxiv (2023), 2023–03.
[64] Bertil Schmidt and Christian Hundt. cuDTW++: Ultra-Fast Dynamic Time Warping on CUDA-Enabled GPUs. In European Conference on Parallel Processing. Springer, 597–612. https://link.springer.com/chapter/10.1007%2F978-3-030-57675-2_37
[65] James E. Smith. Decoupled Access/Execute Computer Architectures. (1982), 119–119.
[66] Temple F Smith, Michael S Waterman, et al. Identification of common molecular subsequences. Journal of molecular biology 147, 1 (1981), 195–197. https://doi.org/10.1016/0022-2836(81)90087-5
[67] Aaron Stillmaker and Bevan Baas. Scaling equations for the accurate prediction of CMOS device performance from 180 nm to 7 nm. Integration 58 (2017), 74–81. https://www.sciencedirect.com/science/article/pii/S0167931717301555
[68] Arun Subramanyam, Yufera Gu, Tushar Dutt, Somnath Mitra, Md Vasimuddin, Sanchit Misra, David Blaauw, Satish Narayanasamy, and Reetuparna Das. GenomicsBench: A Benchmark Suite for Genomics. In 2021 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). 1–12. https://doi.org/10.1109/ISPASS51385.2021.00012
[69] Jesmin Jahan Tithi, Neal C. Crago, and Joel S. Emer. Exploiting spatial architectures for edit distance algorithms. In 2014 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). 23–34. https://doi.org/10.1109/ISPASS.2014.6844458
[70] Yatish Turakhia, Gill Bejerano, and William I. Dally. Darwin: A Genomics Co-Processor Provides up to 15,000X Acceleration on Long Read Assembly. (2018), 199–213. https://doi.org/10.1145/3175162.3175193
[71] MCJ van Berkel, LW Bergink, WWR, B Carvalho, and GA Meijer. Early detection: The impact of genomics. Virchows Arch 471, 2 (2017), 163–173. https://link.springer.com/article/10.1007/s00428-017-2159-2
[72] Robert Vaser, Ivan Šović, Niranjian Nagarajan, and Mile Šikić. Fast and accurate de novo genome assembly from long uncorrected reads. Genome research 27, 5 (2017), 737–746. https://genome.cshlp.org/content/27/5/737.short
[73] Md. Vasimuddin, Sanchit Misra, Heng Li, and Srinivas Aluru. Efficient Architecture-Aware Acceleration of BWA-MEM for Multicore Systems. In 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS). 314–324. https://doi.org/10.1109/IPDPS.2019.00050
[74] Kris A. Wetterstrand. DNA sequencing costs: Data. https://www.genome.gov/about-genomics/fact-sheets/DNA-Sequencing-Costs-Data
[75] Ryan R Wick, Louise M Judd, and Kathryn E Holt. Performance of neural network basecalling tools for Oxford Nanopore sequencing. Genome biology 20, 1 (2019), 1–10. https://link.springer.com/article/10.1186/s13059-019-1727-y
[76] Lisa Wu, Andrea Lottarini, Timothy K. Paine, Martha A. Kim, and Kenneth A. Ross. Q100: The Architecture and Design of a Database Processing Unit. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems (Salt Lake City, Utah, USA) (ASPLOS ’14). Association for Computing Machinery, New York, NY, USA, 255–268. https://doi.org/10.1145/2541948.2541961
[77] Xiao Wu, Arun Subramaniam, Zhezhong Wang, Satish Narayanasamy, Reetuparna Das, and David Blaauw. A High-Throughput Pruning-Based Pair-Hidden-Markov-Model Hardware Accelerator for Next-Generation DNA Sequencing. IEEE Solid-State Circuits Letters 4 (2021), 31–35. https://doi.org/10.1109/LSCL.2020.3045148
[78] Eleftheria Zeggini, Anna L. Glynn, Anne C. Barton, and Louise V. Wain. Translational genomics and precision medicine: Moving from the lab to the clinic. Science 365, 6460 (2019), 1409–1413. https://doi.org/10.1126/science.aax4588
Received 21 Nov, 2022; Revised 20 Feb, 2023; Accepted 9 Mar, 2023
A basic bibliography
Doctor of Arts program, Media Lab Aalto University, School of Arts, Design and Architecture
December 2015
CULTURAL STUDIES
Baudrillard, Jean, Simulacra and Simulation, translated by Sheila Faria Glaser, University of Michigan Press, 1995.
Lyon, David, Surveillance Society: Monitoring Everyday Life, Open University Press, 2001.
DESIGN
Binder, T., De Michelis, G., Ehn, P., Jacucci, G., Linde, P., & Wagner, I., Design Things, The MIT Press, 2011.
Boden, Margaret A., The Creative Mind: Myths and Mechanisms, Routledge, 2003.
Cross, N., Designerly Ways of Knowing, London: Springer-Verlag, 2007.
DiSalvo, Carl, Adversarial Design, MIT Press, 2012.
Dunne, Anthony and Raby, Fiona, Speculative Everything: Design, Fiction and Social Dreaming, MIT Press, 2013.
Flusser, Vilem, The Shape of Things: A Philosophy of Design, Reaktion, 1999.
Folkmann, Mads Nygaard, The Aesthetics of Imagination in Design, The MIT Press, Cambridge, MA, 2013.
Greenbaum, J. M., & Kyng, M. (Eds.), Design at Work: Cooperative Design of Computer Systems (1st ed.), Lawrence Erlbaum Associates, 1991.
Julier, G., The Culture of Design (2nd ed.), Sage Publications Ltd, 2008.
Krippendorff, Klaus, The Semantic Turn: A New Foundation for Design, Taylor & Francis, Boca Raton, FL, 2006.
Löwgren, J., & Stolterman, E., Thoughtful Interaction Design: A Design Perspective on Information Technology, The MIT Press, 2007.
Mijksenaar, Paul, Visual Function: An Introduction to Information Design, Princeton Architectural Press, 1997.
Nelson, Harold G. and Stolterman, Erik, The Design Way: Intentional Change in an Unpredictable World, MIT Press, 2012.
Norman, Don, The Design of Everyday Things, Basic Books, 2002.
Samson, Kristine, Hertzum, Morten, Strandvad, Sara Malou, Svabo, Connie, Simonsen, Jesper, & Hansen, Ole Erik, Situated Design Methods. Retrieved from https://mitpress.mit.edu/books/situated-design-methods, 2014.
Simonsen, J., & Robertson, T. (Eds.), Routledge International Handbook of Participatory Design, Routledge, 2012.
MEDIA
Huhtamo, Erkki, Illusions in Motion: Media Archaeology of the Moving Panorama and Related Spectacles, The MIT Press, 2013.
Karp, I., Kreamer, C. M., & Lavine, S. D. (Eds.), Museums and Communities: The Politics of Public Culture, Washington, DC: Smithsonian Institution Press, 1992.
Keene, S., Fragments of the World: Uses of Museum Collections, Oxford: Elsevier Butterworth-Heinemann, 2005.
Kittler, Friedrich, Optical Media, Polity Press, 2010.
Mitchell, W. J. T., Hansen, Mark B. N. (Eds.), Critical Terms for Media Studies, Univ of Chicago Press, 2010.
Parikka, Jussi, What is Media Archaeology?, Polity Press, 2012.
Parry, R., Recoding the Museum, London: Routledge, 2007.
Zielinski, Siegfried, Deep Time of the Media, The MIT Press, 2006.
NEW MEDIA
Aarseth, Espen J., Cybertext: Perspectives on Ergodic Literature, Johns Hopkins University Press, 1997.
Ascott, Roy, Reframing Consciousness, Intellect Press, 1999.
Benedikt, Michael (Ed.), Cyberspace: First Steps, The MIT Press, 1992.
Bolter, Jay David, Writing Space: The Computer, Hypertext, and the History of Writing, Lawrence Erlbaum Publishers, 1991.
Binder, T., Löwgren, J., & Malmborg, L., (Re)Searching the Digital Bauhaus (1st ed.), Springer, 2008.
Bolter, Jay David and Grusin, Richard, Remediation: Understanding New Media, MIT Press, 2000.
Bolter, Jay David, Writing Space: Computers, Hypertext and the Remediation of Print, Lawrence Erlbaum Associates, Inc., 2001.
Bush, Vannevar. "As We May Think." Atlantic Monthly, July 1945: 101-108. (See: http://www.theatlantic.com/magazine/archive/1969/12/as-we-may-think/3881/)
Cameron, Fiona and Sarah Kenderdine, Theorizing Digital Cultural Heritage: A Critical Discourse, The MIT Press, 2007.
Carr, Nicholas, The Shallows: What the Internet is Doing to Our Brains, W. W. Norton & Company, 2010.
Carr, Nicholas, The Glass Cage, Bodley Head, 2015.
Chun, Wendy Hui Kyong, Programmed Visions: Software and Memory, The MIT Press, 2011.
Chun, Wendy Hui Kyong & Keenan, Thomas (Eds.), New Media, Old Media: A History and Theory Reader, Routledge, 2006.
Chun, Wendy Hui Kyong, Control and Freedom: Power and Paranoia in the Age of Fiber Optics, The MIT Press, 2006.
Coyne, Richard, Designing Information Technology in the Postmodern Age: From Method to Metaphor, The MIT Press, 1995.
Dourish, Paul, Where the Action Is: The Foundations of Embodied Interaction, The MIT Press, 2004.
Fry, Ben, Visualizing Data: Exploring and Explaining Data with the Processing Environment, O'Reilly, Sebastopol, CA, 2008.
Goodman, Cynthia, Digital Visions: Computers and Art, Harry N. Abrams, 1997.
Haraway, Donna, Simians, Cyborgs, and Women: The Reinvention of Nature, Routledge, 1991.
Hayles, Katherine, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, University of Chicago Press, 1999.
Hansen, Mark B., Bodies in Code: Interfaces with New Media, Routledge, 2006.
Hayles, Katherine, How We Think: Digital Media and Contemporary Technogenesis, University of Chicago Press, 2012.
Jenkins, Henry, Convergence Culture: Where Old and New Media Collide, New York University Press, 2006.
Jonas, Hans, The Imperative of Responsibility: In Search of an Ethics for the Technological Age, University of Chicago Press, 1984.
Kirschenbaum, Matthew G., Mechanisms: New Media and the Forensic Imagination, The MIT Press, 2007.
Kwastek, Katja, Aesthetics of Interaction in Digital Art, The MIT Press, 2013.
Landow, George P., Hypertext: The Convergence of Contemporary Critical Theory and Technology, London: Johns Hopkins University Press, 1992.
Laurel, Brenda, Computers as Theatre, Addison-Wesley, 1993, 1991.
Lessig, Lawrence, Code and Other Laws of Cyberspace, Basic Books, 2000.
Levy, Pierre, Collective Intelligence: Mankind's Emerging World in Cyberspace, Perseus Books, 1999.
Lovejoy, Margot, Digital Currents: Art in the Electronic Age, Routledge, 2004.
Löwgren, J., & Stolterman, E., Thoughtful Interaction Design: A Design Perspective on Information Technology, Cambridge, Mass: MIT Press, 2007.
Manovich, Lev, The Language of New Media, Leonardo Books, MIT Press, 2002.
Mattelart, Armand, The Information Society: An Introduction, Sage, 2003.
McLuhan, Marshall, The Gutenberg Galaxy: The Making of Typographic Man, University of Toronto Press, 1965.
McLuhan, Marshall, Understanding Media: The Extensions of Man, The MIT Press, 1994.
Medina, Eden, Cybernetic Revolutionaries: Technology and Politics in Allende's Chile, The MIT Press, 2011.
Murray, Janet, Hamlet on the Holodeck: The Future of Narrative in Cyberspace, Free Press, 1997.
Negroponte, Nicholas, Being Digital, Vintage Books, 1996.
O'Neill, Shaleph, Interactive Media: The Semiotics of Embodied Interaction, Springer, 2008.
Rheingold, Howard, The Virtual Community, Harper & Collins, 1994 (http://www.rheingold.com/vc/book/)
Salter, Chris, Entangled: Technology and the Transformation of Performance, The MIT Press, 2010.
Scoble, Robert and Israel, Shel, Age of Context: Mobile, Sensors, Data and the Future of Privacy, self-published under Patrick Brewster Press, 2013.
Shanken, Edward, Art and Electronic Media, Phaidon Press, 2009.
Solove, Daniel, Nothing to Hide: The False Tradeoff Between Privacy and Security, Yale University Press, 2011.
Stern, Nathaniel, Interactive Art and Embodiment: The Implicit Body as Performance, Gylphi Limited, 2013.
Stone, Allucquère Rosanne, The War of Desire and Technology at the Close of the Mechanical Age, The MIT Press, 1995.
Suchman, L. A., Plans and Situated Actions: The Problem of Human-Machine Communication, Cambridge University Press, 1987.
Turkle, Sherry, Life on the Screen: Identity in the Age of the Internet, Simon & Schuster, 1995.
Terranova, Tiziana, Network Culture: Politics for the Information Age, Pluto Press, 2006.
Veltman, Kim H., Understanding New Media: Augmented Knowledge & Culture, University of Calgary Press, 2006.
MUSIC AND SOUND DESIGN
Cox, Christoph and Warner, Daniel, Audio Culture: Readings in Modern Music, Continuum, 2008.
Leman, Marc, Embodied Music Cognition and Mediation Technology, MIT Press, 2008.
Miranda, Eduardo Reck and Wanderley, Marcelo M., New Digital Musical Instruments: Control and Interaction Beyond the Keyboard, Middleton, Wisconsin: A-R Editions, 2006.
Nyman, Michael, Experimental Music: Cage and Beyond, Vol. 9, Cambridge University Press, 1999.
Peters, Deniz, Gerhard Eckel, and Andreas Dorschel (eds.), Bodily Expression in Electronic Music: Perspectives on Reclaiming Performativity, Routledge, 2012.
Rowe, Robert, Machine Musicianship, Cambridge: The MIT Press, 2001.
PHILOSOPHY
Arbib, Michael A., Fellous, Jean-Marc (eds.), Who Needs Emotions? The Brain Meets the Robot, Oxford University Press, 2005.
Barad, Karen, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning, Duke University Press, 2007.
Bergson, Henri, Matter and Memory, Cosimo Classics, New York, 2007. Originally published in 1917.
Danto, Arthur, The Body/Body Problem, University of California Press, 1999.
DeLanda, Manuel, Intensive Science & Virtual Philosophy, Continuum Press, 2002.
DeLanda, Manuel, Philosophy and Simulation: The Emergence of Synthetic Reason, Continuum Press, 2011.
Dewey, John, Art As Experience, Perigee Trade. Originally published in 1934.
Heidegger, Martin, Basic Writings, Farrell Krell, D. (ed.), HarperCollins Publishers, 1993.
Lakoff, George, and Mark Johnson, Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, Basic Books, 1999.
Latour, Bruno, We Have Never Been Modern, Harvard University Press, 1993.
Luhmann, Niklas, Art as a Social System, Stanford University Press, 2000.
Noë, Alva, Varieties of Presence, Harvard University Press, 2012.
Rancière, Jacques, Politics and Aesthetics, Continuum International Publishing Group, 2006.
Ramachandran, V. S., The Tell-Tale Brain: A Neuroscientist's Quest for What Makes Us Human, W. W. Norton & Company, 2011.
Stiegler, Bernard, Technics and Time, 1: The Fault of Epimetheus, Stanford University Press, 1998.
Varela, Francisco, Thompson, Evan and Rosch, Eleanor, The Embodied Mind, MIT Press, 1993.
RECOMMENDED JOURNALS
CoDesign: International Journal of CoCreation in Design and the Arts. http://www.tandfonline.com/loi/ncdn20
Convergence: The International Journal of Research into New Media Technologies. http://con.sagepub.com
Design Issues. http://www.mitpressjournals.org/loi/desi
Design Studies: Interdisciplinary Journal of Design Research. http://www.journals.elsevier.com/design-studies/
Digital Creativity. http://www.tandfonline.com/loi/ndcr20
Leonardo: Journal of the International Society for the Arts, Sciences and Technology. http://www.leonardo.info/leoinfo.html
REVIEW
by Assoc. Prof. Delyan Delev, PhD
Head of the Department of Pharmacology and Clinical Pharmacology,
Faculty of Medicine, MU - Plovdiv
of a dissertation for awarding the educational and scientific degree 'doctor'
professional direction 7. Health care and sports; 7.1. Medicine
Doctoral Program "Physiology"
Author: Dr. Veselin Atanasov Vassilev
Form of doctoral studies: self-study basis
Department: Physiology
Topic: EFFECT OF SELECTIVE ANDROGEN RECEPTOR MODULATORS (SARM) ON PHYSICAL WORK CAPACITY AND SOME SIDE EFFECTS IN AN EXPERIMENTAL MODEL
Research supervisor: Prof. Dr. Nikolay Boyadzhiev, MD, PhD
1. General presentation of the procedure and the doctoral student
The presented set of materials, on paper and electronic media, is in accordance with Art. 70 (1) of Section I ("Acquisition of the educational and scientific degree 'Doctor' and the scientific degree 'Doctor of Sciences' at MU-Plovdiv") of the Regulations of MU-Plovdiv dated 28.01.2021, and includes the following documents:
- Application to the Rector of MU-Plovdiv for disclosure of the procedure for the defense of a dissertation work
- curriculum vitae in European format with the doctoral student's signature
- a notarized copy of a higher education diploma
- orders for enrollment in doctoral studies, interruption of studies (due to maternity) and for continuation of studies; for deduction with right of defense
- order for conducting an exam from the individual plan and corresponding protocol for a passed exam or doctoral minimum in the specialty
- minutes of the departmental council for the preliminary discussion of the pre-dissertation work and the decisions made for the disclosure of the procedure and for the composition of the scientific jury
- dissertation work
- abstract
- list of scientific publications on the topic of the dissertation
- copies of scientific publications
- list of participations in scientific forums
- list of noticed citations
- declaration of originality and authenticity of the attached documents
- other documents related to the course of the procedure
The PhD student has attached 5 publications.
I have no notes or comments on the documents.
2. Brief biographical data of the PhD student
Dr. Veselin Atanasov Vasilev was born on 06/04/1993. He graduated from the Language High School in Plovdiv in 2012. In the period 2012-2018 he studied at the Medical University - Plovdiv and acquired the qualification "Master of Medicine". Since 22.01.2019 he has been an assistant in the Department of Physiology of MU-Plovdiv. He acquired a specialty in the discipline in 2023. He was enrolled as a doctoral student on a self-study basis by order P-425/27.02.2023 of the Rector. He successfully passed the candidate's minimum exam (order of the Rector of MU-Plovdiv R-№1831/28.06.2023) with an overall score of Excellent (5.50), and was granted the right of defense by order P-№3595/06.12.2023.
3. Actuality of the topic and appropriateness of the set goals and tasks
The topic of the presented dissertation is relevant for specialists in various fields such as medicine, sports and public health. The data presented will significantly enrich the knowledge about the effects of SARMs. Some adverse effects of the mentioned group of substances are also reported. Non-steroidal SARMs are widely available online, typically used in doses many times higher than those used in various clinical trials. Abuse of them puts the health of their users at risk. There are data on positive doping samples and punishments resulting from their use, as well as data on their induced toxicity in various body systems. The obtained results can be used to improve public health, upgrade knowledge about SARMs as candidate therapeutics that could improve the therapeutic process and quality of life in a number of socially significant diseases. The reported data could also serve professional athletes with the aim of more carefully selecting the nutritional supplements used and avoiding the presence of SARMs in them.
4. Knowing the problem
The documents presented to me, including the draft dissertation and the abstract, show that the doctoral student knows the state of the problem in detail and creatively evaluates the literature. The literature review is up to date, contains 256 sources, and is extremely well selected.
5. Research methodology
The chosen research methodology allows the set goal to be fully achieved and an adequate answer to be obtained to the tasks solved in the dissertation work. The aim of the dissertation work is to investigate the isolated effects and the combined action of two main factors: administration of a non-steroidal representative of the SARMs (ostarine or ligandrol) and systematic submaximal training, on physical work capacity and different body systems in rats.
In order to achieve the goal, clear tasks have been set that support its accomplishment:
To investigate the effects of selective androgen receptor modulators on endurance, maximal sprint speed and VO2max.
To investigate the effects of selective androgen receptor modulators on some markers of muscle oxidative capacity and carbohydrate metabolism.
To investigate the effects of SARMs on lipid profile and gene expression in m. gastrocnemius.
To study the adverse effects of selective androgen receptor modulators.
To investigate changes in the concentration of the hormones LH, FSH and testosterone in response to systematic submaximal training combined with SARM intake.
Statistical methods are appropriately chosen for this type of study.
6. Characterization and evaluation of the dissertation work
The dissertation contains 174 pages and is illustrated with 90 figures and 53 tables. It follows a classical structure and is composed as follows: List of abbreviations (1 page); Introduction (1 page); Literature review (29 pages); Purpose and tasks (1 page); Material and methods (9 pages); Results and Discussion (93 pages); Conclusions (2 pages); Statement of contributions (1 page); Bibliography (32 pages); Scientific publications related to the topic and participation in scientific forums related to the topic (2 pages).
Two experiments were conducted with a total of 80 rats to perform the tasks. Each experiment included 4 groups of 10 animals reared in individual metabolic cages. The allocation of groups made by the doctoral student was appropriate, because each experiment included a trained and an untrained group receiving the active substance, as well as a trained and an untrained group treated with a placebo. Body weight and food consumption of the rats were monitored weekly. Dr. Veselin Vasilev investigated different groups of indicators to achieve the goal, and appropriate organs were selected for the purposes of the study. The methods used by the doctoral student are modern and suitable for the implementation of the tasks. The study data were processed using appropriate statistical methods. As a result of the conducted experiments, Dr. Veselin Vasilev draws some important conclusions that reflect the results obtained:
The 8-week intake of ostarine significantly increased the concentration of total cholesterol in the blood plasma.
Treatment with ostarine does not cause significant changes in the concentration of gonadotropic hormones and testosterone.
A decrease in plasma glucose concentration was found in animals receiving ostarine.
Administration of ostarine decreases submaximal endurance but does not alter maximal oxygen consumption.
The administration of ostarine does not change the number of different types of blood cells as well as the hemoglobin concentration.
Ostarine treatment increased myostatin gene expression in m. gastrocnemius.
A two-month intake of ligandrol increases the plasma concentration of triglycerides.
Administration of ligandrol decreased maximal oxygen consumption but did not affect maximal sprint speed.
Both of the non-steroidal SARMs used increase the time spent in sleep.
Treatment with ligandrol increased the grip strength of the rats but did not change the maximal time to exhaustion.
Dr. Vassilev also found a decrease in follicle-stimulating hormone induced by taking ligandrol. The results are presented correctly, in the form of graphs and tables. The discussion is well structured and reflects the doctoral student's results in the light of the current literature on the problem.
7. Contributions and significance of the development for science and practice
The dissertation work has a contributory character, as the conclusions from its implementation will significantly enrich the knowledge about the effects of non-steroidal SARMs on adaptive changes in the body during systematic endurance training. The paper presents, for the first time, the effects of non-steroidal androgen receptor modulators on maximal oxygen consumption, running economy, maximal sprint speed, submaximal endurance, and maximal exhaustion time in rats. Specific effects on carbohydrate metabolism, the lipid profile, muscle gene expression, indices of muscle oxidative capacity, basic hematological parameters and some hormonal parameters are also reported.
The results of the development of the topic "Influence of selective androgen receptor modulators (SARM) on physical work capacity and some side effects in an experimental model" will bring theoretical and practical contributions to specialists in various fields such as medicine, sports and public health.
In conclusion, Dr. Vassilev formulated 5 scientific contributions, namely:
1. For the first time, the effects of non-steroidal AR modulators are investigated on the indicators of physical work capacity - MCC, SMI, VO$_{2\text{max}}$, MVI - in trained and untrained rats.
2. For the first time it was found that non-steroidal SARMs reduce submaximal endurance in conditions of submaximal training, do not affect MVI and MCC, but may have cross-directional effects regarding VO$_{2\text{max}}$.
3. Some adverse effects of nonsteroidal SARMs have been reported on the lipid and hormonal profile of rats.
4. Effects of non-steroidal AR modulators have been established on carbohydrate metabolism and gene expression of myostatin, VEGF-A, IGF-1 in m. gastrocnemius.
5. An effect of non-steroidal SARMs was found on sleep duration in trained and non-trained rats.
8. Evaluation of publications on the dissertation work
The doctoral student presents 5 publications, of which: one in a journal with an Impact factor, 2 referenced in internationally recognized databases (Scopus, WoS) and 2 in refereed journals. They are all in English. He is the first author of 3 of them, and equal co-author of the others, which speaks of the personal participation of the doctoral student in the conducted dissertation research and proves that the formulated contributions and obtained results are his personal merit. Dr. Vassilev also participated in 5 scientific forums related to the topic of the dissertation, 4 abroad and one in Bulgaria.
9. Personal participation of the doctoral student
The presented materials undoubtedly show the personal participation of the doctoral student in the conducted dissertation research, as well as that the formulated contributions and obtained results are his personal merit.
10. Abstract
The abstract contains 55 pages, is illustrated with 30 figures and 14 tables, is made according to the requirements of the relevant regulations and reflects the main results achieved in the dissertation.
11. Critical remarks and recommendations
I have no critical remarks or recommendations regarding the conducted research and the set of materials.
12. Personal impressions
Dr. Vasilev is an ambitious young scientist and teacher, dedicated to his profession with high knowledge of the discipline "Physiology".
13. Recommendations for future use of dissertation contributions and results
I recommend Dr. Vassilev to continue and further develop his scientific research in this interesting field.
CONCLUSION
The dissertation contains scientific, scientific-applied and applied results which represent an original contribution to science and fully meet the requirements of the Act on Development of the Academic Staff in the Republic of Bulgaria (ADASRB), the Regulations for its implementation and the Regulations of the Medical University of Plovdiv. The presented materials and dissertation results fully correspond to the specific requirements of the Regulations of the Medical University of Plovdiv for implementation of the ADASRB.
The dissertation shows that the doctoral student Dr. Veselin Atanasov Vassilev possesses in-depth theoretical knowledge and professional skills in the scientific specialty of Physiology, demonstrating qualities and skills for independent conduct of scientific research.
I confidently give my positive assessment of the research, presented by the above peer-reviewed dissertation, author’s abstract, obtained results and scientific contributions, and would recommend to the honorable members of the scientific jury to award the educational and scientific degree "Doctor" to Dr. Veselin Atanasov Vassilev in the doctoral program in "Physiology", professional direction 7. Health and sports; 7.1. Medicine.
18.03.2024
Reviewer:
Associate Professor Delyan Penev Delev, Ph.D
Luminescence properties of SrSi$_2$O$_2$N$_2$ doped with divalent rare earth ions
Volker Bachmann$^{a,b,*}$, Thomas Jüstel$^{a,c}$, Andries Meijerink$^b$, Cees Ronda$^{a,b}$, Peter J. Schmidt$^a$
$^a$Philips Research Laboratories, Weisshausstr. 2, D-52066 Aachen, Germany
$^b$Department of Condensed Matter, Debye Institute, Utrecht University, P.O. Box 80 000, 3508 TA Utrecht, The Netherlands
$^c$University of Applied Sciences Münster, Stegerwaldstr. 39, D-48565 Steinfurt, Germany
Received 19 August 2005; accepted 18 November 2005
Available online 28 December 2005
Abstract
The optical properties of SrSi$_2$O$_2$N$_2$ doped with divalent Eu$^{2+}$ and Yb$^{2+}$ are investigated. The Eu$^{2+}$ doped material shows efficient green emission peaking at around 540 nm that is consistent with $4f^7 \rightarrow 4f^65d$ transitions of Eu$^{2+}$. Due to the high quantum yield (90%) and high quenching temperature (> 500 K) of luminescence, SrSi$_2$O$_2$N$_2$:Eu$^{2+}$ is a promising material for application in phosphor conversion LEDs. The Yb$^{2+}$ luminescence is markedly different from Eu$^{2+}$ and is characterized by a larger Stokes shift and a lower quenching temperature. The anomalous luminescence properties are ascribed to impurity trapped exciton emission. Based on temperature and time dependent luminescence measurements, a schematic energy level diagram is derived for both Eu$^{2+}$ and Yb$^{2+}$ relative to the valence and conduction bands of the oxonitridosilicate host material.
© 2005 Elsevier B.V. All rights reserved.
Keywords: Luminescence conversion; Divalent lanthanides; Oxonitrides; Phosphor converted LEDs; Ytterbium; Europium
1. Introduction
An exciting new development in the field of luminescent materials is the search for new phosphors for the conversion of the near-UV or blue emission from (In,Ga)N LEDs into visible light. In the past, luminescence research has mainly focused on conventional phosphors for the conversion of 254 nm UV radiation from a mercury discharge into visible light. Research has contributed to the development of a mature product with stable and efficient (90% quantum efficiency) phosphors and, at present, work on phosphors for fluorescent tubes is aimed at incremental improvements in the stability,
morphology and efficiency of existing phosphors. Luminescence conversion of near-UV or blue light into longer wavelength radiation applied in state-of-the-art white LED lamps poses new challenges in phosphor research, especially in view of the small energy difference between pump and emission wavelengths. In the search for new phosphors for inorganic LEDs, luminescent materials with high charge densities between the activator and its surroundings are investigated. Doping such materials with Ce$^{3+}$ or Eu$^{2+}$ can lead to strong absorption in the near-UV to blue spectral range combined with efficient emission in the visible. For example, oxides like Y$_3$Al$_5$O$_{12}$:Ce$^{3+}$ (YAG:Ce) or sulfides like CaS:Eu$^{2+}$ are currently applied in phosphor-converted LEDs (pcLEDs) as luminescence converters [1,2]. Nevertheless, pronounced temperature quenching of luminescence and low chemical stability restrict their use in LED lighting applications where a long device lifetime under harsh conditions is required.
Oxonitridosilicates, known as siones, represent a class of solid compounds which can be formally derived from oxosilicates by partial substitution of oxygen by nitrogen. They can combine attractive material properties like high mechanical hardness and strength; exceptional thermal and chemical stability is also reported [3]. Higher condensed siones are known to be highly covalent and stable towards oxidation and hydrolysis [3]. The SrSi$_2$O$_2$N$_2$ host lattice we are about to report was developed in a synthesis approach that earlier led to efficient nitridosilicate phosphors M$_2$Si$_5$N$_8$:Eu$^{2+}$ (M = Sr, Ba) [4–6]. The strong interest in this class of novel materials is reflected by a number of publications in the last few years on the system Ln—Si—O—N (Ln = La, Gd, Y) in general [7] and the title compound in detail [8–12]. This paper describes the synthesis of, and further studies on the optical properties of, luminescent materials based on the SrSi$_2$O$_2$N$_2$ host lattice doped with divalent rare earth ions, viz. Eu$^{2+}$ and Yb$^{2+}$. The luminescence properties are investigated and the luminescence mechanism is discussed. The luminescence characteristics of Eu$^{2+}$-doped SrSi$_2$O$_2$N$_2$ are shown to be very promising for application in pcLEDs. This is substantiated by a recent paper on an all-nitride, phosphor-converted white light emitting diode using the Eu$^{2+}$-doped title compound [13].
2. Experimental methods
All samples were synthesized by a conventional solid-state reaction. Mixtures of SrCO$_3$ (Philips Lighting Components, 99.9%), Si$_3$N$_{4-x}$(NH)$_{3x/2}$ ($x \approx 1$, made by thermal decomposition of Si(NH)$_2$ as described in Ref. [14], O content < 2 wt%), and the rare earth dopants Eu$_2$O$_3$ (Alfa Aesar, REacton 99.999%) or Yb$_2$O$_3$ (Auer-Remy, 99.99%) were prepared by ball milling and fired for 2–4 h at 1200–1500 °C in a reducing atmosphere (H$_2$/N$_2$) in a tube furnace. After milling the raw product, the powders were washed with water and isopropanol. XRD analysis was performed on a Philips PW 1729 diffractometer at room temperature, using Cu K$_\alpha$ radiation.
Luminescence spectra were recorded between 4 and 300 K on a Spex Fluorolog 2 spectrofluorometer equipped with a helium flow cryostat. The set-up is described in detail in Ref. [15]. To study thermal quenching between 300 and 600 K, luminescence spectra were measured on a modified Edinburgh Instruments FL900 spectrofluorometer using a Xe lamp as excitation source. The spectra were measured with a spectral resolution of 0.5–1.0 nm. The complete set-up was described earlier [16]. Luminescence life times were measured using a Lambda Physik dye laser pumped by a Lambda Physik LPX100 excimer laser (operating at 308 nm) for pulsed excitation at 450 nm using a Coumarin 47 dye. Luminescence decay curves were recorded using a 0.25 m Acton Research monochromator and an RCA C31034 photomultiplier tube in combination with a Tektronix digital oscilloscope.
3. Experimental results
3.1. Crystal structure
Phase purity of SrSi$_2$O$_2$N$_2$ samples was checked by means of X-ray powder diffraction. The crystal
structure is similar to that of CaSi$_2$O$_2$N$_2$ [17]. Both compounds represent a new class of layered materials with (Si$_2$O$_2$N$_2$)$^{2-}$ layers that consist exclusively of SiON$_3$ tetrahedra. The N atoms bridge three Si atoms, while the O atoms are bound terminally to Si. There are four types of sites for the Sr$^{2+}$ ions, each surrounded by six oxygen atoms in a distorted trigonal prismatic manner [18]. The XRD patterns of the SrSi$_2$O$_2$N$_2$ phases described in this paper show similarities to the pattern assigned to a high temperature (HT) phase of SrSi$_2$O$_2$N$_2$ in Ref. [19].
3.2. SrSi$_2$O$_2$N$_2$:Eu$^{2+}$
The room temperature excitation and emission spectra of Eu$^{2+}$-doped SrSi$_2$O$_2$N$_2$ are shown in Fig. 1. The emission spectrum shows a single emission band peaking at 539 nm. The position and width of the emission band differ from those reported for a SrSi$_2$O$_{2-\delta}$N$_{2+\frac{2}{3}\delta}$:Eu ($\delta \sim 1$) compound with a similar XRD pattern [12], and are comparable with the emission spectrum of a phase that was described as the high-temperature modification of SrSi$_2$O$_2$N$_2$:Eu [11]. The emission band is assigned to the $4f^65d^1 \rightarrow 4f^7$ (fd) transition on Eu$^{2+}$. The Stokes’ shift (SS) of the fd emission can be estimated by taking twice the energy difference between the zero-phonon line energy and the energy of the emission maximum [20]. The spectral position of the zero-phonon line was taken to be the point of intersection of the absorption and emission spectra, $\lambda_{0-0} = 494$ nm (2.51 eV). This yields a value of 0.42 eV for the SS, which is a typical value for fd emission from Eu$^{2+}$ [20,21]. For example, in aluminate, silicate and phosphate host lattices, SSs between 0.25 and 1 eV are reported for fd emission from Eu$^{2+}$ [20,21]. The full-width at half-maximum (FWHM) of the emission band is 0.3 eV, slightly smaller than commonly observed for Eu$^{2+}$ fd emission [22]. The position of the emission band is at longer wavelengths than for Eu$^{2+}$ in oxosilicates, which typically show Eu$^{2+}$ emission in the blue spectral region [23,24]. One of the few exceptions is (Ba,Sr)$_2$SiO$_4$:Eu$^{2+}$, which emits in the green to yellow spectral range, viz. 520–580 nm [21]. The shift of the emission to longer wavelengths is ascribed to a higher degree of covalency between the activator ion and its surroundings (nephelauxetic effect) [25].
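The Stokes shift quoted above follows from simple photon-energy arithmetic; a minimal check using only the two wavelengths given in the text:

```python
# Stokes shift estimate: SS = 2 * (E_zero-phonon - E_emission-maximum).
HC = 1239.842  # eV * nm (photon energy-wavelength conversion)

def photon_energy_ev(wavelength_nm):
    return HC / wavelength_nm

e_zpl = photon_energy_ev(494.0)  # zero-phonon line, ~2.51 eV
e_em = photon_energy_ev(539.0)   # emission maximum, ~2.30 eV
stokes_shift = 2.0 * (e_zpl - e_em)
print(f"Stokes shift = {stokes_shift:.2f} eV")  # 0.42 eV, as in the text
```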
For SrSi$_2$O$_2$N$_2$:RE$^{2+}$ the bonding situation is comparable to orthosilicates like M$_2$SiO$_4$:RE$^{2+}$, where the RE dopant is coordinated by only O-atoms as well [25]. However, due to the higher degree of condensation of the SrSi$_2$O$_2$N$_2$ lattice, the stability against hydrolysis is much higher compared to alkaline earth orthosilicates.
The decay time of the fd emission of Eu$^{2+}$ is short, since the transition involved is parity allowed. Typical values are around 1 $\mu$s [24]. In Fig. 2a the luminescence decay curves for the Eu$^{2+}$ emission are shown for various temperatures between 4 and 723 K. All decay curves show a single exponential decay behavior. Between 4 and 450 K the decay time $\tau_{1/e}$ is 1.15 $\mu$s, close to the typical value for the fd emission from Eu$^{2+}$. It has been shown that the radiative decay time of the Eu$^{2+}$ fd emission decreases with increasing energy of the emission maximum, in agreement with the theoretical relation between the radiative transition probability and the energy of the electric dipole transition [24]. The radiative decay rate $A_R$ ($= 1/\tau_{1/e}$) is proportional to the third power of the energy of the emission band ($\sigma$, energy in cm$^{-1}$) [24]:

$$A_R(\text{Eu}) = \frac{1}{\tau} = 5.06 \times 10^{-8} |\langle 5d|r|4f\rangle|^2 \chi \sigma^3, \tag{1}$$

Fig. 2. (a) Temperature dependent decay curves of the Eu$^{2+}$ emission (for 539 nm emission under 450 nm excitation) in SrSi$_2$O$_2$N$_2$:Eu$^{2+}$ 2%; (b) temperature dependence of the integrated emission intensity (black squares) and luminescence decay times (open squares) derived from the curves in (a). The lines through the data points are fits to Eq. (2).
where $\langle 5d|r|4f\rangle$ is the radial overlap integral and $\chi$ equals $(n(n^2 + 2)^2)/9$, and corrects for the dependence of $A_R$ on the refractive index $n$ [24]. With $\langle 5d|r|4f\rangle = 0.81$ [24] and $n = 1.75$ (estimated from refractive indices reported for similar oxonitridosilicates [26–28]), the value for $\chi$ is 5 and the calculated radiative life time is 0.94 $\mu$s. This is very close to the experimentally observed decay time (1.15 $\mu$s); the difference is within the uncertainty caused, e.g., by the fact that the refractive index of SrSi$_2$O$_2$N$_2$ is not known exactly, which gives an uncertainty of about 10% in $\chi$.
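The lifetime estimate can be reproduced from Eq. (1) with the numbers quoted above (a sketch; all input values are taken from the text):

```python
# Radiative life time of the Eu2+ fd emission from Eq. (1):
# A_R = 5.06e-8 * |<5d|r|4f>|^2 * chi * sigma^3, sigma in cm^-1.
matrix_element = 0.81                      # <5d|r|4f> from Ref. [24]
n = 1.75                                   # estimated refractive index
chi = n * (n**2 + 2.0)**2 / 9.0            # local-field factor, ~5
sigma = 1.0e7 / 539.0                      # emission energy in cm^-1
A_R = 5.06e-8 * matrix_element**2 * chi * sigma**3   # rate in s^-1
tau_us = 1.0e6 / A_R                       # life time in microseconds
print(f"chi = {chi:.2f}, tau_rad = {tau_us:.2f} us")  # ~0.95 us (0.94 us
                                                      # with chi rounded to 5)
```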
The observation that the experimentally observed luminescence life time is very close to the predicted radiative life time in the temperature regime 4–450 K indicates that in this temperature range the luminescence quantum efficiency of the Eu$^{2+}$ emission is close to unity.
Above 450 K the luminescence decay time decreases, as is shown in Fig. 2b. The decrease in the luminescence decay time above 450 K is accompanied by a decrease in the emission intensity, indicating that non-radiative relaxation sets in above 450 K. In Fig. 2b both the temperature dependence of the luminescence life time and the luminescence intensity are plotted. For the plot of the emission intensity, care has been taken to take into account changes in the absorption due to thermal broadening. An analysis based on the thermal broadening of the absorption band and the Kubelka–Munk model for the reflection shows that the changes in reflection over the temperature range investigated are less than 5%, and will not significantly influence the measured luminescence intensity. From the temperature dependence of the luminescence decay time and the emission intensity, the luminescence quenching temperature (the temperature at which the luminescence intensity has decreased to half its initial value) is estimated to be about 600 K. Current high-brightness LEDs can reach temperatures around 450 K. At this temperature the thermal quenching of the fd luminescence of Eu$^{2+}$ in SrSi$_2$O$_2$N$_2$ is marginal.
The mechanism for quenching of the fd luminescence of Eu$^{2+}$ may be either quenching by thermally activated cross-over from the $4f^65d$ excited state to the $4f^7$ ground state, or thermally activated photoionization from the $4f^65d$ state to the conduction band. The former mechanism is historically the most widely used explanation, and for a proper analysis of the temperature dependence of the luminescence intensity the Struck–Fonger model can be used [29]. More recently, it has been shown that in several host lattices thermally induced ionization of the 5d electron is responsible for the quenching of the Eu$^{2+}$ fd luminescence [30]. Especially for host lattices, where the SS for the fd emission is small and, yet, the quenching temperature for the luminescence is low, it is clear that thermally induced ionization from the fd state to the conduction band is responsible for temperature quenching of the luminescence. Confirmation has been obtained by temperature dependent photoconductivity experiments [31]. In the present composition the SS is small and the luminescence quenching temperature is relatively high, and there is no clear proof for either of the mechanisms being responsible for the temperature quenching. Based on the results for Yb$^{2+}$ (vide infra) it is likely that the 5d state of Eu$^{2+}$ is close to the conduction band edge, and that thermally activated ionization from the $4f^65d$ state is responsible for temperature quenching of the luminescence. In this case, the temperature dependence of the luminescence intensity and decay time are described by a modified Arrhenius equation. This equation is easily derived by taking the radiative and thermally activated non-radiative decay into account.
$$\tau(T) = \frac{\tau_0}{1 + \tau_0 C e^{-E_A/kT}}, \quad I(T) = \frac{I_0}{1 + D e^{-E_A/kT}}, \tag{2}$$
where $\tau(T)$ and $I(T)$ are the luminescence decay time and intensity at temperature $T$ (in K), respectively, $\tau_0$ and $I_0$ are the decay time and intensity at 0 K, $C$ and $D$ are rate constants for the thermally activated escape ($D$ contains $I_0$ as well), $E_A$ is the activation energy for this process, i.e. the energy gap between the Eu$^{2+}$ $4f^65d^1$ excited level and the bottom of the conduction band, and $k$ is the Boltzmann constant. The intensity at 0 K was normalized to 1. In Fig. 2b the best fits to this equation are shown for both the emission intensity and the luminescence decay time. The energy gap $E_A$ for the best fits of emission intensity and decay time is 0.6 eV (marked as 2 in Fig. 4). For the emission intensity, $D$ and $I_0$ are $2.76 \times 10^4$ and 0.976, respectively. For the decay time, $C$ and $\tau_0$ are $4.24 \times 10^{10}$ and 1.16 $\mu$s, respectively.
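A quick consistency check on these fitted values (a sketch; the half-intensity condition $D\,e^{-E_A/kT} = 1$ is our reading of Eq. (2) with the 0 K intensity normalized to 1, not a statement from the paper):

```python
# Temperature at which Eq. (2) predicts the intensity to drop to I_0/2,
# using the fitted parameters quoted in the text.
import math

k_B = 8.617e-5          # Boltzmann constant, eV/K
E_A = 0.6               # fitted activation energy, eV
D = 2.76e4              # fitted prefactor of the intensity expression
T_half = E_A / (k_B * math.log(D))
print(f"T(I = I_0/2) = {T_half:.0f} K")  # same order as the ~600 K quenching
                                         # temperature read off the data
```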
The optical band gap of the SrSi$_2$O$_2$N$_2$ host lattice can be obtained from the reflection spectrum of the undoped host lattice. In Fig. 3 the reflection spectrum is shown. The band edge is situated at 220 nm, which shows that the band gap is around 5.6 eV. Together with the information of the energy difference between the lowest $4f^65d$
Fig. 3. Diffuse reflection spectrum of undoped SrSi$_2$O$_2$N$_2$ ($T = 298\, \text{K}$). See Ref. [6] for experimental details.
state and the $4f^7$ ground state (for which the energy of the zero-phonon line, 2.5 eV, marked as 1 in Fig. 4, is used), a complete energy level diagram for the Eu$^{2+}$ ion in the SrSi$_2$O$_2$N$_2$ host lattice can be derived. The diagram is shown on the left-hand side of Fig. 4. In this diagram, the $4f^7$ ground state is situated 2.5 eV above the top of the valence band. Note that, although useful as a schematic picture, one should realize that in this type of diagram energies obtained for relaxed and unrelaxed excited configurations are combined.

Fig. 4. Energy level diagram for Eu$^{2+}$ (left-hand side) and Yb$^{2+}$ (right-hand side) in SrSi$_2$O$_2$N$_2$ derived from luminescence measurements. (* emissive state of the trapped exciton emission of Yb$^{2+}$). See also text.

The temperature dependent luminescence properties show that luminescence quenching does not set in below 500 K. A high quenching temperature is important, especially in high-brightness LEDs, where the temperature of the phosphor can increase to about 450 K. The presently used phosphors in white-light LEDs suffer from luminescence temperature quenching at these elevated temperatures. The absence of luminescence quenching below 500 K, in combination with the high luminescence quantum yield (90% for SrSi$_2$O$_2$N$_2$:Eu$^{2+}$ 2%), implies that this material is very promising for application as a luminescence converter in phosphor-converted LEDs [13].

3.3. SrSi$_2$O$_2$N$_2$:Yb$^{2+}$

The luminescence properties of Eu$^{2+}$ and Yb$^{2+}$ are often quite similar in the same host lattice [22,32–35]. In general, the $4f^{13}5d$ excited state of Yb$^{2+}$ is at a slightly higher energy ($\sim 0.1$ eV) than the $4f^65d$ excited state of Eu$^{2+}$, resulting in a slightly blue-shifted fd emission for Yb$^{2+}$. In Fig. 5 the excitation, emission and reflection spectra are depicted for Yb$^{2+}$-doped SrSi$_2$O$_2$N$_2$. The lowest energy fd excitation band is around 450 nm, very similar to the position of the lowest energy fd excitation band for Eu$^{2+}$. Comparing the excitation and reflection spectra of the Eu$^{2+}$- and Yb$^{2+}$-doped samples, one must come to the conclusion that the crystal field splitting for Yb$^{2+}$ is larger than that for Eu$^{2+}$. This can be seen in the much more pronounced fine structure of the two Yb$^{2+}$ spectra. Reasons for the larger crystal field splitting are the larger effective nuclear charge and the smaller atomic radius of Yb$^{2+}$ at the same formal charge compared to Eu$^{2+}$.

Fig. 5. Excitation spectrum (dashed line) for 615 nm emission, emission spectrum (drawn line) under 450 nm excitation and diffuse reflection spectrum (dotted line) of SrSi$_2$O$_2$N$_2$:Yb$^{2+}$ 2%. All spectra have been recorded at 298 K.

The Yb$^{2+}$ emission band, however, is strongly red-shifted ($\lambda_{\text{max}} = 620$ nm for Yb vs. 540 nm for Eu) and both the FWHM (0.32 eV for Yb vs. 0.30 eV for Eu) and the Stokes shift (0.54 eV for Yb vs. 0.42 eV for Eu) are larger than for the Eu$^{2+}$ emission band. This type of anomalous red-shifted emission has been reported before for Yb$^{2+}$. For example, in the fluorite host lattices CaF$_2$ and SrF$_2$ it has been observed that in CaF$_2$ the fd luminescence of Eu$^{2+}$ and Yb$^{2+}$ is quite similar, but in SrF$_2$ the emission for Yb$^{2+}$ is red-shifted (rather than slightly blue-shifted), and the emission is characterized by a larger SS and FWHM and a lower luminescence quenching temperature [36]. The anomalous luminescence properties of Yb$^{2+}$ are explained by considering the position of the lowest energy fd state. The anomalous emission is observed when the lowest energy fd state is situated in the conduction band. When the fd state is at energies higher than the conduction band edge, excitation into the fd state is followed by photoionization and trapping of the electron close to the lanthanide impurity, forming an impurity-trapped exciton state. Luminescence is observed from this impurity-trapped exciton state. Convincing evidence for this model was obtained by photoconductivity experiments [37,38]. In host lattices where Yb$^{2+}$ or Eu$^{2+}$ shows the anomalous emission, excitation into the lowest energy fd band gave a clear photoconductivity signal, confirming that the lowest energy fd state is situated in the conduction band. At present, there are a number of oxide and fluoride host lattices in which impurity-trapped exciton emission is observed for Yb$^{2+}$ and Eu$^{2+}$ [39]. Moine and co-workers describe in Refs. [36,40] the temperature dependence of the Yb$^{2+}$ emission in SrF$_2$.
The present results on Eu$^{2+}$- and Yb$^{2+}$-doped SrSi$_2$O$_2$N$_2$ show that in this oxonitridosilicate the lowest fd state is below the conduction band for Eu$^{2+}$ and above the conduction band edge for Yb$^{2+}$, similar to the situation found for SrF$_2$.
Luminescence decay times have been measured for the Yb$^{2+}$ emission between 4 and 640 K. In Fig. 6a the luminescence decay curves are shown for several temperatures. All decay curves are single-exponential. In Fig. 6b the luminescence decay times, derived from fitting the curves of Fig. 6a to a single exponential function, are plotted as a function of temperature. The luminescence decay time at 4 K is 65 $\mu$s. This is shorter than the low temperature decay time of the fd emission from Yb$^{2+}$ (typically 10 ms), and is in line with the low temperature decay times for impurity (Yb)-trapped exciton emission (typically 50–500 $\mu$s). The relatively short luminescence decay time supports the assignment of the Yb emission to impurity-trapped exciton emission. Above 50 K the luminescence decay time decreases, as is shown in Fig. 6b. There are two possible reasons for this effect. First, thermal quenching may set in, leading to an increase in the non-radiative relaxation rate and, therefore, to a decrease of the overall decay time. Second, thermal population of higher energy levels with faster decay times can explain this decrease. To distinguish between the two scenarios we studied the emission intensity in the same temperature range as the decay time. If non-radiative relaxation were responsible, the intensity would be expected to decrease in the same manner as the decay time. Contrary to this, we found an increase of the emission intensity starting above 50 K. This increase continues upon further heating, until a decrease in the emission intensity is observed above 280 K. Based on this observation, it is evident that thermal population of a higher energy level with a faster decay time is responsible for the decrease of the luminescence life time between 50 and 280 K.
The increase in emission intensity concurrent with the decrease in life time has been observed before, and is explained by a constant non-radiative decay channel at low temperatures. An increase in the radiative decay rate is in that case accompanied by an increase in luminescence intensity, since the radiative decay channel is favored over the non-radiative channel. Above 280 K, thermal quenching sets in, which decreases the emission intensity and causes a further decrease of the decay time. To determine the activation energy of the thermal quenching we used the emission intensities measured between 280 and 640 K. The best fit to Eq. (2) gives an activation energy $E_A$ of 0.3 eV; the fitted values for $I_0$ and $D$ are $4 \times 10^6$ and 2441, respectively. Together with the information on the energy difference between the emitting impurity-trapped exciton state (marked as * in Fig. 4) and the $4f^{14}$ ground state (for which the energy of the zero-phonon line, 2.3 eV, marked as 3 in Fig. 4, is used), a complete energy level diagram for Yb$^{2+}$ in the SrSi$_2$O$_2$N$_2$ host lattice can be derived. As shown on the right-hand side of Fig. 4, the $4f^{14}$ ground state is located 3.0 eV above the top of the valence band.

Fig. 6. (a) Temperature dependent decay curves of the Yb$^{2+}$ emission (for 615 nm emission under 450 nm excitation) in SrSi$_2$O$_2$N$_2$:Yb$^{2+}$ 2%; (b) temperature dependence of the integrated emission intensity (black diamonds) and luminescence decay times (open diamonds) derived from the curves in (a). The lines through the data points are fits to Eq. (2).
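Assuming the same half-quenching criterion as for Eu$^{2+}$, namely $D\,e^{-E_A/kT} = 1$ from Eq. (2) (our interpretation, not a statement from the paper), the fitted parameters quantify how much lower the Yb$^{2+}$ quenching temperature is than that of Eu$^{2+}$:

```python
# Half-quenching temperatures from the fitted Arrhenius parameters.
# Criterion D * exp(-E_A / (k_B * T)) = 1 (i.e. I = I_0/2) is assumed.
import math

k_B = 8.617e-5  # Boltzmann constant in eV/K

def t_half(E_A, D):
    return E_A / (k_B * math.log(D))

T_Yb = t_half(0.3, 2441.0)    # Yb2+ fit values from this section
T_Eu = t_half(0.6, 2.76e4)    # Eu2+ fit values from Section 3.2
print(f"T_half(Yb) = {T_Yb:.0f} K, T_half(Eu) = {T_Eu:.0f} K")
```

The much lower value for Yb$^{2+}$ is consistent with the onset of quenching already above 280 K reported in the text.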
4. Conclusions
The temperature dependence and life times are reported for the f–d luminescence of Eu$^{2+}$ and
Yb$^{2+}$ in SrSi$_2$O$_2$N$_2$. For Eu$^{2+}$ a very efficient $4f^65d \rightarrow 4f^7$ emission at relatively long wavelength in the green spectral range (around 540 nm) is observed. The quenching temperature of the emission is high (no quenching below 500 K), which makes SrSi$_2$O$_2$N$_2$:Eu$^{2+}$ a promising phosphor for high-power, phosphor-converted LEDs [13]. For Yb$^{2+}$ an anomalous emission is observed around 615 nm, which is characterized by a large Stokes’ shift and a low quenching temperature. The emission is ascribed to Yb$^{2+}$-trapped exciton luminescence. Based on the luminescence measurements, an energy diagram is derived which includes the positions of the local states of Eu$^{2+}$ and Yb$^{2+}$, as well as the bottom of the conduction band and the top of the valence band of the oxonitridosilicate host. For Eu$^{2+}$, the lowest energy fd state is below the edge of the conduction band, whereas for Yb$^{2+}$ the lowest fd state is positioned within the conduction band.
References
[1] P. Schlotter, J. Baur, Ch. Hielscher, M. Kunzer, H. Obloh, R. Schmidt, J. Schneider, Mater. Sci. Eng. B 59 (1999) 390.
[2] Y. Hu, W. Zhuang, H. Ye, S. Zhang, Y. Fang, X. Huang, J. Lumin. 111 (2005) 139.
[3] W. Schnick, Int. J. Inorg. Mater. 3 (2001) 1267.
[4] H. Huppertz, W. Schnick, Acta Crystallogr. C 53 (1997) 1751.
[5] T. Schlieper, W. Milius, W. Schnick, Z. Anorg. Allg. Chem. 621 (1995) 1380.
[6] H.A. Höppe, H. Lutz, P. Morys, W. Schnick, A. Seilmeyer, J. Phys. Chem. Sol. 61 (2000) 2001.
[7] J.W.H. van Krevel, On new rare-earth doped M–Si–Al–O–N materials, Ph.D. Thesis, Universiteitsdrukkerij TU Eindhoven, Eindhoven, 2000.
[8] V. Bachmann, T. Jüstel, C.R. Ronda, P.J. Schmidt, European Patent EP04106286.0.
[9] P. Schmidt, T. Jüstel, W. Mayr, H.-D. Bausen, W. Schnick, H. Höppe, World Patent WO 2004/036962 A1.
[10] H. Tamaki, S. Kakashima, World Patent WO 2004/039915 A1.
[11] T. Fiedler, F. Jermann, World Patent WO 2005/030905 A1.
[12] Y.Q. Li, A.C.A. Delsing, G. de With, H.T. Hintzen, Chem. Mater. 17 (2005) 3242.
[13] R. Mueller-Mach, G. Mueller, M.R. Krames, H.A. Höppe, F. Stadler, W. Schnick, T. Juestel, P. Schmidt, Phys. Stat. Sol. (A) 202 (2005) 1721.
[14] H. Lange, G. Wötting, G. Winter, Angew. Chem. 103 (1991) 1606.
[15] J.F. Suyver, J.J. Kelly, A. Meijerink, J. Lumin. 104 (3) (2003) 187.
[16] T. Jüstel, J.-C. Krupa, D.U. Wiechert, J. Lumin. 93 (2001) 179.
[17] H.A. Höppe, F. Stadler, O. Oeckler, W. Schnick, Angew. Chem. 116 (41) (2004) 5656.
[18] W. Schnick, private communication.
[19] W.H. Zhu, P.L. Wang, W.Y. Sun, D.S. Yan, J. Mater. Sci. Lett. 13 (1994) 560.
[20] A. Meijerink, G. Blasse, J. Lumin. 43 (1989) 283.
[21] S.H.M. Poort, H.M. Reijnhoudt, H.O.T. van der Kuip, G. Blasse, J. Alloys Comp. 241 (1996) 75.
[22] P. Dorenbos, J. Lumin. 104 (4) (2003) 239.
[23] W.M. Yen, M.J. Weber, Inorganic Phosphors, CRC Press, Boca Raton, FL, USA, 2004.
[24] S.H.M. Poort, A. Meijerink, G. Blasse, J. Phys. Chem. Sol. 58 (9) (1997) 1451.
[25] P. Schmidt, T. Jüstel, C. Clausen, W. Mayr, Meeting Abstracts Volume 2002-1 of the 201st Meeting of the Electrochemical Society, Abstract No. 1195.
[26] D.C. Pereyra, M.I. Alayo, Mater. Char. 50 (2003) 167.
[27] K. Worhoff, L.T.M. Hilderink, A. Driessen, P.V. Lambeek, Proceedings ECS, 2001-7, 191.
[28] M. Serenyi, M. Racz, T. Lohner, Vacuum 61 (2001) 245.
[29] C.W. Struck, W.H. Fonger, J. Lumin. 10 (1975) 1.
[30] E.v.d. Kolk, P. Dorenbos, C.W.E. van Eijk, S.A. Basun, G.F. Imbusch, W.M. Yen, Phys. Rev. B 71 (2005) 165120/1.
[31] U. Happek, S.A. Basun, J. Choi, J.K. Krebs, M. Raukas, J. Alloys Comp. 303–304 (2000) 198.
[32] S. Lizzo, A. Meijerink, G. Blasse, J. Lumin. 59 (1994) 185.
[33] S. Lizzo, A. Meijerink, G.J. Dirksen, G. Blasse, J. Lumin. 63 (1995) 223.
[34] S. Lizzo, A.H. Velders, A. Meijerink, G.J. Dirksen, G. Blasse, J. Lumin. 65 (1995) 303.
[35] J.W.M. Verwey, G.J. Dirksen, G. Blasse, J. Phys. Chem. Sol. 53 (3) (1992) 367.
[36] B. Moine, B. Courtois, C. Pedrini, J. Lumin. 48 & 49 (1991) 501.
[37] B. Moine, C. Bruna, C. Pedrini, J. Lumin. 45 (1990) 248.
[38] M. Raukas, S.A. Basun, W. van Schaik, U. Happek, W.M. Yen, Mater. Sci. Forum 239–241 (1999) 249.
[39] P. Dorenbos, J. Phys.: Condens. Matter 15 (2003) 2645.
[40] B. Moine, C. Pedrini, D.S. McClure, H. Bill, J. Lumin. 40&41 (1988) 299.
Kinetic Axion $F(R)$ Gravity Inflation
V.K. Oikonomou,1)
1) Department of Physics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece∗
In this work we investigate the quantitative effects of the misalignment kinetic axion on $R^2$ inflation. Because the kinetic axion possesses a large kinetic energy which dominates its potential energy, during inflation its energy density redshifts as a stiff matter fluid and it evolves in a constant-roll way, making the second slow-roll index non-trivial. At the equations of motion level, the $R^2$ term dominates the evolution, thus the next possible effect of the axion is to be found at the level of the cosmological perturbations, via the non-trivial second slow-roll index. As we show, the kinetic axion causes features in the power spectrum of the scalar perturbations and extends the duration of the inflationary era to an extent that it may cause a 15% decrease in the tensor-to-scalar ratio of the vacuum $R^2$ model. This occurs because, as the $R^2$ model approaches its unstable quasi-de Sitter attractor in the phase space of $F(R)$ gravity due to the $\langle R^2 \rangle$ fluctuations, the kinetic axion dominates over the $R^2$ inflation, and in effect the background equation of state is described by a stiff era, or equivalently a kination era, different from the ordinary radiation domination era. This in turn affects the duration of the inflationary era, increasing the e-foldings number by up to 5 e-foldings in some cases, depending on the reheating temperature, which has a significant quantitative effect on the observational indices of inflation and especially on the tensor-to-scalar ratio.
PACS numbers: 04.50.Kd, 95.36.+x, 98.80.-k, 98.80.Cq, 11.25.-w
I. INTRODUCTION
Dark matter, along with inflation, dark energy and the mysterious reheating-early radiation domination era, is one of the great mysteries of modern theoretical physics. These problems have puzzled theoretical physicists for decades, and to date no definite answer has been given to the questions they pose. Of the above evolution eras of our Universe, only the dark energy era has been observationally verified, whereas the rest remain at the speculation level. However, inflation and the closely related post-inflationary reheating era are going to be severely scrutinized in the next fifteen years, both by stage-4 Cosmic Microwave Background (CMB) experiments [1, 2] and by gravitational wave experiments, interferometric and otherwise, such as LISA, DECIGO, BBO, the Einstein Telescope and so on [3–10], see also [11]. The gravitational wave interferometric experiments will directly probe tensor modes which reentered the Hubble horizon during the mysterious reheating era, thus small wavelength modes that carry information on both the observational indices of inflation and the reheating era, while the stage-4 CMB experiments will search for the $B$-mode polarization in the CMB. The $B$-modes can be generated by two distinct effects: by the conversion of $E$-modes to $B$-modes via gravitational lensing for small angular scales, or large-$\ell$ CMB modes, or by tensor modes for large angular scales, or small-$\ell$ CMB modes. Dark matter, though, seems unreachable to us for the time being. Although there are many proposals for dark matter [12–17], it still remains a mystery what dark matter is comprised of. The basic known facts about dark matter are that it has a particle nature, based on observations like the Bullet Cluster, and that the dark matter particle has a small mass. An appealing candidate, as elusive among other particles as dark matter itself, is the axion [18–77].
With the terminology axion, we do not refer to the QCD axion, but to an axion-like particle, in which case the primordial $U(1)$ Peccei-Quinn symmetry of the axion is broken during inflation, and the axion develops a non-zero vacuum expectation value. The axion is a light scalar field, and thus it is highly motivated from a string theory point of view, since scalar fields are the string moduli, which are a basic and profound characteristic of string theory. The axion is an elusive particle with an extremely small mass, and admittedly quite hard to detect; however, there are direct and indirect ways to detect it, for example from observations of neutron stars [67], or through black hole superradiance effects [53–56]. In the future it might also be detected in ground experiments, utilizing the conversion of the axion to photons in the presence of a strong magnetic field. The most fascinating feature of the axion, and of axion-like particles in general, is the fact that during inflation the primordial $U(1)$ symmetry is broken, thus allowing the axion to have a large vacuum expectation value, and no cosmic string remnants pollute the post-inflationary era. Another fascinating fact about the axion is that when the Hubble rate of the Universe becomes of the same order as the axion mass, the axion commences oscillations
around its vacuum expectation value, and its energy density redshifts as $\rho_a \sim a^{-3}$, i.e. as cold dark matter. Hence the axion can be the predominant component of cold dark matter in the Universe.
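The cold-dark-matter scaling of an oscillating axion can be checked numerically. The sketch below is an illustration (the toy units, with $m_a = 1$, and the fixed radiation background are our assumptions, not taken from the paper): it integrates $\ddot\phi + 3H\dot\phi + m_a^2\phi = 0$ and verifies that once $H \ll m_a$ the cycle-averaged energy density obeys $\rho_a \propto a^{-3}$. Dropping the potential instead gives $\dot\phi \propto a^{-3}$ and hence the stiff, kination scaling $\rho \propto a^{-6}$ relevant for the kinetic axion.

```python
# Toy check: a massive axion in a fixed radiation background,
# a(t) = sqrt(t), H = 1/(2t), with m_a = 1 (illustrative units).
# Once H << m_a the field oscillates and <rho> * a^3 is conserved,
# i.e. the axion redshifts like cold dark matter.  With V = 0 the
# kinetic axion instead has phi' ~ a^-3, so rho ~ a^-6 (kination).
import math

def deriv(t, y):
    phi, dphi = y
    H = 1.0 / (2.0 * t)                      # radiation-era Hubble rate
    return [dphi, -3.0 * H * dphi - phi]     # phi'' + 3H phi' + phi = 0

def rk4_step(t, y, dt):
    k1 = deriv(t, y)
    k2 = deriv(t + dt / 2, [y[i] + dt / 2 * k1[i] for i in range(2)])
    k3 = deriv(t + dt / 2, [y[i] + dt / 2 * k2[i] for i in range(2)])
    k4 = deriv(t + dt, [y[i] + dt * k3[i] for i in range(2)])
    return [y[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

t, dt, y = 0.05, 0.005, [1.0, 0.0]           # field frozen while H >> m_a
samples = []
while t < 200.0:
    y = rk4_step(t, y, dt)
    t += dt
    rho = 0.5 * y[1]**2 + 0.5 * y[0]**2      # kinetic + potential energy
    samples.append((t, rho * t**1.5))        # rho * a^3, with a = sqrt(t)

def window_mean(t0):                         # average over ~1 oscillation
    vals = [v for (tt, v) in samples if t0 <= tt <= t0 + 2 * math.pi]
    return sum(vals) / len(vals)

early, late = window_mean(95.0), window_mean(185.0)
print(f"rho*a^3 ratio early/late: {early / late:.3f}")  # close to 1
```

Here the background is held fixed, i.e. the axion is treated as a spectator that does not back-react, matching the regime discussed in the text.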
Modified gravity [78–82] offers an appealing theoretical framework in the context of which the dark energy era and the inflationary era can be described in a unified way; see the pioneering article [83] for the first attempts in this direction and also Refs. [76, 84–90] for later developments in this research line. In the general relativistic descriptions of both the inflationary and the dark energy era, the use of a scalar field, minimally or non-minimally coupled to gravity, is inevitably needed. With regard to the inflationary era, the scalar field description can be somewhat problematic, since the scalar field must inevitably be coupled to the Standard Model particles, and the couplings are arbitrary. The scalar field description of the dark energy era is also problematic, since the dark energy equation of state (EoS) parameter is allowed to take values beyond the phantom divide line, i.e. less than $-1$. The scalar field description of such an evolution requires tachyon fields, which are not appealing in any context. Modified gravity in its various forms offers a consistent framework which can describe both the inflationary and the dark energy era, without the shortcomings of the scalar field description.
Among all the modified gravities, the $f(R, \phi)$ theories are the most motivated, since for a fundamental primordial scalar field in its vacuum configuration the first quantum corrections involve higher powers of the Ricci scalar, see [91] for more details, as well as combinations of the Ricci scalar with the Riemann and Ricci tensors, as in Einstein-Gauss-Bonnet theories. In this article we shall assume that the inflationary era is controlled by an $F(R)$ gravity in the presence of a primordial axion field. The axion field shall be assumed to be the misalignment axion, in which case the primordial $U(1)$ Peccei-Quinn symmetry that the axion possessed is broken during inflation. There are two misalignment axion models in the literature, the canonical misalignment axion [21] and the kinetic misalignment axion model [25–27]. The difference between the two models is that during inflation the canonical misalignment axion possesses no kinetic energy, whereas the kinetic misalignment axion possesses a large kinetic energy, which dominates its potential energy. Thus the axion oscillations in the latter case commence much later compared to the former model. In this work we shall investigate the effects of the kinetic misalignment axion model on the inflationary era generated by an $F(R)$ gravity, and specifically on the $R^2$ inflationary era. At the equations of motion level the effects are absent; however, at the cosmological perturbations level the axion may directly affect the evolution via the second slow-roll parameter $\epsilon_2$. As we show, the kinetic axion obeys a constant-roll evolution, which dominates the evolution at the late stages of the $R^2$-controlled inflationary era. The kination era caused by the kinetic axion basically dominates over the $\langle R^2 \rangle$ fluctuations which destabilize the inflationary quasi-de Sitter vacuum.
This causes the total EoS of the Universe to include a short kination era, described by a stiff perfect fluid evolution, which eventually affects the total number of $e$-foldings. Remarkably though, the fact that the scalar field obeys a constant-roll evolution during inflation does not affect the observational indices of inflation at all, since the contribution of the slow-roll parameter $\epsilon_2$ is elegantly cancelled.
This paper is organized as follows: In section II we present the essential features of the kinetic axion $F(R)$ gravity model. We describe in brief the kinetic axion model, and we also present the way in which the axion may mimic cold dark matter after inflation. In section III, we investigate in detail the inflationary dynamics of the kinetic axion $F(R)$ gravity model. We show how the constant-roll evolution of the kinetic axion during inflation eventually leaves the dynamics of inflation unaffected at the cosmological perturbations level, and we also show how the kination era at the last stages of the $R^2$-controlled inflationary era eventually prolongs the inflationary era, increasing the total number of $e$-foldings. The conclusions of this work follow at the end of the article.
II. ESSENTIAL FEATURES OF THE $F(R)$ GRAVITY-KINETIC AXION MODEL
Before we get to the study of the inflationary dynamics for the $F(R)$ gravity-kinetic axion model let us first present the theoretical framework of the model in some detail. The $F(R)$ gravity-kinetic axion model is basically an $f(R, \phi)$ gravity theory, in which case the gravitational action has the following form,
$$S = \int d^4x \sqrt{-g} \left[ \frac{1}{2\kappa^2} F(R) - \frac{1}{2} g^{\mu\nu} \partial_\mu \phi \partial_\nu \phi - V(\phi) + L_m \right],$$
(1)
where $\kappa^2 = 8\pi G = \frac{1}{M_p^2}$, with $G$ being Newton’s gravitational constant and $M_p$ the reduced Planck mass. Also, $L_m$ denotes the Lagrangian density of the perfect matter fluids that are present, which we assume to consist of radiation only. The dark matter perfect fluid will be composed solely of the axion particles, with the axion being identified with the scalar field $\phi$. Now, with regard to the $F(R)$ gravity model, for phenomenological reasons we will assume that it has the following form,
$$F(R) = R + \frac{1}{M^2} R^2 - \frac{\Lambda \left( \frac{R}{m_s^2} \right)^{\delta}}{\zeta},$$
(2)
with \( m_s \) being defined as \( m_s^2 = \frac{\kappa^2 \rho_m^{(0)}}{3} \), where \( \rho_m^{(0)} \) is the energy density of cold dark matter at present day, and \( 0 < \delta < 1 \). Finally, \( \zeta \) is a freely chosen dimensionless constant which we shall discuss in a later section. The \( F(R) \) gravity model is composed of an \( R^2 \) term, which controls the primordial inflationary era, and of a power-law term \( \sim R^\delta \), which eventually controls the late-time dynamics. In fact the model (2) can lead to a viable dark energy era, as was shown in detail in [76, 77, 91], so we will not address the late-time dynamics issue here.
With regard to the parameter \( M \) appearing in the \( R^2 \) term, it will be chosen to be \( M = 1.5 \times 10^{-5} \left( \frac{N}{50} \right)^{-1} M_p \), a value imposed by inflationary phenomenological reasoning [92], with \( N \) being the e-foldings number. Also the parameter \( \Lambda \) in Eq. (2) is assumed to be of the same order as the cosmological constant at present day. In this work we shall consider a flat Friedmann-Robertson-Walker (FRW) background with line element,
\[
ds^2 = -dt^2 + a(t)^2 \sum_{i=1,2,3} (dx^i)^2,
\]
so the field equations for the \( F(R) \) gravity with the axion scalar field in the presence of radiation are,
\[
3H^2F_R = \frac{RF_R - F}{2} - 3H\dot{F}_R + \kappa^2 \left( \rho_r + \frac{1}{2} \dot{\phi}^2 + V(\phi) \right),
\]
\[
-2\dot{H}F_R = \kappa^2 \dot{\phi}^2 + \ddot{F}_R - H\dot{F}_R + \frac{4\kappa^2}{3} \rho_r,
\]
\[
\ddot{\phi} + 3H\dot{\phi} + V'(\phi) = 0
\]
where \( F_R = \frac{\partial F}{\partial R} \), the “dot” denotes differentiation with respect to cosmic time, and the “prime” denotes differentiation with respect to the axion scalar field. With regard to the axion field, we shall consider the misalignment axion [21, 76], in which case the axion need not be the QCD axion, but is some axion-like particle, which we simply call the axion. In the literature there are two misalignment axion models, the canonical misalignment model [21] and the kinetic misalignment model [25–27]. In this work we shall consider the effects of the kinetic misalignment axion model on the inflationary dynamics of \( F(R) \) gravity, and we shall see in which way it eventually affects the duration of the inflationary era. In the kinetic misalignment axion model, the axion has a primordial Peccei-Quinn \( U(1) \) symmetry which is broken during the inflationary era. The fact that the original Peccei-Quinn symmetry is broken is particularly important for the inflationary phenomenology, since no remnant cosmic strings remain after inflation ends. This was a problem in standard QCD axion models, which is absent in all misalignment axion models. After the \( U(1) \) symmetry is broken, the axion acquires a large vacuum expectation value \( \langle \phi \rangle = \theta_a f_a \), where \( \theta_a \) is the initial misalignment angle, and \( f_a \) is the axion decay constant. The misalignment angle is in reality a dynamical field and can take values in the range \( 0 < \theta_a < 1 \); however, in the way that it enters the vacuum expectation value of the axion, it is not treated as a dynamical field, but as an average value throughout the whole Universe at the time of inflation.
With regard to the axion decay constant \( f_a \), this parameter is of fundamental phenomenological importance and, in conjunction with the axion mass, constitutes one of the two most important phenomenological parameters of the axion dynamics. The fact that the axion has a vacuum expectation value during inflation does not mean that the axion is actually constant during inflation; rather, it has small deviations about its vacuum expectation value, distinct from the small oscillations about this value after the inflationary era ends. Let us describe in brief the kinetic misalignment axion dynamics during inflation, depicted schematically in Fig. 1. As can be seen in Fig. 1, during inflation the axion has a small initial displacement from its vacuum expectation value and, more importantly, a non-zero initial velocity. This initial velocity is what justifies the terminology “kinetic”. The axion rolls down its potential and, due to the initial velocity, ends up uphill again, at a position different from the one corresponding to the canonical misalignment axion model, in which the axion, after rolling down and reaching the minimum, starts to oscillate around its vacuum expectation value. Thus in the kinetic misalignment case, the axion ends up uphill and then rolls downhill until it reaches the minimum and commences oscillating around its vacuum expectation value when the Hubble rate of the Universe becomes comparable to the axion mass, \( H \sim m_a \). In the kinetic misalignment mechanism the oscillations therefore start at a later time compared to the canonical misalignment mechanism, so in the kinetic axion model the temperature at which the oscillations commence is lower. Let us briefly quantify the above picture in terms of the potential and the axion mass. Primordially, the axion potential has the following form,
\[
V(\phi) = m_a^2 f_a^2 \left( 1 - \cos(\frac{\phi}{f_a}) \right),
\]
and when the axion acquires a vacuum expectation value during inflation, for small displacements from its vacuum expectation value, its potential can be approximated as follows,
\[
V(\phi) \simeq \frac{1}{2} m_a^2 \phi^2,
\]
an approximation which is valid when \( \phi \ll f_a \), or equivalently when the displacement from \( \langle \phi \rangle \) is small. Initially, when the axion is uphill at both ends of the potential, we have \( H \gg m_a \); however, when the axion reaches the minimum for the second time, it starts to oscillate around its vacuum expectation value when \( H \leq m_a \), and at that point the axion energy density redshifts as dark matter [21, 76]. The most important feature of the kinetic misalignment axion model is that initially the axion kinetic energy \( \dot{\phi}^2 \) is much larger than the axion potential energy, \( \dot{\phi}^2 \gg V \). This continues until some time instance during reheating when the potential energy and the kinetic energy become comparable, at a temperature \( \tilde{T} \) of the Universe determined by,
\[
\dot{\theta}(\tilde{T}) = \frac{\dot{\phi}(\tilde{T})}{f_a} \simeq m_a(\tilde{T}),
\]
at which temperature the axion no longer has a large kinetic energy, so it becomes trapped by the potential barrier and the oscillations around the minimum commence. The canonical misalignment temperature at which the oscillations start, \( T_c \), is larger than \( \tilde{T} \). So basically, as long as the axion mass is smaller than the Hubble rate, \( m_a(T) \leq H(T) = \frac{T^2}{3M_P}\sqrt{\frac{\pi^2 g_*}{10}} \), the roll down and up of the axion occurs, and when \( m_a(\tilde{T}) \geq H(\tilde{T}) \), the axion starts to oscillate, with abundance \( \frac{\rho_a}{s} \sim m_a(T=0)\frac{\dot{\phi}^2(\tilde{T})}{2 m_a(\tilde{T})\, s(\tilde{T})} \), where \( s \) is the entropy density, and \( m_a(T=0) = m_a \) is the actual axion mass as a dark matter particle. In general, in the kinetic misalignment model, the axion dark matter mass is larger than in the canonical misalignment case. A useful relation that connects the axion mass with the axion decay constant is the following,
\[
m_a(T=0) = 6\,\text{meV}\, \frac{10^{9}\, \text{GeV}}{f_a},
\]
and in order for the kinetic axion to account for the current dark matter abundance, the axion decay constant must satisfy \( f_a \leq 1.5 \times 10^{11} \text{GeV} \).
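As a quick numerical illustration, the mass–decay-constant relation above can be evaluated at the quoted bound on $f_a$; this is a back-of-the-envelope sketch (the function name and units are ours, the prefactor is the one quoted in the text):

```python
# Zero-temperature axion mass from the decay constant, m_a = 6 meV * (1e9 GeV / f_a),
# as quoted in the text for the misalignment axion.
def axion_mass_eV(f_a_GeV):
    return 6e-3 * (1e9 / f_a_GeV)  # result in eV

# Upper bound on f_a for the kinetic axion to account for the dark matter abundance
f_a_max = 1.5e11  # GeV
m_a_min = axion_mass_eV(f_a_max)
print(f"m_a >= {m_a_min:.1e} eV for f_a <= {f_a_max:.1e} GeV")
```

For $f_a$ at the bound, the relation places the axion mass at the $10^{-5}$ eV scale.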
Let us further quantify the dynamics of the axion during inflation, since this will be important for the study of the inflationary phenomenology. Since initially the kinetic energy of the axion is much larger than the potential energy, that is, \( \dot{\phi}^2 \gg m_a^2 \phi^2 \), the field equation for the axion becomes approximately,
\[
\ddot{\phi} + 3H\dot{\phi} \simeq 0,
\]
which can be solved to yield,
\[
\dot{\phi} \sim a^{-3}.
\]
So primordially, the energy density of the axion, which is \( \rho_a = \frac{\dot{\phi}^2}{2} + V(\phi) \simeq \frac{\dot{\phi}^2}{2} \), scales as \( \rho_a \sim a^{-6} \). Thus this is an era of kination for the axion dynamics, with effective equation of state parameter \( w = 1 \), which describes a stiff matter fluid. Hence the kinetic axion during inflation behaves as a stiff perfect fluid. In the next section
we shall consider in a quantitative way the direct effects of the kinetic misalignment axion scalar on the inflationary dynamics of $F(R)$ gravity and we shall reveal how the stiff axion fluid eventually prolongs the inflationary era.
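The kination scaling can also be verified numerically; a minimal sketch (constant Hubble rate and initial data are purely illustrative) integrating $\ddot{\phi} + 3H\dot{\phi} = 0$ with RK4 and comparing against the analytic $\dot{\phi} \propto a^{-3}$, $\rho_a \propto a^{-6}$ behavior:

```python
import math

# Integrate phi'' + 3*H*phi' = 0 with constant H (de Sitter) via RK4 on y = phi_dot.
# Analytically phi_dot ~ a^{-3} = exp(-3*H*t), hence rho_a ~ phi_dot^2/2 ~ a^{-6}.
H = 1.0          # Hubble rate, arbitrary illustrative units
y = 1.0          # initial phi_dot
dt = 1e-4
t = 0.0
f = lambda y: -3.0 * H * y   # y' = -3 H y

while t < 1.0:
    k1 = f(y); k2 = f(y + 0.5*dt*k1); k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
    y += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    t += dt

a = math.exp(H * t)              # scale factor after ~1 e-fold
print(y, a**-3)                  # numeric vs analytic phi_dot: they agree
print(0.5 * y**2, 0.5 * a**-6)   # energy density redshifts as a^{-6}
```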
Before closing this section, let us note that the kinetic misalignment axion mechanism is inherently related to an initial explicit breaking of the Peccei-Quinn symmetry by a higher-dimensional effective operator, in the same way as in the Affleck-Dine mechanism.
### III. DYNAMICS OF INFLATION FOR THE KINETIC AXION $F(R)$ GRAVITY MODEL
Let us now use the results of the previous section in order to determine the inflationary dynamics and the corresponding phenomenology in terms of the slow-roll indices. Recall that in the previous section we showed that the kinetic axion field redshifts as a perfect matter fluid with a stiff EoS during inflation, since $\rho_\phi \sim a^{-6}$, so its energy density is smaller than the radiation fluid energy density. Assuming a low-scale inflationary era, with the Hubble rate during inflation being of the order $H_I = 10^{13} \text{GeV}$, let us investigate which terms effectively dominate the field equations of the kinetic axion $F(R)$ gravity. The Ricci scalar takes quite large values for $H_I = 10^{13} \text{GeV}$, thus, roughly speaking, the $R^2$ term dominates the evolution. Let us see this in some detail, and recall that $m_s^2 \simeq 1.87101 \times 10^{-67} \text{eV}^2$, while the parameter $M$ appearing in the $R^2$ term in Eq. (2) is $M = 1.5 \times 10^{-5} \left( \frac{N}{50} \right)^{-1} M_p$ [92]; hence, roughly taking $N \sim 60$, $M$ is of the order $M \simeq 3.04375 \times 10^{22} \text{eV}$. Also, due to the fact that during inflation the slow-roll conditions are satisfied, we approximately have $R \simeq 12 H^2$ and therefore $R \sim 1.2 \times 10^{45} \text{eV}^2$. Furthermore, $M_p \simeq 2.435 \times 10^{27} \text{eV}$, and the parameter $\Lambda$ is of the order of the cosmological constant at present day, that is, $\Lambda \simeq 11.895 \times 10^{-67} \text{eV}^2$. Finally, the vacuum expectation value of the axion is roughly of the same order as the axion decay constant, therefore $\langle \phi \rangle = \phi_i \simeq \mathcal{O}(10^{15}) \text{GeV}$ and approximately $m_a \simeq \mathcal{O}(10^{-14}) \text{eV}$.
Thus, the potential term is of the order $\kappa^2 V(\phi_i) \sim \mathcal{O}(8.41897 \times 10^{-30}) \text{eV}^2$, while the two curvature terms are of the order $R \sim \mathcal{O}(1.2 \times 10^{45}) \text{eV}^2$ and $R^2/M^2 \sim \mathcal{O}(1.55 \times 10^{45}) \text{eV}^2$, and the power-law curvature term is of the order $\frac{\Lambda \left(\frac{R}{m_s^2}\right)^\delta}{\zeta} \sim \mathcal{O}(10^{-55}) \text{eV}^2$ for $\delta = 0.1$ and $\zeta = 0.2$, with the latter being phenomenologically acceptable values. Also, during inflation the radiation density term scales as $\kappa^2 \rho_r \propto e^{-4N}$, and a similar relation applies for the kinetic misalignment axion energy density $\rho_\phi$, whose term scales as $\kappa^2 \rho_\phi \propto e^{-6N}$. Therefore, at the level of the equations of motion, the resulting theory is effectively described by a vacuum $R^2$ gravity, in which case,
$$F(R) \simeq R + \frac{1}{M^2} R^2.$$
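The term-by-term comparison leading to this approximation can be reproduced directly; a rough numerical sketch using the representative values quoted above (all inputs are order-of-magnitude estimates, not precise measurements):

```python
# Rough comparison of the curvature terms of Eq. (2) during inflation, in eV^2,
# using the representative values quoted in the text.
H_I = 1e22          # Hubble scale during inflation, 1e13 GeV expressed in eV
M   = 3.04375e22    # R^2 scale in eV (for N ~ 60)
Lam = 11.895e-67    # ~ present-day cosmological constant, eV^2
ms2 = 1.87101e-67   # m_s^2 = kappa^2 rho_m^(0)/3, eV^2
delta, zeta = 0.1, 0.2

R       = 12.0 * H_I**2                   # slow-roll: R ~ 12 H^2
R2_term = R**2 / M**2
pl_term = Lam * (R / ms2)**delta / zeta

print(f"R         ~ {R:.3g} eV^2")        # ~ 1.2e45
print(f"R^2/M^2   ~ {R2_term:.3g} eV^2")  # ~ 1.55e45
print(f"power law ~ {pl_term:.3g} eV^2")  # ~ 1e-54, utterly negligible
```

The power-law (dark energy) term is suppressed by roughly a hundred orders of magnitude relative to the curvature terms, which is why the vacuum $R^2$ approximation holds during inflation.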
Then the field equations take the form
$$\ddot{H} - \frac{\dot{H}^2}{2H} + \frac{HM^2}{2} = -3H\dot{H},$$
and due to the slow-roll conditions,
$$-\frac{M^2}{6} = \dot{H},$$
which has as a solution,
$$H(t) = H_I - \frac{M^2}{6} t,$$
which is a quasi-de Sitter solution, with $H_I$ being an arbitrary integration constant of profound physical significance, since it sets the scale of inflation.
Now one might think that the kinetic axion does not affect the dynamics of inflation at all; however, this is not true. The axion certainly does not control the Hubble rate at the level of the equations of motion, but the dynamics of inflation are not determined solely by the background evolution. As we will show, the axion may affect inflation in two ways: firstly, it may directly affect the scalar curvature perturbations, and secondly, it prolongs the inflationary era, since the axion effective equation of state is $w = 1$, as we show shortly.
The cosmological scalar curvature perturbations are dynamically quantified by the slow-roll indices, which for the $f(R, \phi)$ theory at hand are defined to be [78, 93, 94],
$$\epsilon_1 = -\frac{\dot{H}}{H^2}, \quad \epsilon_2 = \frac{\ddot{\phi}}{H\dot{\phi}}, \quad \epsilon_3 = \frac{\dot{F}_R}{2HF_R}, \quad \epsilon_4 = \frac{\dot{E}}{2H E},$$
where the function $E$ for the $f(R, \phi)$ theory at hand has the following form,
$$E = F_R + \frac{3\dot{F}_R^2}{2\kappa^2 \dot{\phi}^2}. \tag{17}$$
Now the most important effect that the kinetic axion brings along in the $F(R)$ gravity is contained in the parameter $\epsilon_2$. Since the axion obeys the stiff scalar differential equation (10), the slow-roll parameter $\epsilon_2$ takes the value $\epsilon_2 = -3$; therefore the axion obeys a constant-roll condition in its dynamics. The question is, does $\epsilon_2$ affect the inflationary dynamics? As we now show, at leading order the contribution of the axion field is elegantly cancelled in the observational indices of inflation, and specifically in the spectral index of the primordial scalar curvature perturbations. To this end, let us present the details of the calculation of the parameter $\epsilon_4$, in which the dynamics of the axion enters. At leading order during inflation we explicitly have,
$$E \approx \frac{3\dot{F}_R^2}{2\kappa^2 \dot{\phi}^2}, \tag{18}$$
so $\epsilon_4$ is approximately equal to,
$$\epsilon_4 \simeq \frac{1}{2HE}\frac{3}{\kappa^2} \frac{\ddot{F}_R \dot{F}_R \dot{\phi} - \dot{F}_R^2 \ddot{\phi}}{\dot{\phi}^3}, \tag{19}$$
which is simplified to,
$$\epsilon_4 \simeq \frac{\ddot{F}_{R}}{H \dot{F}_R} - \frac{\ddot{\phi}}{H \dot{\phi}} = \frac{\ddot{F}_{R}}{H \dot{F}_R} - \epsilon_2. \tag{20}$$
Let us further elaborate on the parameter $\epsilon_4$ which after some algebra is written as follows,
$$\epsilon_4 \simeq -\frac{24F_{RRR}H^2}{F_{RR}}\epsilon_1 - 3\epsilon_1 + \frac{\dot{\epsilon}_1}{H\epsilon_1} - \epsilon_2. \tag{21}$$
The term $\dot{\epsilon}_1$ can be written as,
$$\dot{\epsilon}_1 = -\frac{\ddot{H}H^2 - 2\dot{H}^2 H}{H^4} = -\frac{\ddot{H}}{H^2} + \frac{2\dot{H}^2}{H^3} \simeq 2H\epsilon_1^2, \tag{22}$$
hence $\epsilon_4$ becomes,
$$\epsilon_4 \simeq -\frac{24F_{RRR}H^2}{F_{RR}}\epsilon_1 - \epsilon_1 - \epsilon_2. \tag{23}$$
Upon introducing $x$ we have,
$$x = \frac{48F_{RRR}H^2}{F_{RR}}, \tag{24}$$
and $\epsilon_4$ can be written in terms of it as follows,
$$\epsilon_4 \simeq -\frac{x}{2}\epsilon_1 - \epsilon_1 - \epsilon_2. \tag{25}$$
Now for the $f(R, \phi)$ gravity, the scalar spectral index of the scalar curvature perturbations is [78, 93, 94],
$$n_S = 1 - 4\epsilon_1 - 2\epsilon_2 + 2\epsilon_3 - 2\epsilon_4, \tag{26}$$
thus by substituting the expression for $\epsilon_4$ we obtained in Eq. (23), we can see that elegantly the contribution of $\epsilon_2$ cancels, thus the spectral index takes the form,
$$n_S \simeq 1 - (2 - x)\epsilon_1 + 2\epsilon_3. \tag{27}$$
Also the scalar-to-tensor ratio for the case at hand is equal to [78, 93, 94],
$$r \simeq 48\epsilon_1^2. \tag{28}$$
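The cancellation of $\epsilon_2$ in the spectral index can be checked directly; a minimal numerical sketch (the slow-roll values below are purely illustrative) evaluating Eq. (26) with $\epsilon_4$ from Eq. (25):

```python
# Spectral index n_S = 1 - 4 e1 - 2 e2 + 2 e3 - 2 e4, Eq. (26),
# with e4 = -(x/2) e1 - e1 - e2 from Eq. (25): e2 drops out identically.
def n_S(e1, e2, e3, x):
    e4 = -(x / 2.0) * e1 - e1 - e2
    return 1.0 - 4.0 * e1 - 2.0 * e2 + 2.0 * e3 - 2.0 * e4

e1, e3, x = 0.01, 0.002, 0.0
for e2 in (-3.0, 0.0, 7.5):            # vary the constant-roll parameter wildly
    print(n_S(e1, e2, e3, x))          # same value every time
print(1.0 - (2.0 - x) * e1 + 2.0 * e3) # Eq. (27) gives the identical result
```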
Since the dominant part of the $F(R)$ gravity during inflation is the $R^2$ gravity, the term $x$ is equal to zero, and thus the scalar spectral index is greatly simplified. For the quasi-de Sitter solution at hand, the first slow-roll index is easily calculated to be,
$$\epsilon_1 = \frac{6M^2}{(M^2 t - 6H_I)^2},$$
and by solving the algebraic equation $\epsilon_1(t_f) = 1$, the time instance at which inflation ends is,
$$t_f = \frac{6H_I - \sqrt{6}M}{M^2},$$
where the root is chosen so that $H(t_f)$ remains positive.
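A quick numerical sanity check of this bookkeeping (illustrative units with $M=1$; the relation $N = \frac{3}{M^2}\left(H(t_i)^2 - H(t_f)^2\right)$ follows from $dN = H\,dt$ together with $dt = -6\,dH/M^2$ for the quasi-de Sitter solution):

```python
import math

# Quasi-de Sitter solution H(t) = H_I - (M^2/6) t, with
# epsilon_1(t) = 6 M^2 / (M^2 t - 6 H_I)^2; illustrative units, M = 1.
M, H_I, N = 1.0, 100.0, 60.0

H    = lambda t: H_I - (M**2 / 6.0) * t
eps1 = lambda t: 6.0 * M**2 / (M**2 * t - 6.0 * H_I)**2

# End of inflation: the root of eps1(t_f) = 1 that keeps H(t_f) > 0
t_f = (6.0 * H_I - math.sqrt(6.0) * M) / M**2
print(eps1(t_f), H(t_f) > 0)                 # ~ 1.0, True

# First horizon crossing: N = (3/M^2) (H(t_i)^2 - H(t_f)^2), hence
H_i = math.sqrt(H(t_f)**2 + M**2 * N / 3.0)
t_i = 6.0 * (H_I - H_i) / M**2
print(eps1(t_i), 1.0 / (1.0 + 2.0 * N))      # both ~ 0.0082645 = 1/(1+2N)
```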
Using the definition of the $e$-foldings number $N$,
$$N = \int_{t_i}^{t_f} H(t) dt,$$
the time instance at which inflation commences is,
$$t_i = \frac{6H_I - \sqrt{6}M\sqrt{1 + 2N}}{M^2},$$
so the first slow-roll index at first horizon crossing is,
$$\epsilon_1(t_i) = \frac{1}{1 + 2N},$$
hence at leading order in terms of the $e$-foldings number, the spectral index and the tensor-to-scalar ratio take the form $n_s \sim 1 - \frac{2}{N}$ and $r \sim \frac{12}{N^2}$. Now for $N \sim 60$ the resulting phenomenology is identical to that of the Starobinsky model; however, the axion stiff equation of state causes another effect on inflation: it prolongs the inflationary era to some extent, as we now show. As inflation comes to an end near the time instance $t_f$, the background total EoS of the Universe is no longer described by a quasi-de Sitter EoS; instead, the stiff EoS of the axion describes the Universe, since the matter perfect fluids gradually become dominant. Therefore the total EoS parameter of the background evolution approaches the stiff value $w = 1$. This fact prolongs the inflationary era, causing the $e$-foldings number to be larger than 60. The physical picture behind the increase of the $e$-foldings number relies on the combined presence of the $R^2$ term and the large kinetic term of the kinetic misalignment axion. In standard $R^2$ gravity, inflation tends to its end when the curvature fluctuations $\langle R^2 \rangle$ become quite strong and render the de Sitter attractor unstable. This phenomenological picture is particular to the $R^2$ gravity, and as was shown in Ref. [95],
the Starobinsky model has an unstable de Sitter attractor. Let us show this in brief, see also [95] for more details. By introducing the dimensionless variables,
\[
x_1 = - \frac{\dot{F}_R}{F_R H}, \quad x_2 = - \frac{F}{6F_R H^2}, \quad x_3 = \frac{R}{6H^2},
\]
(34)
and by using the e-foldings number as a dynamical variable instead of the cosmic time, the field equations of vacuum $F(R)$ gravity can be written in terms of the following dynamical system,
\[
\frac{\text{d}x_1}{\text{d}N} = -4 - 3x_1 + 2x_3 - x_1x_3 + x_1^2,
\]
\[
\frac{\text{d}x_2}{\text{d}N} = 8 + m - 4x_3 + x_2x_1 - 2x_2x_3 + 4x_2,
\]
\[
\frac{\text{d}x_3}{\text{d}N} = -8 - m + 8x_3 - 2x_3^2,
\]
(35)
with the dynamical parameter $m$ being equal to,
\[
m = -\frac{\ddot{H}}{H^3}.
\]
(36)
The dynamical system (35) is autonomous when the parameter $m$ takes constant values, and for a quasi-de Sitter evolution $a(t) = e^{H_0t - H_1t^2}$ the parameter $m$ is equal to zero. The total EoS of the cosmological system is defined as [78]
\[
w_{eff} = -1 - \frac{2\dot{H}}{3H^2},
\]
(37)
and it can directly be expressed in terms of the variable $x_3$ in the following way,
\[
w_{eff} = -\frac{1}{3}(2x_3 - 1).
\]
(38)
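Equation (38) follows from $R = 12H^2 + 6\dot{H}$, which gives $x_3 = 2 + \dot{H}/H^2$; a minimal numerical sketch (the values of $H$ and $\dot{H}$ below are arbitrary illustrative inputs) confirming the equivalence of the two expressions:

```python
# Check Eq. (38): with R = 12 H^2 + 6 Hdot, x3 = R/(6 H^2) = 2 + Hdot/H^2,
# so w_eff = -1 - 2*Hdot/(3 H^2) equals -(2*x3 - 1)/3.
def w_eff_from_H(H, Hdot):
    return -1.0 - 2.0 * Hdot / (3.0 * H**2)

def w_eff_from_x3(x3):
    return -(2.0 * x3 - 1.0) / 3.0

for H, Hdot in [(1.0, 0.0), (2.0, -0.5), (0.7, 0.3)]:
    x3 = (12.0 * H**2 + 6.0 * Hdot) / (6.0 * H**2)
    print(w_eff_from_H(H, Hdot), w_eff_from_x3(x3))  # each pair agrees

print(w_eff_from_x3(2.0))  # the de Sitter fixed points x3 = 2 give w_eff = -1.0
```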
Now, by performing a fixed point analysis of the dynamical system for $m = 0$, we easily obtain the fixed points,
\[
\phi_+^1 = (-1, 0, 2), \quad \phi_-^2 = (0, -1, 2),
\]
(39)
and the corresponding eigenvalues of the matrix associated with the dynamical system are $(-1, -1, 0)$ for $\phi_+^1$, while for the fixed point $\phi_-^2$ they are $(1, 0, 0)$. Therefore, the dynamical system possesses two non-hyperbolic fixed points; the fixed point $\phi_+^1$ is stable while, in contrast, the fixed point $\phi_-^2$ is unstable, with the latter being the most interesting from a phenomenological point of view. It is notable that for both fixed points we have $x_3 = 2$; hence, from Eq. (38), we get $w_{eff} = -1$. This shows that both fixed points are de Sitter fixed points. As we already mentioned, the second de Sitter fixed point, namely $\phi_-^2 = (0, -1, 2)$, is the most interesting phenomenologically, since for this equilibrium the conditions $x_1 \simeq 0$ and $x_2 \simeq -1$ yield,
\[
-\frac{\frac{d^2F}{dR^2}\dot{R}}{H\frac{dF}{dR}} \simeq 0, \quad -\frac{F}{6H^2\frac{dF}{dR}} \simeq -1.
\]
(40)
Using the slow-roll approximation during inflation for the Ricci scalar, $R \simeq 12H^2$, for the quasi-de Sitter evolution, we can write the second condition of Eq. (40) as the differential equation,
\[
F \simeq \frac{\text{d}F}{\text{d}R}\frac{R}{2},
\]
(41)
which, when solved, yields,
\[
F(R) \simeq \alpha R^2,
\]
(42)
where $\alpha$ is an arbitrary integration constant; this specifies the $F(R)$ gravity which generates the quasi-de Sitter evolution. Clearly the $R^2$ model possesses an unstable de Sitter point. Thus when this unstable de Sitter attractor is reached, the system is repelled from it in the phase space. The time instance at which this happens is determined roughly by the condition $\epsilon_1(t_f) = 1$. Now, in the presence of the large axion kinetic term, which dominates over the axion potential, things are somewhat different when the cosmological system reaches the de Sitter attractor. Specifically, the cosmological system is initially controlled by the $R^2$ term, so it reaches the quasi-de Sitter attractor. However, when it is repelled from the unstable de Sitter attractor point, the cosmological system does not enter the reheating era directly and the $\langle R \rangle$ reheating fluctuations do not commence immediately; instead, the kinetic term, which was subdominant, comes to dominate over the $R^2$ term and thus controls the dynamics at the end of inflation, after the cosmological system is repelled from the quasi-de Sitter attractor. Thus the end of inflation is somewhat delayed in the kinetic axion $R^2$ model. This can be seen schematically in Fig. 2: in the ordinary $R^2$ model, after the system reaches the unstable de Sitter attractor, the $\langle R^2 \rangle$ fluctuations cause it to be repelled from the attractor, and the cosmological system enters the reheating era controlled by the $\langle R \rangle$ fluctuations. In the presence of the kinetic axion, after the $\langle R^2 \rangle$ fluctuations cause the system to be repelled from the de Sitter attractor, the cosmological system does not enter the reheating era directly; rather, the kinetic term dominates the evolution and the background EoS is not that of an ordinary reheating era, $w = 1/3$, but corresponds to a stiff era with $w = 1$. The system stays in this stiff era, and the ordinary reheating era commences when the axion oscillations begin, that is, when $\dot{\phi}^2 \sim V$, at which point the axion redshifts as ordinary dark matter and the radiation fluid controls the evolution thereafter. Thus the number of $e$-foldings is somewhat extended in the kinetic axion $F(R)$ gravity picture.

FIG. 3. The Planck likelihood curves for the kinetic axion $F(R)$ gravity model (red dots) and the vacuum $R^2$ model (green dots). The kinetic $R^2$ model serves as a viable deformation of the vacuum Starobinsky model.
Another striking feature of the kinetic axion $F(R)$ gravity model is that the $R^2$ term actually enhances the kinetic axion physics significantly, further delaying the onset of the kinetic axion oscillations. In a future work we shall demonstrate, using a dynamical systems approach, how this can happen. Now let us quantify the qualitative picture described above, and see how the stiff era affects the $e$-foldings number, extending inflation by some $e$-foldings. As we will show, this feature is strongly affected by the reheating temperature. In a general setting, the $e$-foldings number for a primordial scalar mode with wavenumber $k$, which became superhorizon at the beginning of inflation, satisfies [96],
$$\frac{a_k H_k}{a_0 H_0} = e^{-N} \frac{H_k a_{end}}{a_{reh} H_{reh}} \frac{H_{reh} a_{reh}}{a_{eq} H_{eq}} \frac{H_{eq} a_{eq}}{a_0 H_0},$$
(43)
with $a_k$ and $H_k$ being the scale factor and the Hubble rate at the time instance where the primordial mode $k$ became superhorizon at the beginning of inflation (at first horizon crossing), $a_{end}$ stands for the scale factor at the end of the inflationary era, and finally $a_{reh}$ and $H_{reh}$ denote the scale factor and the Hubble rate when the reheating era ends. Furthermore, $a_{eq}$ and $H_{eq}$ denote the scale factor and the Hubble rate at the time instance that the matter-radiation equality occurs, and moreover $a_0$ and $H_0$ denote the present day scale factor and the Hubble rate respectively. Now, if near the end of inflation, the total EoS parameter is $w$ (different from the value $w = 1/3$), we get,
$$\ln \left( \frac{a_{end} H_{end}}{a_{reh} H_{reh}} \right) = - \frac{1 + 3w}{6(1 + w)} \ln \left( \frac{\rho_{reh}}{\rho_{end}} \right),$$
(44)
with $H_{end}$ being the Hubble rate when inflation ends, while the energy densities $\rho_{end}$ and $\rho_{reh}$ stand for the total energy density of the Universe when inflation ends and when the reheating era ends, respectively. Note that for the derivation of Eq. (44) we assumed that the total EoS parameter between the end of inflation and the end of the reheating era is constant and equal to $w$. Then, when the $\langle R^2 \rangle$ fluctuations commence, destabilizing the de Sitter phase, the constant-EoS stiff era of
the kinetic axion commences, so the $e$-foldings number of the inflationary era is extended as follows [96],
$$N = 56.12 - \ln \left( \frac{k}{k_*} \right) + \frac{1}{3(1 + w)} \ln \left( \frac{2}{3} \right) + \ln \left( \frac{\rho_k^{1/4}}{\rho_{reh}^{1/4}} \right) + \frac{1 - 3w}{3(1 + w)} \ln \left( \frac{\rho_{reh}^{1/4}}{\rho_{end}^{1/4}} \right) + \ln \left( \frac{\rho_k^{1/4}}{10^{16}\text{GeV}} \right),$$
(45)
with $\rho_k$ being the Universe’s total energy density at the beginning of the inflationary era, exactly when the mode $k$ became superhorizon. We shall also assume that the pivot scale is $k_* = 0.05\,\text{Mpc}^{-1}$, and furthermore that the number of relativistic degrees of freedom $g_*$ is nearly constant during and just after the inflationary era. Thus the energy density of the Universe at a temperature $T$ is $\rho = \frac{\pi^2}{30} g_* T^4$, and hence the expression of Eq. (45) can be rewritten in terms of the temperatures at the various epochs instead of the energy densities.
| $e$-foldings number and Inflationary Indices | $T_R = 10^{12}\text{GeV}$ | $T_R = 10^7\text{GeV}$ |
|---------------------------------------------|--------------------------|------------------------|
| $e$-foldings number $N$ | 65.3439 | 61.5063 |
| Spectral index $n_S$ | 0.969393 | 0.967483 |
| Tensor-to-Scalar Ratio $r$ | 0.00281042 | 0.00317206 |
TABLE I. The $e$-foldings number for the kinetic axion $F(R)$ gravity model for various reheating temperatures, to be compared with the standard $R^2$ model results $n_S = 0.966667$ and $r = 0.00333333$ and a standard reheating scenario.
In effect, if the total number of $e$-foldings changes, the parameter $M$ multiplying the $R^2$ term will also be somewhat affected, and this should be taken into account in the inflationary phenomenology of the current model. In Table I we present the phenomenological behavior of the prolonged $R^2$ inflationary model for two reheating temperatures, namely a large reheating temperature $T_R = 10^{12}\,\text{GeV}$ and an intermediate reheating temperature $T_R = 10^{7}\,\text{GeV}$. The prospect of low reheating temperatures, even at the MeV scale, has already been discussed in the literature, see for example [97]. As can be seen, in all cases the inflationary era is prolonged and the results differ from those of the standard $R^2$ model with $N = 60$, the changes being of the order of 15% in the case of the tensor-to-scalar ratio. Also, as expected, since the inflationary era generated by the kinetic axion $F(R)$ gravity theory is a deformation of the $R^2$ model, it produces a viable phenomenology. This can be seen in Fig. 3, where we confront the kinetic axion $F(R)$ gravity model with the Planck likelihood curves for various reheating temperatures in the range $10^7-10^{12}\,\text{GeV}$. As can be seen, the model fits well within the sweet spot of the Planck data. In the plots, the green dots correspond to the vacuum $R^2$ model and the red dots to the kinetic axion $R^2$ model; the latter is a measurable deformation of the former.
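The spectral index and tensor-to-scalar ratio entries of Table I follow directly from the leading-order $R^2$ expressions $n_S \simeq 1 - 2/N$ and $r \simeq 12/N^2$ evaluated at the kination-extended $e$-foldings numbers; a minimal check:

```python
# Leading-order R^2 observables at the kination-extended e-foldings of Table I.
def n_S(N):
    return 1.0 - 2.0 / N

def r(N):
    return 12.0 / N**2

for label, N in [("T_R = 1e12 GeV", 65.3439),
                 ("T_R = 1e7  GeV", 61.5063),
                 ("vacuum R^2    ", 60.0)]:
    print(f"{label}: N = {N:7.4f}, n_S = {n_S(N):.6f}, r = {r(N):.8f}")
```

The output reproduces the tabulated values, including $n_S = 0.966667$ and $r = 0.00333333$ for the standard $N = 60$ case, and the resulting decrease of $r$ at $T_R = 10^{12}$ GeV is consistent with the ~15% change quoted in the text.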
IV. CONCLUSIONS
In this work we investigated how a kinetic misalignment axion can affect the inflationary era generated by an $R^2$ model of $F(R)$ gravity. In the context of the kinetic misalignment axion, the primordial $U(1)$ Peccei-Quinn symmetry is broken in the axion sector during inflation, so the axion has a non-zero vacuum expectation value, but it also possesses a large kinetic energy. The kinetic energy term of the axion dominates over its potential; however, during inflation and at the level of the equations of motion, the vacuum $R^2$ model dominates the evolution. Thus the axion may affect the dynamics of the inflationary era at the cosmological perturbations level, through the second slow-roll index. Due to the dominance of the axion’s kinetic energy over its potential, the axion evolves in a constant-roll way, so the second slow-roll index is constant and large. We calculated the observational indices including the kinetic axion effects, and as we showed, the contribution of the second slow-roll index elegantly cancels. Thus, at the cosmological perturbations level, the kinetic axion does not affect the $R^2$ inflationary era. However, the kinetic axion affects the duration of the inflationary era, causing in some cases 15% differences in the tensor-to-scalar ratio compared with the vacuum $R^2$ model. This change is due to the fact that the kinetic axion has an EoS parameter corresponding to a stiff era. As the $R^2$ inflationary era reaches its unstable quasi-de Sitter attractor in the phase space, the kinetic axion starts to dominate the evolution over the $R^2$ term; thus the Universe enters a stiff evolution phase, an era of kination, with background total EoS parameter $w = 1$. This stiff background directly affects the $e$-foldings number, extending the inflationary era by up to 5 $e$-foldings in some cases, which quantitatively amounts to a decrease of the tensor-to-scalar ratio of about 15% compared to the vacuum $R^2$ model.
A particularly interesting extension of this work is to include an Einstein-Gauss-Bonnet term in the Lagrangian. Since the axion is not constant during inflation but fluctuates around its vacuum expectation value, the Einstein-Gauss-Bonnet coupling does not trivially vanish, so it would be interesting to investigate the consequences of the kinetic axion in this class of theories. It would furthermore be interesting to investigate the late-time evolution of the unified model, because when the axion starts to oscillate around its vacuum expectation value, it redshifts as dark matter. These issues shall be addressed in future work.
Mechanical Properties of Chitosan Incorporated in Maxillofacial Silicone and its Anti Candidal Activity In Vitro
Al-Hakam J Ibrahim*, Hikmat Jameel Al-Judy
Department of Prosthodontics, College of Dentistry, University of Baghdad, Iraq
ABSTRACT
Background: One of the major problems associated with the use of maxillofacial silicone material is microbial and fungal growth, especially of *Candida albicans*, which can result in chronic infection, inflammation, and degradation of the silicone material. The development of an antimicrobial silicone elastomer is therefore important, and multiple studies have tried to improve this property; the present study concentrated on incorporating chitosan micro-particles into the silicone matrix.
Aim of the Study: The aim of this study was to evaluate the effect of adding different concentrations of chitosan micro particles into the silicone matrix on antifungal activity and some mechanical properties such as tear and tensile strength.
Materials and Methods: Chitosan particles were added in different concentrations (1.5%, 2.5%, and 3.5% by weight) to a room temperature vulcanized silicone elastomer. 220 specimens were made and divided into four groups according to the test to be performed. A viable count test and a disk diffusion test were performed to evaluate the antifungal properties of the chitosan-incorporated silicone. The tear and tensile strength of the maxillofacial silicone were also tested.
Results: The viable count results showed a highly significant decrease in the colony forming units of *C. albicans* in the experimental groups compared to the control group. In the disk diffusion test, the inhibition zone was concentration-dependent: it increased as the chitosan concentration increased. There was a highly significant increase in the mean tear strength, while the tensile strength of the silicone decreased significantly after chitosan addition.
Conclusion: The addition of chitosan micro-particles to RTV silicone produced anti-candidal activity in the maxillofacial material. The addition also increased the tear strength, while the tensile strength decreased.
Key words: Chitosan, Maxillofacial silicone, *Candida albicans*, Mechanical properties
INTRODUCTION
Head and neck malformations may result from congenital or developmental anomaly, trauma, or cancer resection [1]. These anomalies can be managed either surgically by plastic repair conducted on living tissues or artificially by alloplastic repair. In favorable circumstances, the treatment of choice is the plastic repair. It is more desirable than the artificial repair with a maxillofacial prosthesis [2].
However, when functional and esthetic demands cannot be surgically fulfilled because of insufficient residual hard and soft tissue, reduced vascularization, or the possibility of unwanted treatment effects, surgical treatment reaches its limits; in such cases, the fabrication of a facial prosthesis is considered a practical and attractive alternative and often the best solution [3].
Taking into account social and psychological pressures of patients with facial defects, the use of maxillofacial prosthetic materials has rapidly increased to enhance both esthetic and functional deficiencies found in the present materials [4].
The silicone surfaces of maxillofacial prostheses are exposed to soft tissues, saliva, and nasal secretions. These body fluids may allow colonization of the surfaces by different microorganisms, leading to degradation of the silicone elastomer or to infection. One of the most common colonizing microorganisms is *Candida albicans*. Plaque containing high levels of *C. albicans* becomes acidic as a result of candidal metabolism, which may later produce inflammation of mucosal surfaces or microbial colonization of the hard tissues [5]. Drug consumption continues to increase as microbial agents develop drug resistance [6].
This opportunistic pathogen causes great problems: it is resistant to most antimicrobial products, including amphotericin-B, which is regarded as the standard choice for the treatment of mycoses. Despite still being regarded as drugs of choice against *C. albicans*, these antifungal agents are increasingly reported as inefficient, with many cases of resistance, particularly to fluconazole, being observed. This problem has led to a search for alternative drugs for treating *C. albicans* infections [7].
In this study an attempt was made to fabricate a maxillofacial silicone material with an antimicrobial effect, especially against *C. albicans*, by incorporating chitosan micro-particles. Chitosan, a natural extract from chitin, is a polysaccharide proven to have a broad spectrum of antimicrobial activity against fungi, yeast, and bacteria, and recent studies have revealed significant anti-biofilm activity against several microorganisms, including *C. albicans* [7].
**MATERIALS AND METHODS**
Room temperature vulcanized VST-50F silicone elastomer (Factor II Inc., USA) was used in this study. Chitosan powder (Cheng Du Micxy Chemicali, China) was incorporated into the maxillofacial silicone in different percentages (1.5%, 2.5%, and 3.5% by weight). A total of two hundred and twenty (220) specimens were prepared and divided into four groups according to the test to be performed. FTIR analysis was performed to determine whether there was any chemical reaction between the chitosan and the maxillofacial silicone.
**Mold making for specimens’ fabrication**
Sample dimensions were designed using AutoCAD 2015 and then processed by a CNC machine to form the matrix part of the mold into which the material was poured. The matrices were four clear acrylic sheets of $2.2 \pm 0.05$ mm thickness, machined according to the sample shape of each test [8].
**Chitosan and maxillofacial material mixing and pouring**
The chitosan powder was first weighed, followed by the addition of an accurate weight of silicone (part A) to prevent dispersion of the filler. The silicone and chitosan were mixed in a vacuum mixer for 10 minutes; the vacuum was shut off for the first three minutes to avoid suction of the chitosan powder and then turned on for the remaining 7 minutes at 360 rpm and a vacuum value of -10 bar. The cross-linker (part B) was then added and mixed again in the vacuum mixer for 5 minutes [9].
The silicone was then poured into the mold cavity. The cover part was placed over the mold and fixed in place with screws and nuts at the corners and G-clamps at the mold borders (Figure 1). The material was allowed to set at room temperature ($23^\circ C \pm 2^\circ C$) for 2-4 hours according to the manufacturer's instructions; the molds were then opened and the silicone samples were cleaned, washed under tap water to remove the remaining separating medium, and dried with a paper towel [9].
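The batching arithmetic behind the stated loadings can be sketched as follows. This is a minimal sketch assuming the quoted percentages are weight fractions of the total composite (the text does not spell this out), and the 100 g silicone batch size is purely illustrative:

```python
def chitosan_mass_g(silicone_g: float, target_wt_pct: float) -> float:
    """Chitosan mass (g) so that chitosan makes up target_wt_pct of the
    total composite weight (assumed interpretation of '% by weight')."""
    return silicone_g * target_wt_pct / (100.0 - target_wt_pct)

# Illustrative 100 g silicone (part A) batch at the three studied loadings:
for pct in (1.5, 2.5, 3.5):
    print(pct, round(chitosan_mass_g(100.0, pct), 3))
```

If the percentages were instead meant relative to the silicone mass alone, the weighing would simply be `silicone_g * pct / 100`; the difference is small at these low loadings.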

**Evaluating the antifungal activity of chitosan loaded silicone specimens using viable count test**
**Specimen fabrication**
A mold was prepared with specimen dimensions of $10 \times 10 \times 2.3$ mm (length, width, and thickness respectively) [10]. The chitosan-loaded silicone was then mixed and poured to obtain the specimens used to estimate the viable count of *C. albicans*.
**Isolation of *C. albicans***
*C. albicans* was obtained from the oral cavities of 16 patients with signs of oral thrush and denture stomatitis. *Candida* was isolated from the oral cavity by taking a smear with a swab [11].
This involves gently rubbing the lesion with a sterile cotton swab and then inoculating a primary isolation medium such as Sabouraud dextrose agar [12].
These swabs were cultured on Sabouraud dextrose agar, incubated aerobically at $37^\circ C$ for 24-48 hours, and then kept at $4^\circ C$ for further investigation [13].
**Identification of *C. albicans***
*Candida* was identified by colony morphology on SDA (creamy, smooth, pasty, convex colonies) [13], by the Gram stain method under microscopical examination [14], and by the germ tube formation procedure [15]; final confirmation was done biochemically using the API Candida system (bioMérieux).
**Evaluating viable count of *C. albicans***
To determine the antifungal activity of the chitosan/silicone composites, *C. albicans* was diluted in 0.9% NaCl, and a candidal suspension of about $10^7$ CFU/ml (0.5 McFarland standard) was made using a McFarland densitometer. Each chitosan/silicone specimen was placed in a tube containing 9.9 ml of Sabouraud dextrose broth, into which 100 $\mu l$ of the fungal suspension was dispensed. The final cell concentration was $10^5$ CFU/ml [16].
The tubes were then incubated for 24 hours at 37°C; after that, 100 μl of each mixture was placed in 9.9 ml of NaCl (0.9%) and a tenfold serial dilution was applied. 100 μl was taken from the second dilution, spread on SDA, and incubated aerobically for 24 h at 37°C (Figure 2).
This procedure was repeated after 14 days and 30 days of specimen storage in artificial saliva at 37°C.
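The dilution arithmetic behind this assay can be sketched as below. Only the volumes (100 μl into 9.9 ml) and stock concentration come from the text; the plate colony count in the example is hypothetical:

```python
# Inoculum check: 100 ul (0.1 ml) of a ~1e7 CFU/ml suspension into 9.9 ml broth
stock_cfu_ml = 1e7                               # 0.5 McFarland suspension
final_cfu_ml = stock_cfu_ml * 0.1 / (9.9 + 0.1)  # ~1e5 CFU/ml, as stated

def cfu_per_ml(colonies: int, dilution_factor: float, plated_ml: float) -> float:
    """Back-calculate the viable count of the original mixture from a spread plate."""
    return colonies / plated_ml * dilution_factor

# Hypothetical plate: 150 colonies counted from 0.1 ml of a 1:1000 dilution
estimate = cfu_per_ml(150, 1000, 0.1)  # ~1.5e6 CFU/ml
print(final_cfu_ml, estimate)
```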

**Figure 2:** (A) Placement of specimen in the broth, (B) Serial dilution, (C) *C. albicans* growth after 24 hours of spread plate
**Evaluating the antimicrobial activity of chitosan loaded silicone specimens using disk diffusion test**
A mold 10 cm in width and length and 0.5 mm in thickness was prepared; mixing and pouring the chitosan with the silicone yielded a silicone sheet 0.5 mm thick. A metal rod 6 mm in diameter with a sharp edge was then used to cut small discs (6 mm) for the disk diffusion test. Mueller-Hinton (MH) agar containing 5 μg/ml methylene blue and 2% glucose was used [17].
The media were prepared according to the manufacturer's instructions. Five well-isolated colonies of *C. albicans* from Sabouraud dextrose agar were taken and suspended in 5 ml of 0.85% normal saline to achieve a turbidity of 0.5 McFarland. A sterile swab was immersed in the inoculum suspension and the excess fluid was pressed out. The glucose methylene blue-MH agar was swabbed carefully in 3 directions to ensure even growth on the surface of the agar plate [18].
The agar surface was left for about 5 minutes, then the silicone disks (with and without chitosan) were placed on the agar surface and the plates were left at room temperature for 120 minutes to allow diffusion of the antifungal agents [19]; the plates were then incubated aerobically for 24 h at 37°C. An electronic digital caliper was used to measure the inhibition zone that appeared around the disks.
**Tear strength test**
An angle test sample without a nick was fabricated according to ISO 34-1:2015 specifications [20].
The sample has one apex (a right angle) and two tab ends, with a thickness of 2 ± 0.2 mm. The un-nicked angle sample is used to measure tear initiation and propagation: stress concentrates at the angle until the tear starts, and further stress is responsible for tear propagation.
The thickness of the sample was measured with a digital caliper in the area where tearing is expected to occur (at the right angle). The sample was fixed in a computerized universal testing machine and stretched at a grip-separation speed of 500 mm/min until it tore completely. The maximum force at break was recorded. The tear strength, expressed in N/mm, was calculated according to the following formula (1):
\[ \text{Tear strength} = \frac{F}{d} \quad (1) \]
where F is the maximum force in newtons and d is the thickness in millimeters.
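Formula (1) is straightforward to apply; as a sketch (the force and thickness values below are illustrative, not measured data from this study):

```python
def tear_strength_n_per_mm(max_force_n: float, thickness_mm: float) -> float:
    """Eq. (1): tear strength = F / d, in N/mm."""
    return max_force_n / thickness_mm

# Illustrative specimen: 40 N maximum force, 2.0 mm thickness at the right angle
print(tear_strength_n_per_mm(40.0, 2.0))  # 20.0 N/mm
```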
**Tensile Strength**
A Type 2 dumb-bell sample was fabricated according to ISO 37:2017 [21].
The sample thickness was measured at the centre and at both ends of the test length (the narrow portion) with an electronic digital caliper, and the average was used to calculate the cross-sectional area of the narrow part of the sample. The width and length of the narrow part were also measured.
The sample was fixed in a computerized universal testing machine and stretched at a grip-separation speed of 500 mm/min until it separated completely. The maximum force at break was recorded. The tensile strength, expressed in MPa, was calculated using the following formula (2):
\[ \text{Tensile strength} = \frac{F_m}{Wt} \quad (2) \]
where Fm is the maximum force in newtons, W is the width of the narrow portion of the sample in millimetres, and t is the sample thickness over the narrow portion in millimetres.
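Formula (2) can be sketched the same way (again with illustrative, not measured, values); note that N/mm² is numerically equal to MPa:

```python
def tensile_strength_mpa(max_force_n: float, width_mm: float, thickness_mm: float) -> float:
    """Eq. (2): tensile strength = Fm / (W * t); N/mm^2 is numerically MPa."""
    return max_force_n / (width_mm * thickness_mm)

# Illustrative specimen: 50 N at break, narrow portion 4.0 mm wide, 2.5 mm thick
print(tensile_strength_mpa(50.0, 4.0, 2.5))  # 5.0 MPa
```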
**RESULTS**
The FTIR (Fourier transform infrared spectroscopy) analysis showed that there was no chemical reaction between the chitosan micro-particles and the maxillofacial silicone material (Figures 3 and 4).

**Figure 3:** FTIR spectrum of VST-50F platinum RTV silicone elastomer before the addition of chitosan
The viable count test showed a highly significant decrease in the colony forming units of *C. albicans* in the experimental groups (1.5%, 2.5%, and 3.5% chitosan) compared with the control group. The decrease was chitosan-concentration dependent, with the lowest mean value at 3.5% chitosan (Figure 5 and Table 1). In the disk diffusion test, the inhibition zone measurement showed a highly significant increase as the chitosan concentration increased (Figure 6 and Table 2).
**Table 1: Comparison of the means of *C. albicans* counts (one-way ANOVA) in different groups for each incubation period**
| Time | Source | Sum of Squares | df | Mean Square | F | Sig. | Effect size (partial eta squared) |
|---------|----------|----------------|----|-------------|----------|------|-----------------------------------|
| 1 day | Contrast | 290894.1 | 3 | 96964.69 | 1254.8 | .000 | 0.991 |
| | Error | 2781.9 | 36 | 77.275 | | | |
| 14 days | Contrast | 418721.3 | 3 | 139573.8 | 3363.674 | .000 | 0.996 |
| | Error | 1493.8 | 36 | 41.494 | | | |
| 30 days | Contrast | 431058.9 | 3 | 143686.3 | 3187.127 | .000 | 0.996 |
| | Error | 1623 | 36 | 45.083 | | | |
**Table 2: Comparison of the means of the disk diffusion test (one-way ANOVA) in different groups**
| Disk diffusion test | Sum of Squares | df | Mean Square | F | Sig. | Effect size |
|---------------------|----------------|-----|-------------|-------|------|-------------|
| Between Groups | 804.416 | 3 | 268.139 | 2513.715 | .000 HS | 0.998 |
| Within Groups | 3.84 | 36 | 0.107 | | - | |
| Total | 808.256 | 39 | | | - | |
The results showed a highly significant increase in the mean tear strength after the incorporation of chitosan into the silicone elastomer for all experimental groups (Figure 7 and Table 3), with a significant decrease in the mean tensile strength of the silicone after chitosan addition (Figure 8 and Table 4).
**Table 3: One-way ANOVA for tear strength test results**
| Tear strength test | Sum of Squares | df | Mean Square | F | Sig. | Effect size (eta) |
|--------------------|----------------|----|-------------|-----|----------|------------------|
| Between Groups | 50.534 | 2 | 25.267 | 13.44 | .000 HS | 0.706 |
| Within Groups | 50.758 | 27 | 1.88 | | | |
| Total | 101.292 | 29 | | | | |
**Table 4: One-way ANOVA for tensile strength test results**
| Tensile strength test | Sum of Squares | df | Mean Square | F | Sig. | Effect size |
|-----------------------|----------------|----|-------------|-----|----------|-------------|
| Between Groups | 7.524 | 2 | 3.762 | 16.28 | .000 HS | 0.739 |
| Within Groups | 6.239 | 27 | 0.231 | | | |
| Total | 13.762 | 29 | | | | |
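The F ratios and effect sizes in Tables 3 and 4 can be recovered from the reported sums of squares and degrees of freedom alone; the sketch below assumes the "effect size" column reports eta, i.e. the square root of SS(between)/SS(total):

```python
def anova_from_ss(ss_between: float, df_between: int,
                  ss_within: float, df_within: int):
    """Recover the F statistic and eta effect size of a one-way ANOVA
    from its sums of squares and degrees of freedom."""
    f = (ss_between / df_between) / (ss_within / df_within)
    eta = (ss_between / (ss_between + ss_within)) ** 0.5
    return f, eta

# Values taken from Table 3 (tear strength) and Table 4 (tensile strength):
f_tear, eta_tear = anova_from_ss(50.534, 2, 50.758, 27)
f_tens, eta_tens = anova_from_ss(7.524, 2, 6.239, 27)
print(round(f_tear, 2), round(eta_tear, 3))  # 13.44 0.706
print(round(f_tens, 2), round(eta_tens, 3))  # 16.28 0.739
```

Both pairs reproduce the tabulated F and effect-size values, which supports reading the effect-size columns as eta rather than eta squared.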
**DISCUSSION**
The silicone surfaces of maxillofacial prostheses are in contact with soft tissues, saliva, and nasal secretions. These fluids may allow colonization of the surfaces by different microorganisms, leading to degradation of the silicone elastomer or to infection. One of the most common colonizing microorganisms is *Candida albicans*. Plaque containing high levels of *C. albicans* becomes acidic as a result of candidal metabolism, which may later produce inflammation of mucosal surfaces or microbial colonization of the hard tissues [5].
In this study we selected the VST-50F RTV (room temperature vulcanized) silicone elastomer because of its favourable mechanical properties, such as tear and tensile strength, so it can be removed from the adjacent tissues with little chance of distortion. It is an economical product with low viscosity compared with other maxillofacial silicone materials, which provides availability and ease of manipulation. It has a fast setting time of about 4-6 hours at room temperature, and there is no need for a water bath or the other equipment used with heat-vulcanized silicone. All these characteristics made it suitable for this study.
Chitosan, a natural extract from chitin, is a polysaccharide proven to have a broad spectrum of antimicrobial activity against fungi, yeast, and bacteria, and recent studies have revealed significant anti-biofilm activity against several microorganisms, including *C. albicans* [7].
**The effect of chitosan on candida growth**
The viable count results revealed a highly significant decrease in colony forming units/ml of *C. albicans* after incorporating chitosan micro-particles into the silicone material, indicating the development of a composite with antifungal activity.
The disk diffusion test showed a highly significant increase in the inhibition zone measurement as the chitosan concentration increased, with the highest result for 3.5% chitosan (12.2 mm).
Multiple explanations have been proposed for the antimicrobial activity of chitosan. It has been suggested that the interaction between microbial cells and chitosan occurs at the cell surface, increasing the permeability and irregularity of the cell wall and leading to leakage of intracellular components, which may prevent RNA and DNA synthesis and cause direct cell death. Another proposed mechanism is that positively charged chitosan interacts with the negatively charged cell membrane, altering its permeability and allowing the release of intracellular material into the medium. The polycationic character of chitosan and its derivatives is fundamental to its antifungal efficacy against *Candida* species. The high susceptibility of *C. albicans* to chitosan could be attributed to the presence of sialic acid among its cell wall components, as terminal residues of glycoprotein glycans; sialic acid-rich glycoproteins bind selectins (a family of cell adhesion molecules that bind sugar polymers) in humans and other organisms [22].
It has also been suggested that low concentrations of chitosan cause [23]:
- An efflux of K⁺ and stimulation of extracellular acidification.
- An increased transmembrane potential difference of the cells.
- An increased uptake of Ca²⁺.
These effects are due to a decrease in the negative surface charge of the cells. At higher concentrations, in addition to the efflux of K⁺, chitosan produced:
- A large efflux of phosphates.
- A decreased uptake of Ca²⁺.
- An inhibition of respiration and fermentation.
- The inhibition of growth.
Another suggestion proposes that chitosan acts as a chelating agent and minimizes the access of the fungi to nutritional materials in the surrounding environment. Another mechanism is the ability of chitosan to penetrate the fungal cell wall and interact with DNA, preventing mRNA transcription and the synthesis of proteins and enzymes in the cells. Chitosan can also bind metals, which are toxic products involved in the growth of microorganisms [24]. Chitosan is therefore considered efficient in reducing the growth of *Candida albicans*.
**The effect of chitosan on mechanical properties of maxillofacial silicone**
**Tear strength:** After the addition of chitosan (2.5% and 3.5%) to the silicone elastomer, the tear strength increased by 6.498% and 14.058%, respectively, compared to the control group. This increase may be due to the fact that the 2.5% and 3.5% chitosan additions may contain monomer entrapped within the silicone polymer substructure; this monomer may be regarded as an impurity that can easily evaporate from the material [25]. Such impurities decrease the curing rate by contaminating the catalyst [26], and tear strength is higher when the matrix is slightly under-cured [27].
**Tensile strength:** The results showed a significant decrease in the tensile strength of the silicone elastomer after chitosan addition, with the lowest tensile strength at 3.5% chitosan. A suggested explanation is that the chitosan particles are distributed in the continuous polymeric phase of the elastomer; therefore, the effective cross-sectional area of the composite is smaller than that of the pure polymeric matrix, even if there are no bubbles or cracks between the matrix and the chitosan particles. Tensile stress pulls the chitosan particles off the polymeric matrix, with detachment proceeding along the interface between the chitosan particles and the matrix. Furthermore, the chitosan imparts a certain strengthening effect, as the micro-cracks between the matrix and chitosan can consume energy; but when the chitosan concentration increases beyond a certain amount, the particles come close to one another or aggregate, and the micro-cracks transform into macro-defects, resulting in a decrease in tensile strength [28].
**CONCLUSION**
Taking into consideration the limitations of the present study, it was concluded that the incorporation of chitosan microparticles into VST-50F maxillofacial silicone material helps to produce a silicone elastomer with antifungal activity, thus decreasing the susceptibility to Candidal colonization and *Candida*-associated infection. This activity appeared to be concentration dependent. The incorporation also enhanced tear strength, decreasing the chance of distortion when the prosthesis is removed from the tissues, but decreased the tensile strength of the material.
**CONFLICT OF INTEREST**
The authors declared no potential conflict of interest with respect to the authorship and/or publication of this paper.
**REFERENCES**
1. Chang TL, Garrett N, Roumanas E, et al. Treatment satisfaction with facial prostheses. J Prosthet Dent 2005; 94:275-80.
2. Fatihallah AA, Alsamaraay ME. Effect of polyamide (nylon 6) micro-particles incorporation into rtv maxillofacial silicone elastomer on tear and tensile strength. JBCD 2017; 29:7-12.
3. Kurunmäki H, Kantola R, Hatamleh MM, et al. A fiber-reinforced composite prosthesis restoring a lateral midfacial defect: A clinical report. J Prosthet Dent 2008; 100:348-52.
4. Scolozzi P, Jaques B. Treatment of midfacial defects using prostheses supported by ITI dental implants. Plast Reconstr Surg 2004; 114:1395-404.
5. Kurtulmus H, Kumbuloglu O, Özcan M, et al. *Candida albicans* adherence on silicone elastomers: Effect of polymerisation duration and exposure to simulated saliva and nasal secretion. Dent Mater 2010; 26:76-82.
6. Oves M, Aslam M, Rauf MA, et al. Antimicrobial and anticancer activities of silver nanoparticles synthesized from the root hair extract of *Phoenix dactylifera*. Mater Sci Eng C 2018; 89:429-43.
7. Costa E, Silva S, Tavaria F, et al. Antimicrobial and antibiofilm activity of chitosan on the oral pathogen *Candida albicans*. Pathogens 2014; 3:908-19.
8. Yeh HC. Effect of silica filler on the mechanical properties of silicone maxillofacial prosthesis. Doctoral Dissertation, Indiana University School of Dentistry.
9. Tukmachi M, Moudhaffer M. Effect of nano silicon dioxide addition on some properties of heat vulcanized maxillofacial silicone elastomer. JPBS 2017; 12:37-43.
10. Chladek G, Mertas A, Barszczewska-Rybarek I, et al. Antifungal activity of denture soft lining material modified by silver nanoparticles—A pilot study. Int J Mol Sci 2011; 12:4735-44.
11. Issa MI, Abdul-Fattah N. Evaluating the effect of silver nanoparticles incorporation on antifungal activity and some properties of soft denture lining material. JBCD 2015; 27:17-23.
12. Axell T, Henricsson V. Association between recurrent aphthous ulcers and tobacco habits. Eur J Oral Sci 1985; 93:239-42.
13. Baveja CP. Textbook of microbiology for dental students. Arya 2006.
14. Marler LM, Siders JA, Allen SD. Direct smear atlas: A monograph of gram-stained preparations of clinical specimens. Lippincott Williams & Wilkins 2001.
15. Betty AF, Daniel FS, Alice SW, et al. Bailey & Scott’s diagnostic microbiology. International Edition 2007; 778-97.
16. Monteiro DR, Gorup LF, Takamiya AS, et al. Silver distribution and release from an antimicrobial denture base resin containing silver colloidal nanoparticles. J Prosthodont 2012; 21:7-15.
17. National Committee for Clinical Laboratory Standards. Method for antifungal disk diffusion susceptibility testing of yeast: Proposed guideline M44P. Wayne 2004.
18. Lee SC, Lo HJ, Fung CP, et al. Disk diffusion test and E-test with enriched Mueller-Hinton agar for determining susceptibility of Candida species to voriconazole and fluconazole. J Microbiol Immunol Infect 2009; 42:148-53.
19. Möller AJ. Microbiological examination of root canals and periapical tissues of human teeth. Methodological studies. Odontol Tidskr 1966; 74.
20. ISO B. 34–1. Rubber, vulcanised or thermoplastic—determination of tear strength, part 1: Trouser, angle and crescent test pieces. Deutsches Institut für Normung eV, Berlin, DIN ISO 2004; 34-1.
21. ISO B. 37: 2011 Rubber, vulcanized or thermoplastic—Determination of tensile stress-strain properties. British Standards Institution: London, UK 2011.
22. Tayel AA, Moussa S, Wael F, et al. Anticandidal action of fungal chitosan against Candida albicans. Int J Biol Macromol 2010; 47:454-7.
23. Peña A, Sánchez NS, Calahorra M. Effects of chitosan on Candida albicans: Conditions for its antifungal activity. Biomed Res Int 2013; 2013.
24. Atai Z, Atai M, Amini J. In vivo study of antifungal effects of low-molecular-weight chitosan against Candida albicans. J Oral Sci 2017; 59:425-30.
25. Goldblatt MW, Farquharson ME, Bennett G, et al. ε-Caprolactam. Br J Ind Med 1954; 11:1.
26. Hu X. Analyses of effects of pigments on maxillofacial prosthetic material. Doctoral Dissertation, The Ohio State University.
27. Sreeja TD, Narayanankutty SK. Studies on short nylon fiber-reclaimed rubber/elastomer composites. Doctoral Dissertation, Cochin University of Science & Technology.
28. Liu Q, Shao LQ, Xiang HF, et al. Biomechanical characterization of a low density silicone elastomer filled with hollow microspheres for maxillofacial prostheses. J Biomater Sci Polym Ed 2013; 24:1378-90.
Interbasin Compact Committee Basin Roundtable
Rio Grande Interbasin Roundtable MEETING MINUTES March 14, 2017
Attending – Those who signed in are as follows: Ron Brink, Cindy Medina, Rio de la Vista, Wayne Schwab, Charles Spielman, Keith Holland, Terry Chiles, Karla Shriver, Stan Moyer, Virginia Christensen, Zeke Ward, Larry Sveum, Helen Smith, Bethany Howell, Brenda Felmlee, Robert Getz, Christi Bode, Gene Farish, Ann Bunting, Andrea Bachman, Adam Moore, Eugene Jacquez, Charlotte Bobicki, Craig Cotten, Dale Pizel, Matthew Gallegos, Megan Holcomb, Greg Johnson, Nathan Coombs, Andrea Taillacq, Heather Dutton, Travis Smith, Ruth Heide, Matt Hildner.
Welcome and Introduction: Vice Chair Heather Dutton called the meeting to order at 3 p.m. at the offices of the San Luis Valley Water Conservancy District in Alamosa, CO. Those in attendance were introduced. A quorum of 13 was established.
Agenda Approval: Ron Brink moved that the agenda be approved with the addition of the Statewide Water Supply Initiative report. Rio de la Vista seconded the motion, which carried unanimously.
Approval of February 14, 2017 Minutes: Keith Holland motioned to approve the minutes. Judy Lopez seconded the motion, which was approved.
Public Comment:
* Ron Brink thanked everyone who worked on the Ag Producers workshop in February.
* Adam Moore, of the Colorado State Forest Service, brought copies of the state's annual forest health survey. The focus of the report is, in part, how fire has impacted watersheds.
Mountain Home Dam Rehab Engineering Project Funding Request: Wayne Schwab, of the Trinchera Irrigation District, presented the proposal. The irrigation district, which has 47 stockholders and irrigates roughly 12,000 acres, is seeking $70,000 from the Basin Account for engineering studies on an upgrade to improve dam safety. The overall budget is $100,000. Currently only one of the three valve gates works, and the Colorado Division of Water Resources has asked that all three be functional to increase drawdown capacity. Another problem is that the dam leaks nearly 2,000 acre-feet per year. Schwab said the engineering study will produce three alternatives, preliminary designs, and construction-cost estimates. In response to questions from roundtable members, grant writer Nicole Langley said Mountain Home has conducted earlier releases to limit the leakage and meet irrigation schedules. Schwab said that practice has not impacted the conservation pool, nor will the draining of the reservoir to conduct the repairs bring water levels down to the conservation pool. In response to further questions, Schwab said divers evaluated the dam in phase one and will dive again for certain repairs, and the irrigation district will work with the Trinchera Subdistrict, although most of the subdistrict's replacements will be in the middle reach of the Rio Grande. Travis Smith recommended that a description of the subdistrict in the application would be helpful in securing other funding. Heather Dutton said it is a good project to invest in for nonconsumptive needs but, most of all, meets another basin need by improving storage capacity. Karla Shriver motioned to recommend approval with the provision that subdistrict uses be recognized in the application. Judy Lopez seconded the motion. On further discussion, Brink said approval was merited because the project fits into many categories of need and added that if the project doesn't go forward, the reservoir won't have a conservation pool.
A voice vote to approve the project carried unanimously.
Rio Grande State Wildlife Area Design Project Funding Request: Andrea Bachman, program manager for the Rio Grande Headwaters Restoration Project, presented the proposal. A video was shown of the project areas, including the San Luis Valley Canal headgate, bank erosion on the Rio Grande, and the Centennial Ditch headgate. Bachman said the project addresses priorities identified in the 2001 Study. Partners include Colorado Parks & Wildlife, the San Luis Valley Canal Company and the Centennial Ditch Company. The San Luis Valley Canal, which waters 20,200 acres and has 78 shareholders, will see improvements to its headgate to counter channel instability, sedimentation and an aging headgate. The Centennial Ditch, which has 22 shareholders and irrigates 8,500 acres, faces sedimentation, an aging diversion dam, and high maintenance requirements. Ditch board members are also concerned the river could jump the channel in high flows, thereby circumventing the diversion dam and headgate. The Rio Grande 2 diversion, which needs an engineering survey and design, has been added to the proposal. Bachman said the overall project would stabilize 1,000 feet of streambank, restore and enhance riparian and aquatic habitat, and protect more than 100 acres of critical wetland habitat.
Bachman said WSRF Basin funds will make up $90,000, or 42 percent of the project budget. Matching funds will come from CPW (12 percent), the North American Wetlands Conservation Act (28 percent), Great Outdoors Colorado (12 percent) and in-kind matches from CPW (5 percent) and Colorado Rio Grande Restoration Foundation (2 percent).
San Luis Valley Canal Board President Terry Chiles said the canal's headgate is 100 years old and the concrete around the structure has deteriorated. The bank below the headgate is unstable. He said high flows could put the river in the canal and, although a flood gate could put flows back in the river, the flows would bypass the Centennial Ditch.
Bachman said funds from the WSRF Basin account would go toward design, surveying and permitting for the SLV Canal's headgate, the Centennial diversion and Rio Grande 2. Streambank work is already funded.
Shriver said the Centennial is also old and in need of repair. The 2001 Study was meant to identify all of these old structures and she noted they were no different than roads or sewers — eventually they'll need to be replaced. Rio de La Vista said the potential failure of the CPW's siphon would put Southwestern Willow Flycatcher habitat at risk. She supports the project strongly. Smith said a cornerstone of the water supply reserve account was to bring partners and ditch companies together and to provide seed money. He asked if any work needed to be done before spring runoff. Chiles said he didn't believe any work could be done by then. Cindy Medina asked if the ditch companies still needed to sign off on the proposal. Rio de la Vista said the Rio Grande Headwaters Land Trust also has easements on a significant amount of land serviced by the ditches in the project.
Judy Lopez moved to approve the proposal, noting it serves a variety of needs. Holland seconded the motion, which carried unanimously. Dutton and Shriver abstained as they are on the Board of the RGHRP and Shriver is also a Centennial Ditch member.
Dam Safety Presentation: Bill McCormick, Chief of Dam Safety with the Colorado Division of Water Resources, gave an overview of his office's work. His office has 12 engineers and regulates 1,870 dams. The average age of dams in the San Luis Valley is 91 years. They provide 368,495 acre-feet of storage. Five of the dams designated as high hazard were built between 1908 and 1914. McCormick reviewed recent troubles with flooding and spillway damage at Oroville Dam in California, noting 188,000 people were evacuated. In the valley, Terrace, Beaver Park, Rio Grande, Humphreys and Continental reservoirs have all had work done. Alberta Park, Trujillo Meadows and Rito Hondo are being studied for repairs by CPW. McCormick said increased storage is critical but the safety of our reservoirs is paramount. His office appreciates the diligence of dam owners and the roundtables for funding to maintain and repair dams. In response to questions from the roundtable, McCormick noted the recent breach of a dam in Nevada that damaged a small town and said emergency spillways allow flood flows to be routed around a dam without anyone having to throw a switch. He also said the 2013 flooding along the Front Range taught the division that the design of most dams was good. The only failures came from low-hazard dams that were not built for that much rain. All of the dam safety information, including inspection reports and the status of EAPs, is online.
CWCB Update: CWCB Program Manager Megan Holcomb gave an update on the Statewide Water Supply Initiative, which is a technical platform with a focus on delivering data for the future refinement of the basin implementation plans and the Colorado Water Plan. The policy of the state water plan will inform SWSI. New and refined approaches include scenario planning with climate change and other drivers such as agricultural, environmental and recreation supply gaps. The scheduled completion of analyses is December. There will be less public back and forth. The Technical Advisory Groups, which will be small and flexible, will review all phases. She said the agency's goal is to have one TAG meeting and a second, if necessary. Its membership has not been finalized. In response to a question about the roundtable's role, CWCB Program Manager Greg Johnson said they were still trying to determine it. He added that there may be time and funding limitations that determine how much information to get from the individual basins. Johnson also gave an update on the second phase of the Colorado River Risk Study. Western Slope roundtables would like more detail in the study, including how to use the state model, water rights, its relation to the Bureau of Reclamation model and, possibly, contingency planning for reservoir operations. Johnson said there has been resistance from the Front Range roundtable, which was undecided on whether the roundtable, its management committee, or its technical advisory group would participate. CWCB staff recommends support of the study. Smith suggested the roundtable support the CWCB staff recommendation.
DWR Update: Division Engineer Craig Cotten reviewed the most recent 10-day report. His office is forecasting an annual flow of 750,000 acre-feet on the Rio Grande at the Del Norte gauge. The delivery obligation would be 229,000 acre-feet, 19 percent of which is expected to come during irrigation season. On the Conejos, his office is forecasting 450,000 acre-feet at the Mogote gauge, which would be 147 percent of the historic average. The obligation downstream under the compact would be 232,000 acre-feet, 45 percent of which would have to be delivered during irrigation season. Cotten said basin snowpack is higher than average and significantly higher than in the last three years, although it is dwindling some because of warm weather. Cotten presented a table of stream flow forecasts. The Rio Grande is projected at 113 percent of average, while the San Antonio projects highest at 160 percent of average. He said the Rio Grande Compact Commission meets April 5 in Santa Fe. Irrigation season will open on La Jara and Hot creeks March 16. He encouraged folks to come forward with ideas on opening dates for other watersheds.
IBCC Update: Smith said the next meeting is April 20.
Dutton said the Colorado Foundation for Water Education has opened registration for its Water Fluency course, which is issue-‐based.
The meeting adjourned at 5 p.m.
Smarter Learning with SlimStampen
Steffen Bürgers, Menno Nijboer & Hedderik van Rijn University of Groningen
A good way to study factual information is to rehearse a particular fact several times, intermixed with a couple of other items. Ideally the information is not simply read over, but actively retrieved from memory. The more time between rehearsals, the higher the likelihood of retaining the fact, but only when retrieval is successful. SlimStampen is an electronic learning environment and computer algorithm that uses scientific insights about human memory to present facts at the right time for optimal learning benefit and long-term knowledge retention.
Fact learning is important. Visiting foreign countries, understanding the evening news, finding your way around on Google and Wikipedia knowing where to look, or ambitiously trying to win money in a quiz show – factual knowledge of one kind or another is important for everyone in a variety of situations.

Alas, most people consider fact learning a boring and cumbersome activity. Consequently, many pupils and students 'cram' (Dutch: stampen) as many facts as possible into their heads just before a test, which in the short term is actually quite effective. However, long-term retention of the material is rather poor compared to students who use spaced learning periods, as Ebbinghaus demonstrated more than a hundred years ago (1885).

Testing effect and spacing effect

One of the reasons why cramming is ineffective in terms of long-term retention is that the facts are often read, but seldom retrieved from memory. However, actively retrieving a fact from memory leads to better long-term retention than simply reading and understanding it. This can be seen in the testing effect, the name given to the finding that knowledge retention is better when material is learned by actively retrieving information (e.g., "What is the translation of the French word fromage?") than when simply studying the material and reading it over (e.g., "fromage = cheese"; Roediger & Karpicke, 2006).
Another effect with a positive influence on knowledge retention is the spacing effect: Rehearsing a fact a few times over a long period is superior to rehearsing it an equal number of times over a shorter period of time. Importantly, this effect is not simply limited to complete learning sessions, but also holds within a single learning session. In other words, rehearsing one item three times and then moving on to repeat the next item three times and so on (e.g., AAA, BBB, CCC) is less effective than rehearsing each item once and then moving on to the second rehearsal of each item (e.g., ABC, ABC, ABC).
These two memory effects might not seem particularly surprising and many people can probably remember studying vocabulary with a parent or friend, where the helper times the intervals between rehearsals depending on the success of the learner. When an item could be recalled quickly the helper would wait longer before presenting the item again compared to if an item could not be recalled or was recalled with more difficulty. By using such a strategy the helper is using the spacing effect and testing effect to the learner's advantage.
The above example begs an obvious question:
What does the learner do if there is nobody to help test his or her knowledge and order test items in a constructive way? A single person might space training sessions over time, but within a single session it will be rather complicated to monitor retrieval difficulty and sort items accordingly for future presentations. Also, there are obviously differences between individuals regarding their progress and how much time between rehearsals is ideal. This is one reason why it is difficult to use the spacing effect for the benefit of school classes (Pashler et al., 2007).
Fact-learning methods
These issues notwithstanding, there are a couple of frequently used fact-learning methods that employ the testing and spacing effects – for example, the Pimsleur method or the flashcard method (Leitner system).
In the Pimsleur method, learners are asked to retrieve previously learned words or sentences of a foreign language (testing effect), and the interval between recalls of a given word increases with each repetition (spacing effect). Even so, the spacing is not adapted to the individual learner, and the interval between items might be too small for some and too big for others. Similarly, it might be ideal for some facts and less ideal for others.
The flashcard method is more attuned to the individual learner and the specific difficulty of items than the Pimsleur method. The facts are written on cards and put in a "starting box", meaning they have never been rehearsed before. Each time the learner successfully retrieves a fact, the corresponding flashcard is moved to a higher-level box. When a card reaches the last box, the fact is declared committed to long-term memory. However, no matter which box the card is in, when a fact cannot be recalled correctly, the card is put back into the first box. This resetting of items incorporates individual differences between learners, but it also disregards the degree to which the item had already been learned (i.e., maybe the fact could have been recalled with a little more thinking time?). It is thus unlikely that items reaching the last box are forgotten, but quite a few of them would probably already have been retained at an earlier stage.
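The box mechanics of the Leitner scheme are simple enough to sketch in a few lines. This is an illustrative implementation, not code from any particular flashcard product; the five-box count and the function name are our assumptions:

```python
def leitner_update(box: int, correct: bool, top_box: int = 5) -> int:
    """One step of the flashcard (Leitner) scheme described above.

    A correct recall promotes the card one box (until the last box,
    where it counts as committed to long-term memory); any failure
    sends the card all the way back to box 1, regardless of how well
    it was known before -- the all-or-nothing reset criticised above.
    """
    if correct:
        return min(box + 1, top_box)
    return 1

# A card that has climbed to box 4 but is missed once restarts from scratch:
leitner_update(4, correct=False)  # → 1
```

The hard reset on failure is exactly what makes the method coarse: a near-miss and a complete blank are treated identically.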
Theories of memory
Thankfully, it is possible to use modern theories of human memory to create the best possible framework for optimal fact learning (Pavlik & Anderson, 2005; Taatgen, 2009).
According to these theories the knowledge of a particular fact is expressed by how "active" that fact is in our memory. This activity value can be calculated depending on how often the fact has been retrieved, as well as when it has been retrieved. Figure 1 illustrates the activity of a certain fact in two distinct situations. The instance of a fact retrieval is denoted by a peak, after which the activation slowly decays over time. The first peak, where the line starts, denotes the first time the item has been encountered and stored in memory. The blue, solid line shows the activity of a fact that is recalled four times in rapid succession (cramming). The red, dotted line on the other hand shows the activity of a fact that is retrieved from memory only three times, but with more time in-between retrievals (spacing). As can be seen, the activity of the blue fact decreases more rapidly after the last rehearsal than the activity of the red fact, in line with the spacing effect. Furthermore, the speed with which activity drops after each rehearsal decreases with each repetition and, importantly, depends on the activity level at the moment of retrieval. In other words, when a fact is still very fresh in memory, and activity is high, it is easy to recall, but it also decays away more quickly than if activity had been lower beforehand. This leads to the relative benefit of cramming knowledge immediately before an exam, but forgetting most of the facts shortly afterward.
In Figure 1 the facts are assumed to be equally difficult to illustrate the effect of spacing and activity levels at the moment of retrieving a fact. A more realistic scenario is depicted in Figure 2, with one easy item (blue, solid) and one difficult item (red, dotted). As can be seen, the difficult item needs to be rehearsed five times to be roughly on the same level of activity as the easy item with only one rehearsal.
After this short discussion of human memory theories, recall the situation where a parent or friend selects the facts to be retrieved by the learner. The advantage is precisely that they can adapt the spacing to the difficulty of the item and how fast it could be recalled. Or, now in terms of scientific theory: How active the fact was. According to these memory models, the ease with which a fact can be recalled or the time needed to recall it is directly proportional to the time until the item should be retrieved again to yield optimal fact learning efficiency.
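The activation account sketched above can be made concrete along the lines of Pavlik and Anderson's (2005) model, in which each encounter with a fact decays at its own rate, and that rate depends on how active the fact already was at the moment of the encounter. The parameter values below are illustrative assumptions, not the ones used in SlimStampen:

```python
import math

def activation(encounters, t, c=0.25, alpha=0.177):
    """Activation of a fact at time t (seconds), in the spirit of
    Pavlik & Anderson (2005): m(t) = ln(sum_j (t - t_j)^(-d_j)),
    where encounter j decays with rate d_j = c * exp(m_j) + alpha,
    and m_j is the activation at the moment of encounter j.
    Encounters made while activation is still high therefore decay
    faster, which reproduces the spacing effect."""
    decays = []
    for j, tj in enumerate(encounters):
        if j == 0:
            d = alpha  # first encounter: nothing in memory yet
        else:
            m_j = math.log(sum((tj - tk) ** -decays[k]
                               for k, tk in enumerate(encounters[:j])))
            d = c * math.exp(m_j) + alpha
        decays.append(d)
    return math.log(sum((t - tj) ** -d for tj, d in zip(encounters, decays)))

# Four crammed rehearsals vs. four spaced ones, tested ten hours later:
crammed = [0, 30, 60, 90]
spaced = [0, 600, 1200, 1800]
# activation(spaced, 36000) comes out higher than activation(crammed, 36000)
```

With a fixed decay rate the model would only capture recency; it is the activation-dependent decay that makes crammed rehearsals fade faster.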
Enter SlimStampen
SlimStampen (a combination of the Dutch words for clever cramming) is an adaptive computer-based learning environment that uses the activation equation from the memory theories described above. That way it can calculate when an item should best be presented again to be a challenge for the learner, but not impossible to recall, optimizing the utility of the spacing effect. Similarly, it recognizes when an item is easily remembered and thus needs few rehearsals compared to more difficult items.
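A minimal, hypothetical version of such a selection rule: rehearse whichever item's predicted activation is lowest, and introduce a new item only when every known item is still safely above a forgetting threshold. The fixed decay rate and threshold values here are illustrative simplifications, not SlimStampen's actual parameters:

```python
import math

def pick_next(history, now, d=0.3, threshold=-0.8):
    """history maps each item to its list of past presentation times.
    Returns the item whose activation ln(sum_j (now - t_j)^(-d)) is
    lowest, i.e. the one closest to being forgotten; returns None if
    every item is still above the threshold (a signal that it is time
    to introduce a new fact).  Using a single fixed decay d is a
    simplification of the full activation model."""
    def act(times):
        return math.log(sum((now - t) ** -d for t in times))
    weakest = min(history, key=lambda item: act(history[item]))
    return weakest if act(history[weakest]) < threshold else None

history = {"fromage = cheese": [0.0], "pain = bread": [0.0, 30.0]}
pick_next(history, now=60.0)  # → "fromage = cheese" (seen only once, so weakest)
```

The design choice mirrors the helper in the earlier example: difficult (low-activation) items come back sooner, easy items are left alone until they actually need refreshing.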
In the past years students of the University of Groningen have implemented this system and tested it extensively in high schools and freshman university classes in the Netherlands. Results show that fact learning with SlimStampen is significantly more effective than normal fact learning strategies or improved strategies like flashcard learning, with sometimes more than one grade-point difference on a final test (Figure 4; Van Rijn, 2010).
The SlimStampen method is currently being tested with and improved for students with learning disabilities in the Leonardo project GOLD ( http://www.goldleonardo.eu ), as it is an ideal tool for learners with large individual differences. The tailor-made spacing between item presentations allows complete adaptation to the individual needs of the participant, something that is especially beneficial for special learning populations. Importantly, due to the relatively few presentations of very easy to recall items, and many items that can be answered successfully but still are perceived as challenging, SlimStampen is more motivating and fun.
References
Ebbinghaus, H. (1885). Memory: A contribution to experimental psychology. Translated by Henry A. Ruger and Clara E. Bussenius (1913). New York: Teachers College, Columbia University. Available at http://psychclassics.yorku.ca/Ebbinghaus/index.htm
Pavlik, P.I., & Anderson, J.R. (2005). Practice and forgetting effects on vocabulary memory: An activation-based model of the spacing effect. Cognitive Science, 29(4), 559-586.
Pashler, H., Bain, P., Bottge, B., Graesser, A., Koedinger, K., McDaniel, M., & Metcalfe, J. (2007). Organizing instruction and study to improve student learning (NCER 2007-2004). Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education.
Roediger, H.L., & Karpicke, J.D. (2006). Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255.
Rijn, H. van (2010). SlimStampen. Optimaal leren door kalibratie of kennis en vaardigheid. http://onderzoek.kennisnet.nl/onderzoekentotaal/slimstampen
Taatgen, N.A. (2009). Kennisopslag, vergeten en geheugen. In R. Klarus & R.J. Simons (Eds.), Wat is goed onderwijs? Bijdragen uit de psychologie (pp. 33-‐46). Den Haag: Lemma.
This document is largely based on the text: Optimaal Feiten Leren met ICT by Van Rijn, H., & Nijboer, M. which appeared in 2012 in the series 4W: Weten Wat Werkt en Waarom (edited by A. ten Brummelhuis, M. Van Amerongen, and S. Peters; 1(1), 6–11).
EXTENSION OF THE TIME LIMIT TO FILE THE DECLARATION OF USE REQUIRED BY SUBSECTION 40(3) OF THE TRADE-MARKS ACT
LAURENT CARRIERE*
LEGER ROBIC RICHARD, L.L.P.
LAWYERS, PATENT & TRADEMARK AGENTS
DECLARATION OF USE
When an application for the registration of a trade-mark is based on the intention to use the trade-mark in Canada, a declaration stating that the applicant has commenced use of the trade-mark in Canada must first be filed in order to obtain the registration. Subsection 40(2) of the Trade-marks Act states the following¹:
Marque de commerce projetée
Lorsqu'une demande d'enregistrement d'une marque de commerce projetée est admise, le registraire en donne avis au requérant. Il enregistre la marque de commerce et délivre un certificat de son enregistrement après avoir reçu une déclaration portant que le requérant, son successeur en titre ou l'entité à qui est octroyée, par le requérant ou avec son autorisation, une licence d'emploi de la marque aux termes de laquelle il contrôle directement ou indirectement les caractéristiques et la qualité des marchandises et services a commencé à employer la marque de commerce au Canada, en liaison avec les marchandises ou services spécifiés dans la demande.
Proposed trade-mark
When an application for registration of a proposed trade-mark is allowed, the Registrar shall give notice to the applicant accordingly and shall register the trade-mark and issue a certificate of registration on receipt of a declaration that the use of the trade-mark in Canada, in association with the wares or services specified in the application, has been commenced by
(a) the applicant;
(b) the applicant's successor in title; or
(c) an entity that is licensed by or with the authority of the applicant to use the trade-mark, if the applicant has direct or indirect control of the character or quality of the wares or services.
When must this declaration be filed? Usually, it will be filed once the application for registration has been allowed², pursuant to subsection 40(3), which states³:
---
* Lawyer and trade-mark agent, Laurent Carrière is a partner of LEGER ROBIC RICHARD, L.L.P., a multidisciplinary firm of lawyers, and patent and trademark agents. Publication 230E.
¹ Trade-Marks Act, R.C.S. 1985, c. T-13, as section 40 is modified by C.S. 1993, c. 15, s. 68 and C.S. 1993, c. 44, s. 231.
² The registrar will thus issue a “notification of allowance”, the last paragraph of which reads:
Abandon de la demande
La demande d'enregistrement d'une marque de commerce projetée est réputée abandonnée si la déclaration d'emploi mentionnée au paragraphe (2) n'est pas reçue par le registraire dans les six mois qui suivent l'avis donné aux termes du paragraphe (2) ou, si la date en est postérieure, à l'expiration des trois ans qui suivent la production de la demande au Canada.
Abandonment of application
An application for registration of a proposed trade-mark shall be deemed to be abandoned if the registrar has not received the declaration referred to in subsection (2) before the later of
(a) six months after the notice by the Registrar referred to in subsection (2), and
(b) three years after the date of filing of the application in Canada.
Thus, the declaration of use must be filed:
- within six (6) months of the date from which the registration was allowed, or
- within three (3) years of the date of filing of the application for registration according to the time limit which is most advantageous for the applicant.\(^4\)
If an application for registration is filed on 2008-01-01 and the registration is allowed on 2009-01-01, the applicant will have until 2011-01-01 to file the declaration of use (within three years of the date of filing of the application for registration). However, if an application for registration is filed on 2008-01-01 but if the registration is only allowed on 2012-01-01 (more than three (3) years of the date of filing of the application for registration), the applicant will have until 2012-07-01 (six (6) months of the date from which the registration was allowed) to file the declaration of use.
FAILURE TO FILE
En conformité du paragraphe 40(2) de la Loi sur les marques de commerce, une DÉCLARATION indiquant que le requérant a commencé à utiliser la marque de commerce au Canada en liaison avec les marchandises et/ou services mentionnés dans la demande doit être fournie le ou avant le JJ-MM-AAAA à défaut de quoi la demande sera réputée abandonnée en vertu du paragraphe 40(3) de la Loi. Lors de la préparation de la déclaration d'emploi, veuillez s.v.p. vous référer à l'énoncé des marchandises/services qui figure sur la dernière feuille de vérification que vous avez reçue.
Pursuant to sub-section 40(2) of the Trade-marks Act, a DECLARATION of use of the Trade-mark in Canada in association with the wares and/or services specified in the application must be filed on or before DD-MM-YYYY failing which the application shall be deemed abandoned pursuant to sub-section 40(3) of the Act. When preparing your declaration of use, please refer to the statement of wares/services appearing in the latest Proof Sheet that you have received.
\(^3\) As modified by the coming into effect, on 1994-01-01, of section 231 of the North American Free Trade Agreement Implementation Act, S.C. 1993, c. 44.
\(^4\) In theory, nothing prevents the declaration of use from being filed before the notification.
If the applicant does not file the declaration of use, the application for registration will be deemed to be abandoned, without any notice to that effect by the registrar\(^5\). There will be such a notice\(^6\) if the application for registration contains a basis other than proposed use\(^7\): in this case, the registrar will send a notice\(^8\) stating that the portion of the application based on proposed use of the trade-mark is abandoned, but granting the applicant an additional period of two (2) months to proceed with respect to any other basis\(^9\) which the application might contain\(^10\).
---
\(^5\) Section 36 would be inapplicable because the time limit for the filing of the declaration of use is specifically stipulated in the Act.
\(^6\) Section 36 would apply because there is no statutory provision regarding an obligation with respect to the final fee for the issuance.
\(^7\) The registrar’s notice reads as follows:
Si votre déclaration ne nous parvient pas dans les délais prévus, le bureau procédera à l’enregistrement en fonction de(s) l’autre (autres) revendication(s). [Translation: If your declaration does not reach us within the prescribed time limits, the Office will proceed with the registration on the basis of the other claim(s).]
\(^8\) Which notice reads:
**ANNULATION D’UNE REVENDICATION**

*Article 16(3)*

La revendication à l’enregistrement selon la disposition de l’article 16(3) de la *Loi sur les marques de commerce* a été annulée parce que le requérant n’a pas produit sa déclaration d’emploi.

Par ailleurs, pour que nous puissions enregistrer la marque de commerce, basée sur la partie déjà utilisée seulement, le requérant doit acquitter le droit d’enregistrement prescrit de deux cent (200,00$) dollars.

Nous vous accordons par la présente une prolongation jusqu’au 24 octobre 2005 pour vous permettre d’acquitter les dits frais d’enregistrement.

De plus, soyez avisé que si aucune réponse n’est donnée avant la date prescrite, cette demande sera traitée comme étant abandonnée selon l’article 36 de la *Loi sur les marques de commerce*.

[Translation: **CANCELLATION OF A CLAIM**. *Subsection 16(3)*. The claim to registration under subsection 16(3) of the *Trade-marks Act* has been cancelled because the applicant did not file its declaration of use. Moreover, in order for us to register the trade-mark, on the basis of the portion already in use only, the applicant must pay the prescribed registration fee of two hundred dollars ($200.00). We hereby grant you an extension until October 24, 2005 to pay the said registration fee. In addition, be advised that if no response is received before the prescribed date, this application will be treated as abandoned under section 36 of the *Trade-marks Act*.]
\(^9\) That is, a trade-mark used in Canada, a trade-mark made known in Canada, or a trade-mark registered in a country which is a member of the Union and used elsewhere in the world.
\(^{10}\) Essentially, pay the fee of $200 for the “registration of a trade-mark, including the issuance without additional fees of the corresponding certificate of registration”, which fee is provided for by item 15 of Part II of the schedule entitled “Tariff of Fees”, referred to in section 12 of the *Trade-marks Regulations*, themselves enacted by virtue of paragraph 65(e) of the *Trade-marks Act*. It must be noted that it is no longer
The filing of a declaration of use covering only part\textsuperscript{11} of the wares or services will cause the registration to issue only for those wares or services\textsuperscript{12}.
**EXTENSION OF TIME LIMIT**
The applicant who cannot file the declaration of use provided for by subsection 40(2) within the time limit provided for by subsection 40(3) can, however, by paying the prescribed fee\textsuperscript{13}, ask for an extension of the time limit in accordance with subsection 47(1)\textsuperscript{14}, which states the following:
**Prorogations**
Si, dans un cas donné, le registraire est convaincu que les circonstances justifient une prolongation du délai fixé par la présente loi ou prescrit par les règlements pour l’accomplissement d’un acte, il peut, sauf disposition contraire de la présente loi, prolonger le délai après l’avis aux autres personnes et selon les termes qu’il lui est loisible d’ordonner. [Les italiques sont nôtres.]
**Extensions of time**
If, in any case, the Registrar is satisfied that the circumstances justify an extension of the time fixed by the Act or prescribed by the regulations for the doing of any act, he may, except as in this Act otherwise provided, extend the time after such notice to other persons and on such terms as he may direct. [Our emphasis.]
It must be pointed out that the powers granted to the registrar in accordance with section 47 of the Act are of a discretionary and administrative nature\textsuperscript{15}.
\textsuperscript{11} At least with respect to the wares or services covered by the proposed use basis.
\textsuperscript{12} In Canada, an application could be partially assigned but cannot be divided.
\textsuperscript{13} The prescribed fee for an application for an extension of time, by virtue of subsection 47(1) or (2) of the Act, for the doing of any one or more acts, is 125$ for each act: item 9 of part I of the «Tariff of Fees».
\textsuperscript{14} Subsection 47(2) applies when an extension is applied for after the expiration of the time fixed for the doing of an act. In the case of a declaration of use, or of an application to extend the time limit for filing it, which is not filed within the prescribed time limit, the registrar can accede to a “retroactive” application to extend the time limit, provided that the registrar is satisfied that the omission was not reasonably avoidable, that the prescribed fee is paid [only one fee is paid, not one fee for the application under subsection 47(2) and another under subsection 47(1)] and that the failure has not already been noted and the application deemed to be abandoned by virtue of subsection 40(3). Regarding the application of subsection 47(2), see \textit{Fjord Pacific Marine Industries Ltd. v. Canada (Registrar of Trade-marks)} (1975), 20 C.P.R. (2d) 108 (F.C.T.D.), Mahoney J., at page 112; \textit{Rust-Oleum Corporation v. Canada (Registrar of Trade-marks)} (1986), 8 C.I.P.R. 1 (F.C.T.D.), Teitelbaum J., at page 5; \textit{Rust-Oleum Corporation v. Canada (Registrar of Trade-marks)} (1986), 8 C.I.P.R. 213 (F.C.T.D.), Martin J., at page 216.
\textsuperscript{15} See, among others, A. \textit{Lassonde Inc. v. Canada (Registrar of Trade-marks)} (2003), [2003] 4 F.C. 618 (F.C.) Lemieux J., at paragraph 40; \textit{Kitchen Craft Connection Ltd. v. Canada (Registrar of Trade-marks)} (1991), 48 F.T.R. 85 (F.C.T.D.), Dubé J., at page 87; \textit{Centennial Packers Ltd. v. Canada Packers Inc.} (1987), 15 C.P.R. (3d) 103 (F.C.T.D.), Joyal J., at page 14, \textit{Canadian Schenley}.
DURING THE FIRST THREE YEARS
The registrar must be satisfied that the circumstances justify the extension of time. Thus, a vague statement such as “We have been informed that the applicant cannot file the declaration of use provided for by subsection 40(2) of the Act because the trade-mark is not yet fully used in Canada with respect to all the wares and services mentioned in the application for registration” is insufficient. The registrar has revised/standardised his filing policy and one must, since April 14, 1998, provide a true reason for the extension\(^{16}\) and not simply state that the mark is not yet used and that more time is needed to commercialise it in Canada\(^{17}\). During the first three (3) years that follow the initial time limit set in the notice of allowance, this reason does not need to be “significant and substantive”; however, one must nonetheless be provided\(^{18}\). This is the meaning that must be given to the administrative directive, which reads as follows\(^{19}\):
1. **Prolongation de 6 mois**
Le Bureau octroie présentement des prolongations de délai de six mois lorsque le délai pour déposer la déclaration d’emploi est expiré si la requête est justifiée et que le droit prescrit de 50$ est acquitté.
1. **Six month extension**
The Office currently grants extensions of time of six months upon the expiration of the time limit to file a declaration of use if the request is justified and the prescribed fee of $50.00 is paid.
---
\(^{16}\) The registrar’s communications bear this note: Nous vous rappelons qu’en vertu de l’article 47(1) de la loi, le registraire doit être convaincu que les circonstances justifient l’octroi d’une prolongation du délai fixé par la présente loi. [Translation: We remind you that, under subsection 47(1) of the Act, the registrar must be satisfied that the circumstances justify the granting of an extension of the time fixed by this Act.]
\(^{17}\) By analogy, see *In re Comdial Corp.* (1993), 32 U.S.P.Q. (2d) 1863 (U.S.P.T.O – Comm.), at page 1864: “Since petitioner’s extension request merely set forth a statement that it had made ongoing efforts but did not specify any type(s) of ongoing efforts that were actually being made, the extension request did not include a showing of good cause, and it was properly denied”; see also *In re Sparc International Inc.* (1993), 33 U.S.P.Q. (2d) 1479 (U.S.P.T.O.-Comm.), at page 1480.
\(^{18}\) Here, one must beware of too much imagination/creativity, because what is written remains and could come back to haunt the applicant if, in order to obtain an extension, the truth was twisted or an excuse simply invented. It is preferable to ask the client/correspondent to indicate what the reasons for non-commercialisation are, even if this means suggesting some possible reasons.
\(^{19}\) Published in the editions of July 8, 1998, July 15, 1998, and July 22, 1998, of the Trade-Marks Journal, vol. 45, nos 2280, 2281 and 2282. This notice recapitulates in part the one which was published in the editions of January 28, 1998, February 4, 1998, February 11, 1998, and February 18, 1998, in the Trade-Marks Journal, vol. 45, nos 2257, 2258, 2259 and 2260.
À compter du 14 avril 1998, à la première demande de prolongation de délai et les suivantes, s'il n'y a pas de raisons données, le Bureau refusera la prolongation de délai et donnera au requérant deux mois additionnels pour répondre.
As of April 14, 1998, on the first request for an extension of time or any subsequent one, if a reason is not provided that would justify the extension of time, the office will refuse the extension of time and allow the applicant two months to further respond.
At this stage of the application, the Trade-marks Office does not have any established policy regarding the nature of the request but will most likely accept any good reason, whether simple or general, without questioning its existence and without any inquiry into documentary evidence\textsuperscript{20}. Which reasons justify an extension of the time limit? The Act, the rules and the directives are silent. However, it is interesting to note that, in the United States, the USPTO indicates, with respect to the commercialisation efforts that must be alleged by an applicant who finds himself in a similar situation\textsuperscript{21}, that the following reasons, in a non-exhaustive manner, will be recognised\textsuperscript{22}:
Those efforts may include, without limitation, product or service research or development, market research, manufacturing activities, promotional activities, steps to acquire distributors, steps to obtain required governmental approval, or other similar activities. In the alternative, a satisfactory explanation for the failure to make such efforts must be submitted\textsuperscript{23}.
The severing of a distribution/licensing contract, the bankruptcy of a distributor, a large number of wares or services, a specialised market or high-end products,
\textsuperscript{20} Telephone conversation, on 1998-09-24, between Laurent Carrière and the Administrator of the declarations and registrations section, Lise Audette. The purpose of this telephone conversation was only to clarify the policy of the Trade-Marks Office, which can be modified without prior notice.
\textsuperscript{21} The clerks of the declarations section do read the files: should one attempt to automatically/routinely reuse the same reason during these first three (3) years, questions could be put to the applicant, usually in the form of a paragraph included in the notice granting the extension, warning that a subsequent application could be refused if the circumstances are not well supported.
\textsuperscript{22} In the United States, an application which is based on the intention to use (or “ITU”) must also be followed by the filing of a statement of use [see paragraph 1051(d) of the American Act and section 2.88 of its Regulation; the question regarding the extension of time limits is strictly regulated: see section 2.89 of the Regulation with respect to what must be included in the application], or else the application is deemed to be abandoned. The first extension of six (6) months can be obtained quasi-automatically, but the four others (also of six (6) months) must be justified according to the rules. If the statement of use is not filed within thirty-six (36) months of the notice of allowance, the application is deemed to be abandoned, without any further possibility of extension. On this subject, see generally Thomas McCarthy, \textit{McCarthy on Trademarks and Unfair Competition}, 4\textsuperscript{th} ed. (New York, Thomson/West, 1996), at §19:25, updated 6/2004.
\textsuperscript{23} During this telephone conversation, on 1998-09-24, between Laurent Carrière and Lise Audette, Ms Audette informally indicated that any of the reasons mentioned in the TMEP’s list would be acceptable for CIPO, at least during the initial period of three (3) years.
\textsuperscript{24} \textit{Trademark Manual of Examining Procedure}, 4\textsuperscript{th} ed., Rev April 2005 (Washington, Patent and Trademark Office, 1997), at §1108. Also available at http://tess2.uspto.gov/tmdb/tmep/1100.htm#_T1108.
a change in a given sector of the economy, legal procedures or the delay caused by an opposition, the assignment of the mark in question or a corporate reorganisation should also be sufficient reasons for the granting of an extension of the time limit during the course of the first three (3) years\textsuperscript{25}.
**STANDARD TIME LIMIT EXTENSION**
The standard extension of time is six (6) months. However, when the use of the mark is delayed because the applicant is awaiting a government approval\textsuperscript{26}, the extension of time will then be twelve (12) months\textsuperscript{27}. This is the meaning that must be given to the second part of the practice notice\textsuperscript{28}, which reads as follows:
2. **Prolongation de 1 an**

Le Bureau octroie présentement des prolongations de délai d’un an lorsqu’un requérant est en attente d’une approbation gouvernementale avant que la marque soit en utilisation.

À compter du 14 avril 1998, à la première demande de prolongation de délai et les suivantes, si la demande est en attente d’une approbation gouvernementale, le Bureau demandera de spécifier le ministère dont le requérant attend l’approbation.

2. **One year extension**

The Office currently grants extensions of time of one year where the request for an extension of time is based on a situation in which the applicant requires a type of government approval before use of the trade-mark can begin.

As of April 14, 1998, on the first request and on any subsequent one, if the request is based on awaiting approval from a government department, the Office will require specifics of the government department from which the applicant is seeking approval.
The Trade-marks Office will thus require that the name of the government department in question be indicated\textsuperscript{29} and does not require a copy of documentary evidence\textsuperscript{30}.
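The two extension durations described above reduce to a one-line rule. A minimal sketch (the function name `extension_months` is ours, not an official term), assuming a pending government approval is the only trigger for the longer period:

```python
def extension_months(awaiting_government_approval: bool) -> int:
    # Standard extension: 6 months; 12 months while a required
    # government approval is pending (practice notice, part 2).
    return 12 if awaiting_government_approval else 6

print(extension_months(False))  # 6
print(extension_months(True))   # 12
```

The prescribed fee of $125 is the same per request in either case (see footnote 27).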
---
\textsuperscript{25} In \textit{In re Alco Industries Inc.} (1995) 34 U.S.P.Q. (2d) 1799 (U.S.P.T.O. – Comm.) it was, with respect to the mark “QUALITY PRODUCTS FROM PEOPLE WHO CARE” successfully argued, in order to obtain an extension, that “(1) the mark is long, so that it fits on packaging for goods better than on goods themselves; (2) the goods are presently in packages which do not include the mark; (3) it is expensive to redesign the package to include the mark; and (4) when new packaging is developed for the goods, applicant intends to include the mark in the new packaging”.
\textsuperscript{26} It can be an approval by the Federal Government, by a Provincial Government or even by a Municipal Government [confirmation via telephone on 1998-09-24 by the assistant director, Linda Powers].
\textsuperscript{27} The fee remains unchanged: $125 per request for an extension of time, whether it is for a period of six (6) or twelve (12) months.
\textsuperscript{28} Published in the editions of July 8, 1998, July 15, 1998 and July 22, 1998, of the Trade-Marks Journal, vol. 45, nos 2280, 2281 and 2282. This practice notice recapitulated in part the one which was published in the editions of January 28, 1998, February 4, 1998, February 11, 1998, and February 18, 1998, in the Trade-Marks Journal, vol. 45, nos 2257, 2258, 2259 and 2260.
\textsuperscript{29} For the moment, it is not required to include the particular department of the ministry. However, it would be wise to include it if it is available.
This aspect of the policy targets all required government approvals\textsuperscript{31}, even if they do not cover all the wares or all the services mentioned in the application\textsuperscript{32}: if at least one ware or service is covered, the extension of twelve (12) months applies. Furthermore, contrary to popular opinion, these government approvals are not limited to pharmaceutical or parapharmaceutical products: every required government approval is targeted\textsuperscript{33}.
**AFTER THREE YEARS**
Things get more complicated in the case of a request for an extension of time that extends beyond the period of three (3) years from the original deadline to file a declaration of use. The Trade-marks Office will request significant and substantive reasons. This is prescribed by part three of the administrative directives\textsuperscript{34}, which states the following:
\textsuperscript{30} However, the Office could, in case of doubt, request further explanations or corroborating documents. It is important to remember that, by virtue of section 29 of the Act, everything that is filed with the registrar is public: thus, one should not hesitate to conceal confidential information while indicating the reason behind the concealment.
\textsuperscript{31} The Practice Notice does not specify that it must be an approval by a Canadian government. In fact, the Trade-Marks Office can accept a request for an approval made to any government authority, even a non Canadian one (for example, the American FDA): in this case, the extension of twelve (12) months applies as well.
\textsuperscript{32} Shrewd individuals had already understood this and, mainly in the case of “blockade” registrations, a pharmaceutical product would be added in order to “extend” the delays at a lower cost. The consequences of such behaviour on the application, notably in regard to the intent to use the trade-mark \textit{in toto}, remain to be decided.
\textsuperscript{33} For example, approval by virtue of banking legislation, a permit to conduct business in another province, zoning regulations, etc. See also, for instance, the Pest Control Products Regulations (SOR/2006-124), the Medical Devices Regulations (SOR/98-282) or the Motor Vehicle Tire Safety Regulations, 1995 (SOR/95-148).
\textsuperscript{34} Published in the editions of July 8, 1998, July 15, 1998, and July 22, 1998, of the Trade-Marks Journal, vol. 45, nos 2280, 2281 and 2282. This Practice Notice recapitulated in part the one which was published in the editions of January 28, 1998, February 4, 1998, February 11, 1998, and February 18, 1998, in the Trade-Marks Journal, vol. 45, nos 2257, 2258, 2259 and 2260, which read as follows:
À compter d’aujourd’hui, à l’expiration du délai de trois ans à compter de la date indiquée dans l’avis d’admission pour soumettre une déclaration d’emploi, le Bureau exigera des raisons considérables et substantielles pour justifier l’octroi d’une autre prorogation de délai ainsi que les détails spécifiques empêchant le dépôt de la déclaration d’emploi. Le droit prescrit de 50$ doit être acquitté pour chaque demande. [Les italiques sont nôtres.]
Effective immediately, upon the expiration of three years from the initial deadline to file a Declaration of Use provided in the Notice of allowance, the office will require \textit{significant and substantive reasons} which clearly justify a further extension of time and which set out in detail the reason(s) \textit{why it is not yet} possible to file a Declaration of Use. The prescribed fee of $50.00 is required for each request. [Our emphasis.]
[The prescribed fee is now $125.00.]
3. **Raisons considérables et substantielles**

La pratique administrative publiée dans le journal des marques de commerce le 28 janvier 1998, 4 février 1998, 11 février 1998 et 18 février 1998, énonçait que le Bureau exigerait des raisons *considérables et substantielles* pour justifier l’octroi d’une autre prolongation de délai après le délai de trois ans à compter de la date indiquée dans l’avis d’admission pour soumettre une déclaration d’emploi. L’évaluation afin de déterminer si une raison est considérable et substantielle, sera faite sur une base individuelle par le Gestionnaire de la section des déclarations et des enregistrements et la Directrice adjointe, Direction des marques de commerce. [Les italiques sont nôtres.]

3. **Significant and substantive reasons**

The administrative practice published in the Trade-marks Journal on January 28, 1998, February 4, 1998, February 11, 1998 and February 18, 1998 stated that the Office would require *significant and substantive* reasons to justify the granting of a further extension of time after the expiration of three years from the date indicated in the notice of allowance for filing a declaration of use. The assessment of whether a reason is significant and substantive will be made on an individual basis by the Manager of the Declarations and Registrations Section and the Assistant Director, Trade-marks Branch. [Our emphasis.]
This part of the directive, which requires “significant and substantive reasons”\(^{35}\) or, if we prefer, “considerable and substantive reasons”\(^{36}\), targets requests for an extension of time that extend beyond the period of three (3) years from the original deadline to file a declaration of use. Again, the justified extensions will be granted for periods of six (6) or twelve (12) months\(^{37}\), according to the situation.
---
\(^{35}\) According to the 1994 revised 3rd edition of the *Collins English Dictionary*, “significant” means “1. Having or expressing a meaning; indicative. 3. Important, notable, or momentous”, whereas “substantive” means “2. Of, relating to, containing, or being the essential element of a thing”. [The 1999 edition provides for the same definitions.] According to the 1983 edition of the *Gage Canadian Dictionary*, “significant” means “1. Full of meaning; important; of consequence”, whereas “substantive” means “3. Real, actual; 4. Having a firm and solid basis”. The 1996 edition of *The Oxford English Reference Dictionary* defines “significant” as “1. having a meaning; indicative; 3. Noteworthy; important; consequential” and “substantial” as “of real importance, value, or validity”. Finally, the 2004 edition of the *Canadian Oxford Dictionary* defines «significant» as «1. of great importance or consequence; 2. having or conveying an unstated meaning; having information that can be gathered; 3. noteworthy, noticeable».
\(^{36}\) According to the 1996 edition of *Le nouveau petit Robert*, «considérable» means «1. VIEILLI Qui attire la considération [i.e., motif que l’on a pour agir] à cause de son importance, de sa valeur, de sa qualité; 2. Très important (grandeur, quantité)», whereas «substantiel» means «1. Essentiel; 2. Qui appartient à la substance [i.e., ce qui constitue la chose], à l’essence, à la chose en soi; 3. Qui nourrit beaucoup; 4. Riche en substance par son contenu; 5. Important.». [The 2007 edition provides for the same definitions.] According to the 3rd 1997 edition of the *Multi dictionnaire de la langue française*, «considérable» means «Important par le nombre, le prix, la force; syn. Énorme; grand; immense», whereas «substantiel» means «1. Nutritif; 2. Dont le contenu est étoffé, riche; 3. Important.». [The 4th 2003 edition provides for the same definitions.] Finally, the 1998 edition of *Le petit Larousse illustré* defines «considérable» as «Dont l’importance est grande; notable» and «substantiel» as «1. Nourrissant. 2. Important, considérable. 3. Essentiel, capital. 4. Relatif à la substance [i.e., matière dont qch est formé]». [The 2007 edition provides for the same definition.]
\(^{37}\) We can presume that, in the case of pharmaceutical trade-marks, the long waiting period for the granting of a government approval should be considered as a “significant and substantive reason”. However, the obtaining, from a municipality, of a simple permit which allows one to do business would not be considered as a “significant and substantive reason”.
It must now be determined what constitutes a “significant and substantive” reason. Even though the Trade-marks Office intends, at least for the time being, to treat the issue on a case-by-case basis\textsuperscript{38}, we can already presume that what would be considered a special circumstance excusing non-use in expungement proceedings under section 45 of the Act should also be accepted by the Trade-marks Office\textsuperscript{39}. However, these reasons should go beyond normal commercialisation efforts\textsuperscript{40} (such as the search for business partners)\textsuperscript{41}.
In closing, it should be noted that, when the Trade-marks Office is not satisfied with the given reasons in support of a request for an extension of time, a notice will be given\textsuperscript{42} which will refuse the extension but will grant the applicant the possibility to
\begin{footnotesize}
\begin{itemize}
\item[\textsuperscript{38}] At least at the beginning of the coming into effect of this policy, in order to ensure consistency (which is often found to be missing) among the examiners.
\item[\textsuperscript{39}] Without any doubt, the nature of the products and services and the care given to the presentation of the request for an extension of time can also help. This does not mean that, to be considered as “significant and substantive”, the reason must be extraordinary, or result from force majeure or a galactic invasion. Thus, in the TMO 742237 request, the following has been accepted as a “significant and substantive” reason: “Responsive to your July 10, 1998 letter, please note that the above-mentioned application is still under the process of being transferred to the company LUXINDEX SRT, S.L.. Unfortunately, this assignment has not yet been recorded in view of problems surrounding the due execution of the documents before the competent Spanish authorities. We are informed that the nature of these problems are confidential but should be resolved within the next six months hence the present request for an extension of time of six months”.
\item[\textsuperscript{40}] For example, the need for new tests, in order to upgrade a product, could be a reason which could justify the extension of time during the first period of three (3) years, but not later on. However, if this further upgrade, during the following period, is deemed to be necessary because of a change in the scientific or governmental norms [briefly explained and not simply alleged], this could, without any doubt, be considered as a “significant and substantive” reason.
\item[\textsuperscript{41}] One can also foresee situations where the same reason which justified the extension during the first period of three (3) years would also be considered as a “significant and substantive” reason. For example, oenology, as we all know, helps the tracking of pesticides, fungicides and herbicides in wine; this being said, the approval of molecules developed by the phytosanitary industry, for this purpose, takes eight (8) years...
\item[\textsuperscript{42}] In order to illustrate, this is what a notice issued on August 3, 1998, would contain, in part:
\end{itemize}
\end{footnotesize}
present better reasons within two (2) months of the notice\textsuperscript{43}. If they are accepted, the extension of time will be six (6) months (or twelve (12) months, if the case arises) from the filing of the improved request\textsuperscript{44}. This policy applies whether the request for an extension of time is made within the first period of three (3) years or during the following period.
**SUMMARY**
Let us summarize.
- Before the expiration of the initial time limit, i.e. the later of six (6) months from the notice of allowance and three (3) years from the filing of the application: no explanation/reason/justification needs to be provided;
- Within the three (3) years that follow the expiration of that initial time limit: circumstances which convince the registrar to grant an extension must be provided;
- After those three (3) years: the reasons must be significant and substantive.
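The three regimes above can be restated as a small decision rule. This is a sketch only (the function name `required_justification` is ours); the boundary treatment matches the small chart in the next section, where the exact three-year mark already requires significant and substantive reasons:

```python
from datetime import date

def required_justification(request_date: date, initial_deadline: date) -> str:
    # Regime 1: up to the initial deadline (later of 6 months after the
    # notice of allowance and 3 years after filing), nothing is required.
    # Regime 2: within 3 years of that deadline, reasons justifying the
    # request. Regime 3: afterwards, significant and substantive reasons.
    three_years_on = date(initial_deadline.year + 3,
                          initial_deadline.month, initial_deadline.day)
    if request_date <= initial_deadline:
        return "none"
    if request_date < three_years_on:
        return "reasons justifying the request"
    return "significant and substantive reasons"

# Initial deadline 2008-01-01, as in the article's example:
print(required_justification(date(2010, 7, 1), date(2008, 1, 1)))
# -> reasons justifying the request
print(required_justification(date(2011, 1, 1), date(2008, 1, 1)))
# -> significant and substantive reasons
```

The returned strings simply label the tier of justification; nothing here captures the registrar's discretion under section 47.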
---
\textsuperscript{43} In this case, it is not necessary to pay another fee of $125.00: The Trade-marks Office considers the improved request as being a part of the initial one and will thus reimburse the fee which will accompany the improved request. There will be no reimbursement if no improved request is filed.
\textsuperscript{44} For those who wish to extend the delays to their maximum, this would result in an extension of eight (8) or fourteen (14) months, depending on the circumstances.
SMALL CHART
Let us use the example of a trade-mark which has been allowed for registration on 2007-07-01 and for which the initial time limit to file a declaration of use was set for 2008-01-01. The following chart\(^{45}\) illustrates, step by step, the type of reasons that must be brought forward:
| Request for extension | Date of the request | Reasons to be brought forward |
|---|---|---|
| 1\(^{st}\) | 2008-01-01 | Reasons justifying the request |
| 2\(^{nd}\) | 2008-07-01 | Reasons justifying the request |
| 3\(^{rd}\) | 2009-01-01 | Reasons justifying the request |
| 4\(^{th}\) | 2009-07-01 | Reasons justifying the request |
| 5\(^{th}\) | 2010-01-01 | Reasons justifying the request |
| 6\(^{th}\) | 2010-07-01 | Reasons justifying the request |
| 7\(^{th}\) | 2011-01-01 | Significant and substantive reasons |
| 8\(^{th}\) | 2011-07-01 | Significant and substantive reasons |
| The others | | Significant and substantive reasons |

Requests 1\(^{st}\) to 6\(^{th}\) fall within the initial 3 years of the time limit set in the notice of allowance; the 7\(^{th}\) and subsequent requests fall after that 3 year time limit.
---
\(^{45}\) The US situation may be summarized by the following chart. In any case, the maximum period will be 36 months from the allowance to registration (and not from the expiration of the initial statutory 6 months) to file the statement of use (SOU).
| Initial 6-Month ITU Application Period | Automatic 6-Month Extension Period | Subsequent 6-Month Extension Period | Subsequent 6-Month Extension Period | Subsequent 6-Month Extension Period | Subsequent 6-Month Extension Period |
|---|---|---|---|---|---|
| PTO sends a Notice of Allowance | Must make request within the original six month term; include a verification of continued intention to use the mark in commerce, and must remit fee. | Must show good cause (as defined by the Commissioner) and meet the same requirements as in the initial period | Must show good cause (as defined by the Commissioner) and meet the same requirements as in the initial period | Must show good cause (as defined by the Commissioner) and meet the same requirements as in the initial period | Must show good cause (as defined by the Commissioner) and meet the same requirements as in the initial period |
Fig. 19:25C Chart of Periods of Extension of Time to file SOU, McCarthy On Trademarks.
PRACTICE NOTICE
Published in the issues of July 8, 1998, July 15, 1998 and July 22, 1998 of the Trademarks Journal, vol. 45, nos. 2280, 2281 and 2282.
1. Six-month extension
The Office currently grants extensions of time of six months upon the expiration of the time limit to file a declaration of use if the request is justified and the prescribed fee of $50.00 is paid.
As of April 14, 1998, on the first request for an extension of time or any subsequent one, if a reason is not provided that would justify the extension of time, the Office will refuse the extension of time and allow the applicant two months to further respond.
2. One-year extension
The Office currently grants extensions of time of one year where the request for an extension of time is based on a situation in which the applicant requires a type of government approval before use of the trademark can begin.
As of April 14, 1998, on the first request and on any subsequent one, if the request is based on awaiting approval from a government department, the Office will require specifics of the government department from which the applicant is seeking approval.
3. Significant and substantive reasons
The Office Practice Notice published in the Trademarks Journal on January 28, 1998, February 4, 1998, February 11, 1998 and February 18, 1998, stated that the Office will require significant and substantive reasons to support a request for an extension of time that extends beyond the period of 3 years from the original deadline to file a declaration of use. The determination of whether a reason is significant and substantive will be decided on an individual basis by the Manager of the Declaration and Registration Section and the Assistant Director, Trade-marks Branch. [Our italics.]
LEGER ROBIC RICHARD, L.L.P.
1001 Square-Victoria - Bloc E - 8th floor
Montreal (Quebec) Canada H2Z 2B7
Tel.: (514) 987-6242 Fax: (514) 845-7874
www.robic.ca email@example.com
Optimal control of stacked multi-kite systems for utility-scale airborne wind energy
Jochem De Schutter\textsuperscript{1}, Rachel Leuthold\textsuperscript{1}, Thilo Bronnenmeyer\textsuperscript{2}, Reinhart Paelinck\textsuperscript{2}, Moritz Diehl\textsuperscript{1}
Abstract—Within the prevailing single-kite paradigm, the current roadmap towards utility-scale airborne wind energy (AWE) involves building ever larger aircraft. Consequently, utility-scale AWE systems increasingly suffer from similar upscaling drawbacks as conventional wind turbines. In this paper, an alternative upscaling strategy based on stacked multi-kite systems is proposed. Although multi-kite systems are well-known in the literature, the consideration of stacked configurations extends the design space even further and could allow for significantly smaller aircraft, and therefore possibly to cheaper, mass-producible utility-scale AWE systems. To assess the potential of the stacking concept, optimal control is applied to optimize both system design and flight trajectories for a range of configurations, at two different industry-relevant wind sites. The results show that the modular stacking concept effectively decouples aircraft wing sizing considerations from the total power output demand. An efficiency increase of up to 20% is reported when the harvesting area for the same amount of aircraft is doubled using a stacked configuration. Moreover, it is shown that stacked configurations can more than halve the peak power overshoot within one power cycle with respect to conventional single-kite systems.
I. INTRODUCTION
Airborne wind energy (AWE) is an emerging renewable energy technology that aims to harvest high-altitude wind power at a fraction of the cost and material of conventional wind turbines. It is based on the concept, first investigated in [11], of a tethered aircraft - a “kite” - flying fast crosswind manoeuvres at high altitudes, thereby tapping into strong and steady winds that conventional wind turbines cannot reach.
In order to compete in the utility market, AWE systems in the MW range need to be developed and made economically viable. The upscaling strategy currently dominating in industry is based on single-kite systems and entails increasing aircraft size until the desired power output is reached. The largest existing single-kite system known to date is a 600 kW system with 30 m wing span [12]. Recently, leading companies in the field have presented plans to develop 4 MW and 5 MW systems with 50 m [7] and 65 m wing span respectively [12].
This upscaling strategy intrinsically involves many of the disadvantages associated with large-scale conventional wind turbines: expensive system production, transport, maintenance, repair, intricate structural mechanics, etc. These drawbacks limit the relative electricity price improvement that can be achieved by an AWE system.
There are however alternative strategies. Multi-kite AWE systems (MAWES), first envisaged in [13], are based on the idea of multiple aircraft flying in crosswind, balancing their forces so as to minimize their shared main tether's lateral motion. Initial studies on dual-kite systems based on optimal control using point-mass models predict an efficiency increase of at least a factor of two for large-scale systems [16], and of a factor of 1.7 when induction effects are taken into account [17].
Single-layer multi-kite systems have a bounded potential however. The steady-state study [9] predicts that multi-kite efficiency decreases with the number of kites per layer. Additionally, safety prohibits many aircraft flying close to each other. Hence the idea of stacking multi-kite systems by extending the main tether, as shown in Fig. 1, thereby naturally increasing the total harvesting area as the number of aircraft in the system grows. A stack-based upscaling concept for AWE based on networked rotary ring kites and tensile rotary power transfer is proposed in [14]. However no simulation data are available on the performance of this system in the MW range.
This paper investigates the performance of utility-scale stacked multi-kite systems relative to single-layer single- and multi-kite systems. The system class under consideration is that of rigid-wing pumping type systems. Optimal control is applied to compute optimal system designs and flight trajectories for different multi-kite configurations, tailored to two industry-relevant wind sites with related power demand. The different configurations are then compared in terms of flight trajectory and aircraft sizing.
Section II states optimization-friendly system dynamics and path constraints. Section III describes the parametric optimal control problem (OCP) used for trajectory optimization and Section IV presents the case study results, while Section V poses some conclusions.
II. SYSTEM MODEL
The system model incorporates 6 DOF aircraft dynamics, tether tension and drag forces, induction effects, wind shear modeling and the atmospheric density drop. It has been presented in detail for single-layer MAWES in [8]. In this section, we summarize this model and augment it with additional assumptions for the multi-layer case as well as with some mass scaling laws.
A. Topology
The general system configuration is represented by a tree-structured topology. The trees are parametrized by nodes \( n \in \mathcal{N} \), where each node represents a tether end-point. The tethers are assumed to be rigid and straight, which is a good assumption if tether tension is high. Some of the nodes \( k \in \mathcal{K} \) have kites attached to them, while some nodes \( l \in \mathcal{L} \) are layer nodes, with \( \mathcal{L} := \mathcal{N} \setminus \mathcal{K} \) if \( |\mathcal{N}| > 1 \) and \( \mathcal{L} := \mathcal{N} \) otherwise. The parent map \( P(n) \) defines the interlinkage between nodes, and the children map \( C(n) \) returns the set of nodes with parent \( n \).
In this paper, we only consider topologies defined by the following restrictions. Kite nodes can only be at the leaves of the tree, i.e. \( P(n) \notin \mathcal{K} \) for all \( n \in \mathcal{N} \). Every layer node can be the parent of at most one other layer node, i.e. \( |C(l) \setminus \mathcal{K}| \leq 1 \) for all \( l \in \mathcal{L} \). And if \( |\mathcal{N}| > 1 \), every layer node has an equal number of kite children, which must be greater than one, i.e. \( |C(l) \setminus \mathcal{L}| = |C(j) \setminus \mathcal{L}| > 1 \) for all \( l, j \in \mathcal{L} \).
Given these restrictions, every possible specific configuration is uniquely determined by the pair \( (|\mathcal{L}|, \frac{|\mathcal{K}|}{|\mathcal{L}|}) \), i.e. by the number of layers and by the number of kites per layer. Fig. 1 illustrates the proposed notation for some typical examples.
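To make these restrictions concrete, the following Python sketch expands a configuration pair into explicit node sets and a parent map. This is purely illustrative code of our own; the function and variable names are assumptions, not taken from the paper's implementation.

```python
def build_topology(n_layers, kites_per_layer):
    # Layer nodes form a chain hanging from the ground attachment (node 0);
    # each layer node carries `kites_per_layer` kite leaves.
    parent = {}             # parent map P(n)
    layers, kites = [], []  # node sets L and K
    node = 0                # node 0 models the ground station (not in N)
    prev_layer = 0
    for _ in range(n_layers):
        node += 1
        parent[node] = prev_layer    # layer node hangs off the previous layer
        layer_node = node
        layers.append(layer_node)
        prev_layer = layer_node
        for _ in range(kites_per_layer):
            node += 1
            parent[node] = layer_node  # kite nodes are leaves of their layer
            kites.append(node)
    return parent, layers, kites
```

For the \( (3, 2) \) configuration this produces three chained layer nodes, each with two kite leaves, so that \( |\mathcal{L}| = 3 \) and \( |\mathcal{K}| = 6 \).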
B. System dynamics
The dynamic equations of the system are established using an optimization-friendly modeling procedure based on non-minimal coordinates [4]. For given sets \( \mathcal{N}, \mathcal{K} \) and parent map \( P \), the complete system dynamics are summarized by the parametric index-1 DAE
\[
\mathbf{F}(\dot{\mathbf{x}}, \mathbf{x}, \mathbf{u}, \mathbf{z}, \boldsymbol{\theta}, \mathbf{p}) = 0,
\]
with associated consistency conditions \( \mathbf{C}(\mathbf{x}) = 0 \).
The differential states \( \mathbf{x} := (\mathbf{q}, \dot{\mathbf{q}}, \mathbf{R}, \boldsymbol{\omega}, \boldsymbol{\delta}, l_t, \dot{l}_t, \ddot{l}_t) \) firstly contain \( \mathbf{q} \) and \( \dot{\mathbf{q}} \), the concatenations of the node positions \( \mathbf{q}_n \in \mathbb{R}^3 \) and velocities \( \dot{\mathbf{q}}_n \in \mathbb{R}^3 \) respectively. These are followed by the states specific to kite nodes, namely \( \mathbf{R}, \boldsymbol{\omega}, \boldsymbol{\delta} \), which are concatenations of all \( \mathbf{R}_k, \boldsymbol{\omega}_k, \boldsymbol{\delta}_k \). Aircraft orientation is represented by direction cosine matrices \( \mathbf{R}_k := [\mathbf{e}_{1,k}, \mathbf{e}_{2,k}, \mathbf{e}_{3,k}] \in \mathbb{R}^{3 \times 3} \) that contain the chord-wise, span-wise and upwards unit vectors of the aircraft body frames, expressed in the inertial frame \( \{\mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_z\} \). All rotation matrices must be orthonormal, i.e. they are constrained to evolve on the 3D manifold defined by
\[
\mathbf{c}_{\mathbf{R},k} := P_{\text{ut}}(\mathbf{R}_k^\top \mathbf{R}_k - I) = 0,
\]
where the operator \( P_{\text{ut}} \) is used to select the upper triangular elements of a matrix. The aircraft angular velocities \( \mathbf{\omega}_k \in \mathbb{R}^3 \) are given in the body frame. The surface deflections \( \mathbf{\delta}_k = [\delta_{a,k}, \delta_{e,k}, \delta_{r,k}] \in \mathbb{R}^3 \) of aileron, elevator and rudder respectively, give control over the aircraft aerodynamics. Finally, tether length \( l_t \in \mathbb{R} \), speed \( \dot{l}_t \in \mathbb{R} \) and acceleration \( \ddot{l}_t \in \mathbb{R} \) describe the main tether reel-in and -out evolution.
The controls \( \mathbf{u} := (\dot{\boldsymbol{\delta}}, \dddot{l}_t) \) are given by the concatenation of all aircraft surface deflection rates \( \dot{\boldsymbol{\delta}}_k \in \mathbb{R}^3 \) and by the tether jerk \( \dddot{l}_t \in \mathbb{R} \).
The algebraic variables \( \mathbf{z} := (\boldsymbol{\lambda}, \mathbf{a}) \) firstly describe the concatenation of all Lagrange multipliers \( \lambda_n \in \mathbb{R} \) related to the tether constraints that restrict the position of each node \( n \in \mathcal{N} \) to evolve on a 2D manifold defined by
\[
c_n := \frac{1}{2} \left( (\mathbf{q}_n - \mathbf{q}_{P(n)})^\top (\mathbf{q}_n - \mathbf{q}_{P(n)}) - l_n^2 \right) = 0,
\]
where \( \mathbf{q}_0 \) lies at the origin, and where \( l_n \) is the tether length associated with node \( n \). The first tether length is that of the main tether, i.e. \( l_1 := l_t \). If \( |\mathcal{N}| > 1 \), the lengths of the secondary tethers are \( l_k := l_S \) for \( k \in \mathcal{K} \) and the lengths of the layer-linking tethers are \( l_l := l_I \) for \( l \in \mathcal{L} \setminus \{1\} \). The variables \( a_l \), one for each layer \( l \in \mathcal{L} \), are concatenated in the vector \( \mathbf{a} \). They represent the instantaneous induction factors, further described in section II-D.
The variables \( \boldsymbol{\theta} := (\mathbf{l}, \mathbf{d}) \) contain concatenations of the key system parameters that can be optimized over. The vector \( \mathbf{l} := l_S \) if \( |\mathcal{N}| > 1 \) and empty otherwise. The vector \( \mathbf{d} \) contains the main tether diameter \( d_T \), the secondary tether diameters \( d_S \) if \( |\mathcal{N}| > 1 \) and the intermediate tether diameter \( d_I \) if \( |\mathcal{L}| > 1 \).
The constant parameters \( \mathbf{p} := (b, u_{\text{ref}}, z_{\text{ref}}, z_0) \), allow the dynamics to be evaluated for different aircraft wing spans \( b \) and different wind profile parameters \( u_{\text{ref}}, z_{\text{ref}}, z_0 \), as further described below.
C. Lagrangian dynamics
In accordance with the Lagrangian approach proposed in [4], the system Lagrangian can be defined as
\[
L := \sum_{n \in \mathcal{N}} (T_n - V_n - \lambda_n c_n),
\]
with for each node \( n \in \mathcal{N} \) the kinetic energy \( T_n \) and potential energy \( V_n \) given by
\[
T_n := \frac{1}{2} m_n \dot{\mathbf{q}}_n^\top \dot{\mathbf{q}}_n + \frac{1}{2} \mathbf{\omega}_n^\top J_K \mathbf{\omega}_n \delta_{(n \in \mathcal{K})}
\]
\[
V_n := m_n g \mathbf{q}_n^\top \mathbf{e}_z.
\]
Here, the mass of each node is defined as
\[
m_n := m_K \delta_{(n \in \mathcal{K})} + \frac{1}{2} \rho_t \frac{\pi}{4} \left( d_n^2 l_n + \sum_{c \in C(n)} d_c^2 l_c \right).
\]
Note that each node carries half of the weight of each tether that is linked to it. Here, \( m_K \) and \( J_K \) are the aircraft mass and moment of inertia respectively. The variable \( \delta_{(n \in K)} \) equals 1 for kite nodes and 0 for tether nodes. Finally, \( \rho_t \) is the tether material density.
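The lumped node mass above can be sketched directly in code. The helper below is our own illustration of the formula; the default tether density is a typical HMPE value, which we assume here and which is not taken from the paper.

```python
import math

RHO_T = 970.0  # tether material density [kg/m^3]; typical HMPE value (assumption)

def node_mass(is_kite, m_K, d_up, l_up, child_tethers, rho_t=RHO_T):
    # Mass lumped at one node: aircraft mass for kite nodes, plus half the
    # mass of the tether to the parent and half of each child tether.
    # `child_tethers` is a list of (diameter, length) pairs.
    m = m_K if is_kite else 0.0
    m += 0.5 * rho_t * (math.pi / 4.0) * d_up ** 2 * l_up
    for d_c, l_c in child_tethers:
        m += 0.5 * rho_t * (math.pi / 4.0) * d_c ** 2 * l_c
    return m
```

For a pure tether node the result is exactly half the cylindrical tether mass \( \rho_t \, (\pi d^2 / 4) \, l \) of each attached segment.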
The translational dynamics are readily given by
\[
\frac{d}{dt} \frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = F + F_{\text{mom}}
\]
with \( F \) the concatenation of the external forces \( F_n \) exerted on each of the nodes. The term
\[
F_{\text{mom}} := \sum_{n \in \mathcal{N}} \frac{dm_n}{dt} \dot{q}_n^\top \frac{\partial \dot{q}_n}{\partial \dot{q}}
\]
is a momentum correction that takes into account the fact that over time mass is added to and extracted from the airborne part of the tether due to reel-in and -out.
Following [4] the rotational dynamics can be projected on a 3D manifold so as to read:
\[
J_K \frac{d\omega_k}{dt} + \omega_k \times J_K \omega_k = M_k,
\]
with \( M_k \) the aerodynamic moment exerted on the aircraft.
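For a diagonal inertia tensor, the rotational dynamics above (Euler's equations for a rigid body) can be written out componentwise. The following is a minimal sketch of our own, assuming \( J_K = \mathrm{diag}(J_x, J_y, J_z) \):

```python
def omega_dot(J, omega, M):
    # Body-frame angular acceleration from J*w_dot + w x (J*w) = M,
    # written out for a diagonal inertia tensor J = (Jx, Jy, Jz).
    Jx, Jy, Jz = J
    wx, wy, wz = omega
    Mx, My, Mz = M
    return ((Mx - (Jz - Jy) * wy * wz) / Jx,
            (My - (Jx - Jz) * wz * wx) / Jy,
            (Mz - (Jy - Jx) * wx * wy) / Jz)
```

With zero applied moment, rotation about a principal axis is an equilibrium, while rotation off-axis produces gyroscopic coupling terms.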
An index reduction combined with a Baumgarte stabilization scheme [3] is performed on the holonomic tether constraints so that they are imposed as
\[
\ddot{c}_n + 2\kappa \dot{c}_n + \kappa^2 c_n = 0,
\]
where \( \kappa \) is a tuning parameter.
The same stabilization technique is applied to the rotational kinematics:
\[
\frac{dR_k}{dt} = R_k \left( \frac{\kappa_R}{2} (I - R_k^\top R_k) + \text{skew}(\omega_k) \right)
\]
with \( \kappa_R \) another tuning parameter.
Performing Baumgarte stabilization on the system invariants in the context of periodic optimal control not only ensures that the consistency conditions \( C(x) := (c, \dot{c}, c_R) = 0 \) are satisfied over the entire time interval, but also preserves linear independence of the optimization constraints [5]. The variables \( c \), \( \dot{c} \) and \( c_R \) denote the concatenations of all \( c_n \) and all \( \dot{c}_n \) for \( n \in \mathcal{N} \), and of all \( c_{R,k} \) for \( k \in \mathcal{K} \), respectively.
The trivial kinematics
\[
\frac{d}{dt}(q, \delta, l_t, \dot{l}_t, \ddot{l}_t) = (\dot{q}, \dot{\delta}, \dot{l}_t, \ddot{l}_t, \dddot{l}_t)
\]
together with (8) - (12) then give the system dynamics summarized by (1), save for expressions for the aerodynamic forces \( F_n \) and moments \( M_k \), as well as the algebraic equations that determine the layer induction factors \( a \).
### D. Aerodynamic model
Due to wind shear in the atmospheric boundary layer, the available wind power can grow significantly with increasing altitude. A commonly used model assumes both steady flow conditions and a logarithmic profile for the freestream wind velocity [2]:
\[
u_\infty(z) := u_{\text{ref}} \frac{\log \frac{z}{z_0}}{\log \frac{z_{\text{ref}}}{z_0}} e_x.
\]
In this approximation, \( u_{\text{ref}} \) is the reference wind speed that is measured at an altitude \( z_{\text{ref}} \), whereas \( z_0 \) is the surface roughness length. The atmospheric density drop \( \rho(z) \) is taken into account using the international standard atmosphere model and the parameters found in [2].
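The logarithmic wind profile is straightforward to evaluate. The sketch below uses illustrative placeholder values for \( u_{\text{ref}} \), \( z_{\text{ref}} \) and \( z_0 \), not the wind-site data used in the paper:

```python
import math

def u_inf(z, u_ref=10.0, z_ref=100.0, z0=0.1):
    # Logarithmic wind shear profile u_inf(z) in m/s; parameter defaults
    # are placeholder values, not the paper's site parameters.
    return u_ref * math.log(z / z0) / math.log(z_ref / z0)
```

By construction the profile recovers \( u_{\text{ref}} \) at \( z = z_{\text{ref}} \) and grows monotonically with altitude, which is what makes flying higher attractive despite the added tether mass and drag.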
Tether drag forces and kite aerodynamics are modeled in identical fashion as in [8], and the reader is referred to this paper for both the explicit expressions and the values of the aerodynamic coefficients. The tether drag of each segment is computed at the midpoint and distributed equally between the nodes it connects. This is a computationally inexpensive approximation that however underestimates the tether drag at the kite nodes. The aircraft aerodynamics are described by stability derivatives, based on the apparent wind velocity \( u_{a,k} \). The aerodynamic coefficients correspond to those identified for the existing Ampyx AP2 system, which is chosen here as a reference aircraft [10]. It has a mass \( m_{\text{ref}} \) and an inertia tensor \( J_{\text{ref}} \). The equations given in [8] are summarized here as:
\[
F_n := f_{F,n}(x, z, \theta, p),
\]
\[
M_k := f_{M,k}(x, z, \theta, p),
\]
for \( n \in N \) and \( k \in K \).
The actuator-disk method (AD) used to approximate induction effects is based on the one presented in [8]. It assumes steady potential flow that is in equilibrium. Based on conservation of momentum considerations, \( |L| \) algebraic equations are derived:
\[
4a_l \dot{l}_t - C_T \|u_\infty(e_z^\top q_{c,l})\|_2 = 0,
\]
each of them determining an instantaneous induction factor \( a_l \), for \( l \in \mathcal{L} \). Here \( q_{c,l} := \sum_{k \in C(l)} q_k / |C(l)| \) is the arithmetic center of the aircraft in layer \( l \). The thrust coefficient is defined as \( C_T := \sum_{k \in C(l)} n_l^\top F_k / (q_{o,l} A_l) \). The actuator normal vector is given by \( n_l := (q_l - q_{P(l)}) / \|q_l - q_{P(l)}\|_2 \). The dynamic pressure at the actuator center is defined as \( q_{o,l} := \frac{1}{2}\rho(e_z^\top q_{c,l}) \|u_\infty(e_z^\top q_{c,l}) - u_{\text{act},l}\|_2^2 \), with the actuator velocity \( u_{\text{act},l} := \sum_{k \in C(l)} \dot{q}_k / |C(l)| \). The actuator area \( A_l \) is modeled as an annulus, so that \( A_l := 2\pi b \|d_l' - (n_l^\top d_l')n_l\|_2 \), with the average distance w.r.t. the actuator center \( d_l' := \sum_{k \in C(l)} (q_k - q_{c,l}) / |C(l)| \). The apparent wind velocity of kite \( k \) is then given by
\[
u_{a,k} := u_\infty(e_z^\top q_{c,P(k)}) - \dot{q}_k + u_{\text{ind},k},
\]
with \( u_{\text{ind},k} := -a_{P(k)} \|u_\infty(e_z^\top q_{c,P(k)}) - u_{\text{act},P(k)}\|_2 n_{P(k)} \). The wake of each layer is assumed to behave independently. This is done to limit computational complexity, though it is only a good assumption when the distance between the layers is large in comparison to the flight radius. In this study, the length \( l_I \) is assumed to take a constant value of 100 m, which is arguably small. However, increasing this value does not change the system power output significantly.
E. System upscaling
In order to evaluate the system dynamics for different wing spans $b$, the following mass scaling law is assumed to hold when upscaling the reference aircraft:
$$m_K := m_{\text{ref}} \left( \frac{b}{b_{\text{ref}}} \right)^\kappa \quad \text{and} \quad J_K := J_{\text{ref}} \left( \frac{b}{b_{\text{ref}}} \right)^{\kappa+2}. \tag{19}$$
For simplicity, we assume that the wing aspect ratio $AR$ remains constant. The exponent $\kappa = 2.4$ slightly underestimates the upscaling law implicit in the data presented in [7]. The values of the aircraft stability derivatives are kept constant for different aircraft sizes, under the assumption of identical geometry and low Reynolds number variation. This assumption is violated in the results below; the resulting data are therefore not suited for making actual design decisions for real systems.
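The scaling law (19) can be sketched as a one-liner. In the illustration below, the reference span and mass are AP2-like values commonly quoted in the AWE literature, and the diagonal reference inertia entries are placeholders of our own; treat all of these defaults as assumptions.

```python
def upscale(b, b_ref=5.5, m_ref=36.8, J_ref=(25.0, 32.0, 56.0), kappa=2.4):
    # Wing-span scaling laws m_K = m_ref * (b/b_ref)^kappa and
    # J_K = J_ref * (b/b_ref)^(kappa+2). Reference values (AP2-like span
    # and mass, placeholder inertia entries in kg m^2) are assumptions.
    r = b / b_ref
    m_K = m_ref * r ** kappa
    J_K = tuple(Jii * r ** (kappa + 2) for Jii in J_ref)
    return m_K, J_K
```

Doubling the span multiplies the mass by \( 2^{2.4} \approx 5.3 \) and the inertia by \( 2^{4.4} \approx 21 \), which is why many small stacked aircraft can be lighter in total than one large wing of the same harvesting capability.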
F. Constraints
The system path constraints
$$h(\dot{x}, x, u, z, \theta, p) \leq 0, \tag{20}$$
firstly entail that the tether stress of all tethers should be limited below some critical value:
$$\tau_n f_s - s_n \sigma_{\text{max}} \leq 0, \tag{21}$$
where $f_s$ is a safety factor, $\tau_n := \lambda_n l_n$ is the tension force of tether $n$ and $s_n := \pi \frac{d_n^2}{4}$. The parameter $\sigma_{\text{max}}$ is the maximum allowed tether stress.
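The tether stress constraint (21) can be checked pointwise as in the sketch below. The default maximum stress and safety factor are placeholder values for a high-strength fibre, not numbers taken from the paper:

```python
import math

def tether_stress_ok(tension, d, sigma_max=3.6e9, f_s=3.0):
    # Path constraint tau * f_s <= s * sigma_max for one tether segment.
    # sigma_max [Pa] and f_s are placeholder values (assumptions).
    s = math.pi * d ** 2 / 4.0   # cross-sectional area s_n
    return tension * f_s <= s * sigma_max
```

In the OCP the tether diameters are decision variables, so this inequality effectively sizes each tether to its peak tension over the cycle.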
Furthermore, kite accelerations are limited to a hardware-friendly range defined by
$$||\ddot{q}_k||_2 \leq a_{\text{max}}. \tag{22}$$
All kites have to keep a safety distance $f_b b$ from each other, which is formulated as
$$||q_i - q_j||_2 - f_b b \leq 0, \tag{23}$$
for all pairs $(i, j) \in K \times K$, with $f_b = 5$.
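The pairwise safety-distance constraint (23) amounts to a minimum-separation check over all kite pairs, sketched below (illustrative helper of our own):

```python
import itertools
import math

def safety_distance_ok(positions, b, f_b=5.0):
    # Pairwise constraint ||q_i - q_j|| >= f_b * b for all kite pairs.
    return all(math.dist(qi, qj) >= f_b * b
               for qi, qj in itertools.combinations(positions, 2))
```

With \( f_b = 5 \) and a 5 m span, any two kites must stay at least 25 m apart at every point of the trajectory.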
In order to ensure model validity and to prevent stall, the kites’ angle of attack $\alpha_k$ and side slip $\beta_k$ are bounded to stay within the intervals $[\alpha_{\text{min}}, \alpha_{\text{max}}]$ and $[\beta_{\text{min}}, \beta_{\text{max}}]$ respectively, as explicitly stated in [8]. Finally the kites’ orientation is constrained to prevent collision with the tether. Let $\hat{q}_k = q_k - q_{P(k)}$. The orientation constraint then reads as
$$\sin(\theta_{\text{min}}) \leq -\frac{\hat{q}_k^\top \hat{e}_{1,k}}{||\hat{q}_k||_2} \leq \sin(\theta_{\text{max}}) \tag{24}$$
$$\tan(\phi_{\text{min}}) \leq \frac{\hat{q}_k^\top \hat{e}_{2,k}}{\hat{q}_k^\top \hat{e}_{3,k}} \leq \tan(\phi_{\text{max}}), \tag{25}$$
with the corresponding bounds on the pitch and roll angles $\theta$ and $\phi$. The reader is referred to [8] for a full overview of the system parameters and variable bounds used. The maximum tether length is set to 5 km. In this study, we abstract from the economic consequences (e.g. on the packing density for a farm configuration) of allowing such a long tether. Tether elasticity has been neglected in this study.
III. PROBLEM FORMULATION AND SOLUTION
Periodic optimal control is applied to compute feasible power-optimal periodic flight trajectories of free time period $T$. In the periodic optimal control problem (OCP), the initial state and terminal state can be chosen freely but must be equal:
$$x(0) - x(T) = 0. \tag{26}$$
In order to remove phase invariance from the problem, as well as to avoid multiple periodic orbits within one solution, the following phase-fix strategy is adopted. Consider a time period $T_1$ for the reel-in phase, and one $T_2$ for the reel-out phase. The overall cycle time is thus defined as $T := T_1 + T_2$. Then, reel-out and reel-in phase are uniquely determined by introducing the constraints
$$\dot{l}_t(t) \geq 0, \quad \forall t \in [0, T_1] \quad \text{and} \quad \dot{l}_t(t) \leq 0, \quad \forall t \in [T_1, T]. \tag{27}$$
Using the system dynamics and constraints described in section II, the following OCP can be solved to locally maximize the rated power output $\bar{P}$, for a given configuration with sets $K, L$ and for given parameters $p$:
$$\bar{P}(w^*, p) := \max_w \frac{1}{T} \int_0^T \hat{P}(t) dt \tag{28}$$
s.t. \hspace{1cm} (1), (20), (26) – (27)
Here $\hat{P} := \lambda_1 l_t \dot{l}_t$ is the instantaneous mechanical power transferred to the ground station by the main tether. The decision variables are defined as $w := (x(\cdot), u(\cdot), z(\cdot), \theta, T_1, T_2)$.
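Since \( \tau_1 = \lambda_1 l_t \), the instantaneous power \( \hat{P} \) is simply main-tether tension times reel speed, and the objective is its time average over one cycle. The sketch below approximates that average with trapezoidal quadrature on sampled values (illustrative only; the paper evaluates the integral inside the collocation-discretized OCP):

```python
def average_power(t, tension, reel_speed):
    # Average mechanical power (1/T) * integral of P_hat dt, with
    # P_hat = tau_1 * l_t_dot, via trapezoidal quadrature on samples.
    P = [tau * v for tau, v in zip(tension, reel_speed)]
    area = sum((t[i + 1] - t[i]) * (P[i] + P[i + 1]) / 2.0
               for i in range(len(t) - 1))
    return area / (t[-1] - t[0])
```

A pumping cycle spends part of its period reeling in (negative reel speed), which is why the reel-in phase subtracts from the cycle average and why reducing the peak-to-average ratio matters for the ground-station sizing.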
The OCP (28) is discretized in $10 \cdot N_{\text{loop}}$ intervals using direct collocation with Radau polynomials of order 4. The initial guess for the resulting NLP is found by a homotopy procedure, which involves solving a series of problems of increasing non-linearity, starting from the tracking of $N_{\text{loop}}$ circular loops. The NLP is formulated in Python using CasADi [1], a symbolic modeling language and framework for algorithmic differentiation, and solved with the interior-point solver IPOPT [15] and the linear solver MA57 [6]. The resulting NLP size ranges from 3565 variables, 3384 equality and 800 inequality constraints for configuration $(1, 1)$ to 19527 variables, 18447 equality and 6480 inequality constraints for configuration $(3, 2)$. The problems are solved on an Intel Core i7 2.5 GHz with 16 GB RAM. Computation times range from 30 seconds for configuration $(1, 1)$ up to more than 3 hours for configuration $(3, 2)$.
IV. RESULTS
In order to assess the potential of the stacked multi-kite configuration for utility-scale AWE, we carry out a case study at two types of wind site. Each wind site has a specific rated mechanical power demand $P_{\text{nom}}$, as summarized in Table I. For each wind site, and for each considered configuration with sets $K, L$, the optimal control problem (28) is solved for different wing spans $b$ until the average power output $\bar{P}(w^*, p_{\text{nom}}) \approx P_{\text{nom}}$, with the corresponding system parameters $p_{\text{nom}} = (b_{\text{nom}}, u_{\text{ref}}, z_{\text{ref}}, z_0)$.
### A. Trajectory analysis
The solution of (28) cannot be guaranteed to be a global one, but for all stacked multi-kite configurations the trajectories seem reasonable. They are qualitatively very similar to the dual-kite pumping cycles obtained in [8]. Fig. 2 shows as an example the trajectory of the \((3, 2)\) configuration. It consists of \(N_{\text{loop}} = 2\) loops of crosswind flight during reel-out and a straight flight back outwards during reel-in, so as to minimize the force in the direction of the main tether.

In the offshore case, there is a big increase in flying height (from roughly 200 m to 500 m) in the transition from single to dual kites, as Fig. 3 shows. This is explained by the reduced main tether drag and confirms the results obtained in [16]. Due to the low offshore wind speed gradient, it becomes at some point inefficient for the dual kites to carry more tether mass in order to fly higher. As stacked systems tap into a larger overall harvesting area they can carry more mass and will fly higher (up to 750 m for the \((3, 2)\) configuration). In the onshore case, not shown here, all multi-kite systems fly at the maximum tether length due to the higher velocity gradient.

Apart from an increased flying height, stacked configurations result in a lower peak power overshoot and a more constant power output. Stacked configurations naturally distribute the amount of mass flying upwards more evenly over one revolution, as they are composed of lighter and spatially more distributed aircraft. Different layers can phase their rotation w.r.t. each other in order to obtain an optimal distribution. Fig. 4 shows how the peak power output is systematically reduced as kites and layers are added to the configuration. The configuration \((3, 2)\) achieves a 59% reduction in peak power compared to single kite systems, and a 27% reduction compared to dual kites. This is the only configuration that displays a period of constant power output.

### B. System sizing
The different required wing spans \(b_{\text{nom}}\) for considered configurations are given in Fig. 5a, whereas Fig. 5b gives the power output per total aerodynamic surface area \(P_S := P / \sum_k S_k\), which is taken here as a measure of system efficiency.
The first thing to observe is that the wing span of configuration \((1, 2)\) is about half that of configuration \((1, 1)\), since the former is able to fly at a much higher altitude. The power per surface area is significantly increased (by 50% in the offshore case), which is in line with the findings in [16]. Notice that the effect is more pronounced in the onshore case, as the wind speed gradient with altitude is larger.
The second observation is that in order to maximize efficiency, a fixed number of aircraft should always be distributed over as many layers as possible. This strategy maximizes the amount of available harvesting area. Compare for example systems \((1, 4)\) and \((2, 2)\), for both of which \(|\mathcal{K}| = 4\). The latter has roughly twice the available harvesting area of the former, resulting in a significantly higher efficiency: 20% more power per surface area offshore.
Finally, the results in Fig. 5a show how extending the design space to stacked configurations effectively decouples aircraft sizing from the demanded power output. For example, assume that it would be particularly cost-effective to mass-produce an aircraft with a 21 m wing span. Then, one could employ the same aircraft type, with minor adaptations, for a 2 MW onshore site in a \((1, 3)\) configuration as well as for a 4 MW offshore wind site in a \((2, 3)\) configuration. Higher power output demands could be met by opting for configurations with even more layers. Conversely, one could meet a specific power demand with a whole range of aircraft sizes, which could then be dimensioned mainly on economic criteria. The negative slopes in Fig. 5a suggest that this range could be extended to even smaller aircraft than those considered in this study.

V. CONCLUSIONS
In this paper, an upscaling strategy for airborne wind energy based on stacked multi-kite systems has been proposed. It has been shown in two industry-relevant case studies that this modular strategy can effectively render aircraft wing sizing considerations largely independent from the rated power demand of a specific wind site.
Future research will focus on developing fast models that describe the wake interaction between different stacking layers. Launch-and-landing strategies for stacked multi-kite systems are the subject of ongoing research.
ACKNOWLEDGMENTS
This research was supported by an industrial project with the company Kiteswarms Ltd, by DFG via Research Unit FOR 2401, by the German Federal Ministry for Economic Affairs and Energy (BMWi) via eco4wind (0324125B) and DyConPV (0324166B).
REFERENCES
[1] J. A. E. Andersson, J. Gillis, G. Horn, J. B. Rawlings, and M. Diehl. CasADi: a software framework for nonlinear optimization and optimal control. *Mathematical Programming Computation*, 2018.
[2] C. Archer. An introduction to meteorology for airborne wind energy. In *Airborne Wind Energy*. Springer Berlin / Heidelberg, 2013.
[3] J. Baumgarte. Stabilization of Constraints and Integrals of Motion in Dynamical Systems. *Computer Methods in Applied Mechanics and Engineering*, 1(1):1–16, 1972.
[4] S. Gros and M. Diehl, Modeling of airborne wind energy systems in natural coordinates. In *Airborne Wind Energy*. Springer-Verlag Berlin Heidelberg, 2013.
[5] S. Gros and M. Zanon. Numerical optimal control with periodicity constraints in the presence of invariants. *IEEE Transactions on Automatic Control (under revision)*, 2018.
[6] HSL. A collection of Fortran codes for large scale scientific computation. http://www.hsl.rl.ac.uk, 2011.
[7] M. Kruijff and R. Ruiterkamp. A roadmap towards airborne wind energy in the utility sector. In *Airborne Wind Energy: Advances in Technology Development and Research*. Springer, Singapore, 2018.
[8] R. Leuthold, J. De Schutter, E. Malz, G. Licitra, S. Gros, and M. Diehl. Operational regions of a multi-kite awe system. In *European Control Conference (ECC)*, 2018.
[9] R. Leuthold, S. Gros, and M. Diehl. Induction in optimal control of multiple-kite airborne wind energy systems. In *Proceedings of 20th IFAC World Congress, Toulouse, France*, 2017.
[10] G. Licitra, P. Williams, J. Gillis, S. Ghandchi, S. Sieberling, R. Ruiterkamp, and M. Diehl. Aerodynamic parameter identification for an airborne wind energy pumping system. In *Proceedings of the IFAC World Congress*, 2017.
[11] M. Loyd. Crosswind Kite Power. *Journal of Energy*, 4(3):106–111, July 1980.
[12] Makani Power Inc. Response to the federal aviation authority. Technical report, 2017.
[13] P. Payne and C. McCutchen. Self-Erecting Windmill. United States Patent 3987987, Oct. 26 1976.
[14] R. Read. Kite networks for harvesting wind energy. In *Airborne Wind Energy: Advances in Technology Development and Research*. Springer Singapore, 2018.
[15] A. Wächter and L. T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. *Mathematical Programming*, 106(1):25–57, 2006.
[16] M. Zanon, S. Gros, J. Andersson, and M. Diehl. Airborne wind energy based on dual airfoils. *IEEE Transactions on Control Systems Technology*, 21:1215–1222, July 2013.
[17] M. Zanon, S. Gros, J. Meyers, and M. Diehl. Airborne wind energy: Airfoil-airmass interaction. In *Proceedings of the IFAC World Congress*, pages 5814–5819, 2014.
MINUTES OF THE MEETING OF THE BOARD OF DIRECTORS OF THE MORONGO BASIN CULTURAL ARTS COUNCIL (MBCAC)
A regular meeting of the Board of Directors of the above Corporation was held on July 8, 2015 at the residence of the Corporation's Marketing Director. The meeting was called to order by President Klopfenstein at 5:47 PM.
Board Members Present: Paul Klopfenstein (President/Chair), Valerie Davis (Vice President), Patricia Knight (Membership Director), Lenne Rosen-Kabe (Secretary), Marcia Geiger (Treasurer), Alita VanVliet (Art Tours Director), Paul Morehead (Marketing Director), Scott Doten (Communications Director).
Not present: Anne Beattie, Member Events Director.
Others present: Kathi Klopfenstein (Art Tours Registrar), Ed Keesling (Art Tours committee member), Penny Morehead.
QUORUM was established.
DIRECTORS' REPORTS:
PRESIDENT'S REPORT:
The President reported on the results of a Yucca Valley Town Council meeting at which the topic of exemptions from the home-occupations requirement was decided. Several MBCAC members were present to speak in favor of suspending the ordinance, and the Council resolved the matter favorably, including allowing residents of the town to have more than two cars per week parked in the street in front of their homes at any given time.
ACTION ITEM from June Board meeting: the President spoke with Larry Lane, manager of the premises where Gallery 62 is located, regarding an Art Tours 2015 closing party using the annex and stage behind the gallery. The owner is supportive of the idea and has approved the use of the space for a closing party on October 25, 2015. The President next raised the coordination needed between the purchase of ads to run on the MBCAC website, the placing of the ads on the website, and the receipt of money. Two such ad requests have been received on the firstname.lastname@example.org email account. Director VanVliet suggested that new advertisers be told that up to 30 days are needed to process payments and to get the ads up on the website, and that the duration of the website ads is counted from when they go up. Treasurer Geiger and the Communications Director will work together to streamline the process. It was also pointed out that there is a need to set up for general ads in addition to ads related to the Art Tours. The suggestion was made that whenever an ad is applied for online, an e-mail should be generated to the email@example.com account, which is monitored by the Vice President.
The President indicated that he had checked with the Center for Healthy Generations in Yucca Valley, which among other services offers the use of their indoor and outdoor spaces free of charge to non-profit organizations. It might be a good location for future artists' gatherings and Board meetings. Art Tours Registrar Kathi Klopfenstein suggested that holding Board meetings at that venue rather than at private homes might encourage more MBCAC members to attend Board meetings. Secretary Rosen-Kabe proposed moving the starting time to 6:00 PM and concentrating on business only to limit the time needed for Board meetings.
Motion by Director Morehead, seconded by Director VanVliet, for the Registrar to contact the Center to inquire about availability and options. Unanimous. Should the first Wednesday of the month not be available, an email vote will be taken to set a different weekday for the monthly Board meetings.
VICE PRESIDENT'S REPORT:
Vice President Davis reported that she continues monitoring the firstname.lastname@example.org e-mail account.
Davis next indicated that the two prospective docents interested in helping out with setting up exhibitions had withdrawn, and that presently there are no other likely candidates.
Vice President Davis will cover for the President during the August Board meeting. (It was later decided that no Board meeting would be held in August due to various summer absences.)
SECRETARY'S REPORT:
The Minutes of the June 10, 2015 Board of Directors' meeting had been previously e-mailed by the Secretary to all Board members. Suggestions and corrections were incorporated into the "Final" version. Motion by Director Morehead to approve the Minutes of the June 10, 2015 Board meeting, as amended. Seconded and carried without dissent.
TREASURER'S REPORT:
Treasurer Geiger reported that the Corporation remains solvent. A "Monthly Treasurer's Report for Month ending June 30, 2015" was distributed and is incorporated herein by reference.
The unreconciled bank balance as of the end of June was reported to be $43,502.15.
Art Tours Director VanVliet requested a year-to-date Art Tours report, ACTION ITEM.
The status of the low-impact grant that Geiger applied for on behalf of the MBCAC is still pending.
MEMBERSHIP DIRECTOR'S REPORT:
Director Knight provided a written report, incorporated herein by reference. Active membership currently stands at 220; 42 memberships have lapsed, and there is one new member. Emails are being sent out two weeks ahead of the expiration date, one week before, and on the day of expiration. Knight indicated that she is working on instructions on how to update profiles on the MBCAC membership page, and will send a pdf document for proofing prior to sending it to the membership, ACTION ITEM.
Knight next indicated that she spent some time gathering information regarding a GoDaddy account, but needs to investigate further. She also stated that she has contacted web designer Jim Harvey regarding domain names to ascertain where they are registered but the issue remains unresolved.
Director Knight reiterated her request for the submission of Newsflash items in advance of the following month's electronic publication, noting that items are welcome throughout the month. Newsflash items should be submitted by the second Monday of the month to allow more time for creating the publication. She indicated that she plans to add a request for a website-maintenance volunteer to the July Newsflash.
Director Knight indicated that she has purchased the 1TB external hard drive, and that she will leave it in the Gallery 62 office for use with the Corporation's laptop for record keeping. Those Board members interested in using the laptop should contact the Treasurer for location and password.
COMMUNICATIONS DIRECTOR'S REPORT:
Director Doten reported that currently the Hwy 62 Art Tours Facebook group page membership stands at 1034 members. He continues to update the artwork on the MBCAC homepage's carousel of images. The bottom of the home page is filled to capacity with sponsorship ads, and the size of the page would have to be increased in order to accommodate additional ads. ACTION ITEM: need to check into adding space for additional ads on the home page.
MARKETING:
Marketing Director Morehead stated that to date he has checks and firm commitments totaling $6,090.00 in ad sales for the Art Tours catalog. He plans to address the attendees of the July 9th artists' gathering regarding the purchase of ads in the 2015 Art Tours catalog.
He reiterated that the MBCAC is not able to offer discounted ads to anyone, including other non-profits. It should be noted that Board members and other artists contribute hundreds of volunteer hours to bring about the yearly Art Tours without any compensation, including discounted ads, memberships, or entry fees.
Director Morehead contacted Westways (AAA) magazine; however, a half-page ad costs over $16,000 and a small ad goes for $2,900.00, clearly outside the Art Tours' marketing budget.
GALLERY 62 REPORT:
No report has been received from the Gallery Director, and she will be contacted by Director VanVliet regarding the status of the August show.
EVENTS:
No report in view of the absence of the Events Director.
ART TOURS COMMITTEE:
Director VanVliet indicated that she will make personal contact with Bobby Furst of Furst Wurld to ascertain whether he plans to hold a party coinciding with the closing of the Art Tours at his location this year, regardless of MBCAC sponsorship, ACTION ITEM. She commented that even though the option was open to hold the annual closing party at the Gallery 62 location, it may not be a realistic endeavor this year, due to lack of interest, cost, and volunteers stepping up to help organize the event. An announcement to that effect will be made during the artists' gathering scheduled for July 9, 2015. However, any OSAT 2015 participating artist who plans on holding a closing gathering at their own venue is encouraged to do so, and the MBCAC will support this by suggesting member artists post announcements on the MBCAC Facebook page and Hwy 62 Art Tours page.
VanVliet then spoke about an event scheduled for September 17 at Indian Cove campground in Joshua Tree National Park. This is a collaborative effort by Joshua Tree Excursions, the Tri-Chamber of Commerce, and the Park. The planning is in the early stages. It involves various businesses coming together to present what they are about, and may be a good opportunity to promote Gallery 62, MBCAC, and the Art Tours.
VanVliet went on to indicate that she had met with a representative of Joshua Tree Excursions. The JT Excursions group is planning to offer excursion tours, including lunch, during the two weekends of the OSAT 2015. They will contact participating artists via email to see how many are interested in participating in this venture.
Treasurer Geiger spoke briefly about the proposed OSAT catalog centerfold map. The map legend and cautions would be on the first page, with perhaps the last page left blank for notes. It was also noted that all catalog images have been completed and are awaiting layout.
The OSAT posters and a new batch of save-the-date postcards are ready; they will be picked up and made available at the July 9 gathering.
MBCAC website: Registrar Klopfenstein indicated that she had been in contact with Janis Commentz, who was honored to have a link to her website http://hdacbb.weebly.com placed on the MBCAC's website. Director Doten will take care of it. A link will also be added to the monthly Newsflash, ACTION ITEM.
A moment of silence was observed by those present in memory of MBCAC webmaster Arthur Comings.
The vacancy will be announced during the artists' gathering on July 9, 2015.
Secretary Rosen-‐Kabe inquired whether Director Doten might take over the task of posting the minutes of Board meetings on the website. The issue may be resolved if a new volunteer is found to fill the webmaster position, in which case web designer Jim Harvey may be contacted to train the new volunteer.
The issue of the handling of new ads for the MBCAC website was again raised, and it was agreed that new requests should be forwarded to Communications Director Doten. Membership Director Knight will create an "event" with the Wild Apricot software for ad renewals and expirations on the MBCAC website, ACTION ITEM. It was reiterated that coordination is needed between the person in charge of the website and the Treasurer in order to effectively deal with ads.
EXHIBITIONS:
No report.
NEW BUSINESS:
Director VanVliet brought up various needs for the upcoming gathering on July 9, 2015, including a beverage dispenser. The Board unanimously authorized expenditures for the gathering to be held at the Joshua Tree Retreat Center's Friendship Hall, 59700 Twentynine Palms Hwy, Joshua Tree, CA 92252. The outline of an agenda for the meeting was presented, and will be distributed via email to all OSAT participating artists. A "Tips for a successful Tour" presentation will also be emailed to all artists.
NEXT MEETING:
The next regular meeting of the Board of Directors will be held on Wednesday, August 12, 2015 at 5:00. Time, date, and location will be confirmed via email to Board members.
There being no further business, the meeting was duly adjourned at 7:20 PM.
These Minutes are certified by the Secretary.
Signature ___________________________________________________Date_______________
Lenne Rosen-Kabe
September 9, 2015
Heisenberg’s uncertainty principle and particle trajectories
S. Aristarkhov\textsuperscript{a1}
\textsuperscript{a} Ludwig Maximillian University Munich
In this paper we critically analyse W. Heisenberg’s arguments against the ontology of point particles following trajectories in quantum theory, presented in his famous 1927 paper and in his Chicago lectures (1929). Along the way, we will clarify the meaning of Heisenberg’s uncertainty relation and help resolve some confusions related to it.
PACS: 44.25.+f; 44.90.+c
Introduction
Heisenberg’s work on quantum mechanics, in particular, his well-known 1927 paper [1] and his 1929 Chicago lectures [2], led to the rejection of the ontology of point particles moving on trajectories in quantum theory. As M. Born put it in his 1954 Nobel lecture [3]:
It was through this paper [Heisenberg’s 1927 [1]] that the revolutionary character of the new conception became clear. It showed that not only the determinism of classical physics must be abandoned, but also the naive concept of reality which looked upon the particles of atomic physics as if they were very small grains of sand. At every instant a grain of sand has a definite position and velocity. This is not the case with an electron.
\textsuperscript{1}E-mail: firstname.lastname@example.org
Ever since, this view has become deeply entrenched in many textbooks and lectures (see, e.g., [4]) and remains popular today.
At the same time, the existence and successes of trajectory-containing quantum theories (TCQTs), for instance, Bohmian mechanics (also known as the de Broglie–Bohm or pilot-wave theory) [5–7], tell us that there must be something wrong with the reasoning of Heisenberg and other advocates of the view that there are no point particles following trajectories in quantum physics.
In this paper, we will investigate Heisenberg’s arguments against the ontology of point particles in quantum mechanics, presented in his well-known 1927 paper [1] and his 1929 Chicago lectures [2]. In what follows, we will also clarify the meaning of Heisenberg’s uncertainty relation (UR) and help resolve some confusions related to it.
1. Heisenberg’s arguments in 1927
It is often asserted [8, Ch. 10], [9, p. 100] that the impossibility of trajectories in the quantum world is a consequence of the Heisenberg UR (eq. (1), Sec. 2.1). Indeed, a relation similar in form to (1) appears for the first time in [1]. However, it seems Heisenberg’s argument against trajectories did not rely on it.
In [1] Heisenberg proposed to redefine the familiar physical notions of position, velocity, and trajectory:
*When one wants to be clear about what is to be understood by the words ‘position of the object,’ for example of the electron <...>, then one must specify definite experiments with whose help one plans to measure the ‘position of the electron’; otherwise this word has no meaning.*
According to his new definitions, these words are mere place holders for certain laboratory operations. In the event that no suitable laboratory operations could be found, the corresponding notion was declared meaningless, and had to be banned from the very formulation of the theory.
Regarding the possibility of measuring the trajectories of the electrons, Heisenberg wrote:
*By path we understand a series of points in space (in a given reference system) which the electron takes as ‘positions’ one after the other. As we already know what is to be understood by ‘position at a definite time,’ no new difficulties occur here. Nevertheless, it is easy to recognize that, for example, the often used expression, the ‘1s orbit of the electron in the hydrogen atom,’ from our point of view has no sense. In order to measure this 1s ‘path’ we have to illuminate the atom with light whose wavelength is considerably shorter than $10^{-8}$ cm. However, a single photon of such light is enough to eject the electron completely from its ‘path’ <...>.*
Thus he admits that it is possible to measure particle trajectories, at least in certain situations.\footnote{For instance, in the debate with Einstein [8, Ch. 10], Heisenberg concedes that the trajectory of a free particle can be measured (say, approximately) via the Wilson cloud chamber.} It would require a series of consecutive position measurements and not the simultaneous determination of the position and momentum of the particle. So the UR is not what makes the measurement of the ‘1s orbit’ problematic, but rather the assertion that a single measurement of the electron’s position via photons would ionize the atom. Therefore, from Heisenberg’s point of view, the notion of an electron’s trajectory in an atom makes no sense.
The operational principle used by Heisenberg, which we could call ‘measurement = meaning’ following [10], was criticized for being inconsistent or circular [11, p. 73]: In order to show that a certain entity (e.g., a particle trajectory) cannot be observed in an experiment, one has to first specify what one means by the said entity. And Heisenberg does exactly that for the trajectory (or the ‘path’) of an electron at the beginning of the above quote. It is inconsistent with the claim that this word (in the narrower context of the hydrogen atom) has no meaning and has to be removed from the theory.
2. Heisenberg’s arguments in 1929
Heisenberg’s Chicago lectures [2] contain an entire chapter devoted to the ‘critique of the corpuscular theory of matter’ (Ch. II). As we shall see, the argument against particle trajectories presented therein is significantly different from that of the 1927 paper.
2.1. The uncertainty relation
Chapter II starts with a discussion of the inequality
\[
\Delta q \, \Delta p \geq \hbar, \tag{1}
\]
where \((\Delta q)^2 = \int (q' - \bar{q})^2 |\psi_q(q')|^2 dq'\) with \(\bar{q} = \int q' |\psi_q(q')|^2 dq'\), and analogously for \(\Delta p\), where \(\psi_q\) and \(\psi_p\) are the wave function of the particle and its Fourier transform, respectively.\footnote{Just like Heisenberg, we assume that the particle moves in one-dimensional space for brevity. The generalization to three-dimensional motion is straightforward.} This inequality is a mathematical fact, which was first proven, notably, not by Heisenberg, but by Kennard [13] in 1927. In modern textbooks (see, e.g., [12]), it is called the Heisenberg position–momentum UR.\footnote{The only difference is that \(\Delta a, a = p, q\) in [2] is equal to \(\sqrt{2}\) times the standard deviation and not the standard deviation itself. That is why in [2] the right-hand side of the inequality is \(\hbar\) and not the usual \(\hbar/2\).}
The empirical import of this inequality is clear: Take two large ensembles of identically prepared single-particle systems. For the first ensemble, measure the particle position in each system. Then \(\Delta q\) approximates the standard deviation of the measured positions. For the second ensemble measure the momentum of the particle in each system. The resulting standard
deviation then approaches $\Delta p$. The inequality (1) thus states that the product of those two standard deviations cannot be smaller than $\hbar$.
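This empirical reading of inequality (1) can be checked numerically for a Gaussian wave packet. The sketch below is our own illustration, not from the paper: it sets $\hbar = 1$, uses the modern standard-deviation convention (bound $1/2$ rather than Heisenberg's $\hbar$), and obtains the momentum-space amplitude from the discrete Fourier transform; the grid parameters are arbitrary choices.

```python
import numpy as np

# Numerical check of the position-momentum uncertainty relation for a
# Gaussian wave packet, with hbar = 1 and the standard-deviation
# convention (Delta_q * Delta_p >= 1/2); Heisenberg's convention uses
# sqrt(2) times the standard deviation, giving the bound hbar instead.
hbar = 1.0
q = np.linspace(-40.0, 40.0, 4096)
dq = q[1] - q[0]

sigma = 1.3                                       # chosen packet width
psi = np.exp(-q**2 / (4.0 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dq)  # normalize on the grid

def std(x, prob, dx):
    """Standard deviation of a sampled probability density."""
    mean = np.sum(x * prob) * dx
    return np.sqrt(np.sum((x - mean)**2 * prob) * dx)

delta_q = std(q, np.abs(psi)**2, dq)

# Momentum-space wave function via FFT, with p = hbar * k.
k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(q.size, d=dq))
phi = np.fft.fftshift(np.fft.fft(psi)) * dq / np.sqrt(2.0 * np.pi)
p = hbar * k
dp = p[1] - p[0]
delta_p = std(p, np.abs(phi)**2 / hbar, dp)

product = delta_q * delta_p
print(product)  # close to 0.5: the Gaussian saturates the bound
```

For a Gaussian the product equals the minimum $1/2$ exactly; any other wave function gives a strictly larger value, consistent with the statement about the two measured ensembles.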
On page 15 of the Chicago lectures, Heisenberg writes:
\begin{quote}
This UR \[ (1) \] specifies the limits within which the particle picture may be applied. Any use of the words ‘position’ and ‘velocity’ with an accuracy exceeding that given by equation (I) \[ (1) \] is just as meaningless as the use of words whose sense is not defined.
\end{quote}
These words and the preceding heuristic derivation of (1) probably gave birth to the following argument against the ontology of point particles [8, Ch. 10], [9].\footnote{This argument seems to be particularly widespread among laypersons and in the popular science media.} Since quantum particles are known to be wave-like, we may ascribe a definite position to a particle only if the corresponding wave $\psi(x)$ is sharply peaked. At the same time, it follows from (1) that the narrower $\psi(x)$, the broader its Fourier transform $F[\psi](k)$. But according to the de Broglie relation we have $p = \hbar k$, so a matter wave which is well localized in space, thus resembling a particle with a definite position, cannot have a definite momentum, and vice versa.
As noted by Ballentine [14], this conclusion rests on the identification of the particle with the wave packet itself, which, he argues, is unjustified given the example of a particle incident on a beam splitter with detectors placed on either side. The particle is either reflected or transmitted, whereas the wave packet is separated into reflected and transmitted components.
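Ballentine's beam-splitter point can be caricatured in a short Monte Carlo sketch. This is our own illustration under assumed parameters (an ideal 50/50 splitter, Born-rule sampling), not a model from either author.

```python
import random

# Sketch of Ballentine's beam-splitter example (assumed 50/50 splitter):
# on every run the wave packet has BOTH a reflected and a transmitted
# component, yet exactly one detector clicks; only the long-run detection
# frequencies track the squared amplitudes.
random.seed(0)

reflect_prob = 0.5            # |r|^2 for the assumed beam splitter
runs = 100_000
counts = {"reflected": 0, "transmitted": 0}
for _ in range(runs):
    # The particle is found in exactly one branch per run.
    outcome = "reflected" if random.random() < reflect_prob else "transmitted"
    counts[outcome] += 1

print(counts["reflected"] / runs)  # detection frequency, not a half-particle
```

The point of the sketch is the mismatch: the wave is divided deterministically on every run, while the particle registers whole at a single detector, so identifying the particle with the packet is not forced by the formalism.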
Here, it is worth emphasizing that, even if it were somehow possible to formulate a version of quantum theory based on the identification of particles with wave packets, which would reject the notion of particle trajectories by the given reasoning, it is not at all certain that such a version would be in any way advantageous in comparison to TCQTs like Bohmian mechanics.
2.2. Heisenberg’s argument
Although the Heisenberg quote from the previous subsection could be interpreted in the manner described above, it seems that Heisenberg’s own argument against particle trajectories was markedly different. On page 20, he states:
\begin{quote}
<...> if the velocity of the electron is at first known and the position then exactly measured, the position for times previous to the measurement may be calculated. Then for these past times $\Delta p \Delta x$ is smaller than the usual limiting value, but this knowledge of the past is of a purely speculative character, since it can never (because of the unknown change in momentum caused by the position measurement) be used as an initial condition in any calculation of the future progress of the electron and thus cannot be subject to experimental verification. It is a matter of personal belief whether such a calculation concerning the past history of the electron can be ascribed any physical reality or not.
\end{quote}
\footnote{A one-dimensional picture is assumed for simplicity.}
Popper [15, p. 27] criticized Heisenberg’s conclusion here, pointing out that most measurements, especially in quantum physics, are retrodictive. It is a standard praxis to measure, say, the position of a particle to determine some of its past properties (energy or momentum) using the theory and knowledge of the experimental arrangement. Thus, according to Popper, ‘To question whether the so ascertained “past history of the electron can be ascribed any physical reality or not” is to question the significance of an indispensable standard method of measurement’.
Popper was certainly right, but it seems that he missed Heisenberg’s point, especially if we take into account the footnote on page 15:
In this connection one should particularly remember that the human language permits the construction of sentences which do not involve any consequences and which therefore have no content at all – in spite of the fact that these sentences produce some kind of picture in our imagination; e.g., the statement that besides our world there exists another world, with which any connection is impossible in principle, does not lead to any experimental consequence, but does produce a kind of picture in the mind. Obviously such a statement can neither be proved nor disproved.
It becomes clear that Heisenberg did not question the possibility of reconstructing the trajectories once we assume their existence.\footnote{It is worth mentioning that this footnote comes right after the words ‘This uncertainty relation…’, quoted in subsection 2.1, which confirms that the argument against trajectories that Heisenberg had in mind in the Chicago lectures was not the one based on the identification of the electron with a wave.} He claimed that as a consequence of the UR we cannot ‘verify’ their existence. Hence, he claims, ‘it is a matter of personal belief’ to accept or to reject the results of retrodictive measurements performed under the assumption of their existence.
By ‘experimental verification’ Heisenberg apparently meant the following: At time $t = 0$ we determine (set) the position of the particle to be $q_0$ with an accuracy $\epsilon_q$ and determine other necessary initial data (for Heisenberg it is the momentum) to predict the future position of the particle. Let $U_0$ be the region of space where the probability of finding the particle is close to one. This region can be thought of as a ball of radius $\epsilon_q$ around $q_0$. Let $U_\tau$ be the region of space where the trajectories determined by the alleged law of motion can end up at time $\tau > 0$, if they start in the region $U_0$ and the (in)accuracy of other initial data is taken into account. The law of motion and the accuracy of the initial data have to be such that the volume of $U_\tau$ is still of order $\epsilon_q$. At time $\tau$ we measure the position of the particle $q(\tau)$ with an accuracy $\epsilon'_q$. Let $B_{\epsilon'_q}(q(\tau))$ be the ball of radius $\epsilon'_q$ with center at $q(\tau)$. If, no matter how small we manage to make $\epsilon_q$ and $\epsilon'_q$, we have $U_\tau \cap B_{\epsilon'_q}(q(\tau)) \neq \emptyset$, the ‘verification’ is complete. According to Heisenberg, because of the UR
\footnote{This has in fact been done for photons [16]. A similar procedure may be possible for massive particles [17].}
it is impossible to determine the initial data (the momentum) in such a way that $U_\tau$ is of the size $\epsilon_q$, no matter how small the latter is, so the ‘verification’ is impossible.
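The tension Heisenberg points to can be made quantitative with the textbook spreading formula for a free Gaussian packet, $\Delta q(\tau) = \sqrt{\epsilon^2 + (\hbar\tau/(2m\epsilon))^2}$ for initial width $\epsilon$. The sketch below is our own illustration (with $\hbar = m = 1$ and arbitrary $\tau$), not a calculation from Heisenberg's text.

```python
import math

# Why U_tau cannot be made small: for a free Gaussian packet (hbar = m = 1),
# the position spread at time tau is
#     Delta_q(tau) = sqrt(eps**2 + (hbar * tau / (2 * m * eps))**2),
# so shrinking the initial position accuracy eps *grows* the later spread.
hbar = m = 1.0
tau = 1.0

def spread(eps):
    """Position spread of a free Gaussian packet at time tau."""
    return math.sqrt(eps**2 + (hbar * tau / (2.0 * m * eps))**2)

for eps in (1.0, 0.1, 0.01):
    print(eps, spread(eps))
# The finer the initial position (smaller eps), the larger the region
# U_tau in which the particle may later be found.
```

In this illustration, $\epsilon = 0.01$ yields a spread of about 50, three and a half orders of magnitude larger than the initial accuracy, which is exactly the obstruction to making $U_\tau$ of size $\epsilon_q$.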
It seems to us that the word ‘verification’ is not really appropriate here. In fact, if the volume of $U_\tau$ is large and we have $U_\tau \cap B_{\epsilon'_q}(q(\tau)) \neq \emptyset$, we can still claim to have verified our trajectory-containing theory. Indeed, we fix the initial conditions in a certain way (prepare the system), then let it evolve for some time, and our observations match the predictions. Presumably, what Heisenberg meant was the verification of the necessity of trajectories in the theory. He thought that, if his ‘verification’ could be carried out, the existence of trajectories would be the only possible explanation of the observation. However, if $U_\tau$ can never be made small, at least one more possibility arises: To leave only the position distribution $(|\psi|^2)$ in the description and banish the trajectories.\footnote{Strictly speaking, this is incorrect. Of course, in theory [18] it is possible to set the initial conditions for a classical particle exactly (with zero inaccuracy), but in practice we always have a distribution in phase space. This distribution may be propagated according to the Liouville equation. Thus the trajectory-free description is not excluded even if Heisenberg’s ‘verification’ is possible. It is just that, for the center of mass of a large rigid body, the position support of the initially prepared distribution is much narrower than the size of the body. And, no matter how narrow the initially prepared distribution is, it is going to stay narrow with time. So the trajectory-based description feels more natural.} According to Heisenberg it is ‘a matter of personal belief’ which possibility to opt for.
The preferences of Born, Heisenberg, and other members of the Copenhagen school were clearly not on the side of trajectories. Unfortunately, in the following years they often passed their personal tastes off as the absolute truth. In 1958, instead of mentioning that having trajectories in a quantum theory was a real possibility, Heisenberg said something as radical as [19, p. 129]:
\begin{quote}
The idea of an objective real world whose smallest parts exist objectively in the same sense as stones or trees exist, independently of whether or not we observe them <...>, is impossible <...>.
\end{quote}
2.3. The role of the uncertainty relation
Let us summarize Heisenberg’s argument against trajectories in the Chicago lectures: The UR (1) makes it impossible to ‘verify’ the existence of particle trajectories; therefore it is not necessary to assume their existence, and the choice becomes a matter of taste. In the previous subsection we discussed the second part of this argument. Now we are going to focus on the first part and answer the question of how, on the basis of the UR (1), Heisenberg concluded that the aforementioned ‘verification’ was impossible. We can then ask whether his conclusion was correct.
In order to prove this claim, it is necessary to assume the existence of the particle trajectories in the first place. Heisenberg does that implicitly. The law of motion Heisenberg implies has a lot in common with Newton’s second law.\footnote{Heisenberg may indeed have intended the law of motion of classical physics. See, for example, [17].} The initial data determining the trajectory are position and momentum (velocity), so the law has to be second order in time. Apart from that,
Heisenberg assumes that the trajectories are straight lines for a free particle. This is clear, for instance, from the first quote given in subsection 2.2. He also assumes that the distribution of the momentum of a particle with wave function $\psi$ (the momentum meaning the mass times the vector tangent to the trajectory at a given moment of time) is given by the modulus squared of the Fourier transform of $\psi$.
For Heisenberg, the wave function clearly has an epistemological character. For instance, he writes [2, p. 16]:
*Any knowledge of the coordinate $q$ of the electron can be expressed by a probability amplitude $S(q')$, $|S(q')|^2 dq'$ being the probability of finding the numerical value of the coordinate of the electron between $q'$ and $q' + dq'$.*
Since the wave function expresses *our knowledge* about the particle, it is obvious for Heisenberg (it follows from the mere definition of the wave function) that, if the particle’s position $q$ is known to be $q_0$ within a certain accuracy $\epsilon_q$, its wave function will have ‘width’ $\Delta q \equiv \epsilon_q$. In other words, the wave function of a particle right after a position measurement of accuracy $\epsilon_q$ will have ‘width’ $\Delta q = \epsilon_q$.
How accurately can the momentum of the particle be determined at the same moment of time (from Heisenberg’s point of view)? Suppose it is possible to measure it infinitely accurately before we determine the position. Then the inaccuracy in our knowledge of the momentum is equal to the inaccuracy in the measurement of its disturbance $\delta p$ due to the position measurement.\footnote{Once again, Heisenberg assumes that the momentum, i.e., the mass times the vector tangent to the trajectory, does not change when the particle is free. In particular, it does not change between the measurements.} Let us call this quantity $\epsilon_{\delta p}$. But, according to Heisenberg’s view of the wave function, our knowledge of the momentum of the particle is expressed by the Fourier transform of $\psi$. Its ‘width’ is $\Delta p$. Thus $\epsilon_{\delta p} \equiv \Delta p$ and $\epsilon_q \epsilon_{\delta p} = \Delta q \Delta p \geq \hbar$ [20, p. 180]. Now we recall that the momentum in Heisenberg’s considerations is nothing but the vector tangent to the trajectory of the particle and that, if left alone, the particle will move in a straight line in the direction dictated by the initial momentum. Since the indeterminacy in the momentum (our lack of knowledge of the momentum) gets larger as $\epsilon_q$ gets smaller, the region $U_\tau$ (the set of points the trajectories may end up in at time $\tau$ if they start from the ball $B_{\epsilon_q}(q_0)$ with the given possible values of the momentum) will be large. This was how Heisenberg concluded that his ‘experimental verification’ of the existence of the trajectories was impossible.
Note that, as already mentioned, the equalities $\epsilon_q = \Delta q$ and $\epsilon_{\delta p} = \Delta p$ are not even assumptions for Heisenberg; due to the assumed epistemological character of the wave function, $\epsilon_q$, $\Delta q$ and $\epsilon_{\delta p}$, $\Delta p$ are equivalent quantities in his view of things. This is why Heisenberg refers to the famous $\gamma$-microscope thought experiment, and other experiments in which both the position and
momentum of a particle are determined at the same time (see [2] §2), as ‘illustrations’ of the UR, even though the experimental situation relating to (1), the one we described in subsection 2.1, does not involve simultaneous measurements and is very different from the examples given by Heisenberg in [2]. For instance, he claimed that, in the case of the $\gamma$-microscope, the position $q$ of the electron can be measured with accuracy equal to the resolving power of the microscope: $\epsilon_q = \lambda / \sin \varepsilon$, where $\varepsilon$ is the angular aperture of the objective. At the same time the change in momentum of the electron due to this measurement (the Compton recoil) $\delta p$ can be found using the theory of the Compton effect and the momentum of the scattered photon. The latter can be measured in the same run of the experiment with accuracy $\epsilon_{\delta p} \sim \frac{h}{\lambda} \sin \varepsilon$. Thus we have $\epsilon_q \epsilon_{\delta p} \sim h$. The same relation is obtained in all other examples of the simultaneous measurement of $q$ and $\delta p$, although they are based on completely different ideas. This was no surprise to Heisenberg: in his view it was bound to happen, since $\epsilon_q \equiv \Delta q$, $\epsilon_{\delta p} \equiv \Delta p$ and (1) holds.
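Multiplying the two accuracies quoted above makes the compensation explicit: the wavelength and the aperture cancel in the product,

$$\epsilon_q \, \epsilon_{\delta p} \sim \frac{\lambda}{\sin \varepsilon} \cdot \frac{h}{\lambda} \sin \varepsilon = h,$$

so no choice of $\lambda$ or $\varepsilon$ can improve on the bound.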
Having broken down Heisenberg’s line of reasoning, we can now analyse it critically. In order to reach Heisenberg’s conclusion, namely the impossibility of ‘experimentally verifying’ the existence of trajectories of quantum particles, it is necessary to consider a generic empirically adequate TCQT. Heisenberg’s implicit trajectory-containing quantum theory is not only very special (not generic), but also empirically inadequate. Indeed, Heisenberg postulates that the probability density of the velocity of a particle with the wave function $\psi$ is given by the Fourier transform of $\psi$. Such a velocity field would in general fail to satisfy the continuity equation. Thus the corresponding theory will not reproduce the Born rule. Apart from that, in any TCQT which reproduces the Born rule, the wave function has to be related to the law of motion and will thus have nomological rather than epistemological status. So, if the analysis requires the relations $\epsilon_q = \Delta q$ or $\epsilon_{\delta p} = \Delta p$, which Heisenberg considered to be obvious, they will have to be proven. Thus Heisenberg’s analysis may be correct in itself, but it is irrelevant, as the TCQT he considered did not describe our world.
Although we do not know of any study in which the possibility of Heisenberg’s ‘verification’ of the existence of particle trajectories has been checked for the whole class of empirically adequate TCQTs, the relevant analysis for one of those theories, Bohmian mechanics, has in fact been performed [18].
According to [18], the spread of the particle wave function right after the position measurement cannot be greater than the inaccuracy in the measurement. In fact, we may know the position of the particle only as accurately as the $|\psi_{\text{after}}|^2$-distribution allows, where $\psi_{\text{after}}$ is the wave function of the particle after the measurement. This result may be obtained either by analysing the generic process of the position measurement [18] or by statistical arguments (see, e.g., Ch. 11 of [5]). Therefore, $\epsilon_q \sim \Delta q$ does indeed hold.
\footnote{Since that analysis does not use the exact form of the law of motion, generalization seems to be easily attainable.}
\footnote{The conditional wave function for a certain configuration of the measurement device.}
\footnote{In Bohmian mechanics, this fact is referred to as absolute uncertainty.}
More precisely, if we determine the position of the particle to be $q_0$ and define the accuracy of our measurement to be the number $\epsilon_q$ such that the probability of finding the particle in the region $B_{\epsilon_q}(q_0)$ is $1 - \alpha$, then
$$\int_{B_{\epsilon_q}(q_0)} |\psi_{\text{after}}(q)|^2 dq = 1 - \alpha.$$
\footnote{We emphasize that this is not just the statement of the Born rule! The wave function after the measurement is determined by the wave function before the measurement and the interaction Hamiltonian. The accuracy of our measurement is determined by the interaction between the particle and the measurement device as well, but it is not obvious that we cannot infer the position of the particle from the output of the measurement device (‘position of the pointer’) more accurately than from $\psi_{\text{after}}$. That would be necessarily the case only in a $\psi$-complete theory.}
The smaller the spread of the wave function, the faster it broadens in time, and the faster the volume of the image $U_\tau$ of the region $B_{\epsilon_q}(q_0)$ grows under the Bohmian flow $\Phi_t$, as
$$\int_{B_{\epsilon_q}(q_0)} |\psi_{\text{after}}(q)|^2 dq = 1 - \alpha = \int_{\Phi_\tau(B_{\epsilon_q}(q_0))} |\psi(q, \tau)|^2 dq$$
has to hold due to equivariance [5, p. 152]. So, even if we know the system–measurement device interaction Hamiltonian and can calculate the wave function of the system after the measurement, we will be able to predict the future position of the particle only with low accuracy, because $U_\tau$ is going to be much larger than $B_{\epsilon_q}(q_0)$. Heuristically, one could say that the rate of spread of the wave function corresponds to $\Delta p$, the spread of its Fourier transform. Thus, although Heisenberg’s analysis in his Chicago lectures was far from being correct, his intuition was right after all.
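As a numerical illustration of this trade-off (our own sketch, not part of the analysis in [18] or [5]): for a free Gaussian packet, the standard spreading law $\sigma(t) = \sigma_0 \sqrt{1 + (\hbar t / 2 m \sigma_0^2)^2}$ shows that the more accurately the initial position is fixed, the larger the region the electron may occupy after a given flight time.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant (J*s)
M_E = 9.1093837015e-31   # electron mass (kg)

def spread(sigma0, t, m=M_E):
    """Position spread of a free Gaussian packet after time t:
    sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0**2))**2)."""
    return sigma0 * math.sqrt(1.0 + (HBAR * t / (2.0 * m * sigma0 ** 2)) ** 2)

tau = 1e-6  # one microsecond of free flight
# A more accurate initial position measurement (smaller sigma0) leads to a
# *larger* region U_tau in which the electron may later be found:
for sigma0 in (1e-6, 1e-8, 1e-10):  # metres
    print(f"sigma0 = {sigma0:.0e} m  ->  sigma(tau) = {spread(sigma0, tau):.1e} m")
```

For an electron localized to an ångström, the spread after a microsecond is macroscopic, which is the quantitative content of the statement that $U_\tau$ dwarfs $B_{\epsilon_q}(q_0)$.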
3. TCQT or a $\psi$-complete theory: a matter of taste?
So, at least for Bohmian mechanics, ‘verification’ of the presence of the trajectories in the sense of Heisenberg is indeed impossible. For a physicist who defines a physical theory as a set of rules allowing us to obtain numerical predictions for experiments, this is a convincing argument against TCQTs and, in particular, Bohmian mechanics. In classical physics the accurate prediction of the future position of a particle using its trajectory is indeed possible, so the idea of using the distribution of particles in space instead does seem very unnatural. Apart from that, the corresponding formalism (Newtonian/Lagrangian/Hamiltonian) is simpler than the one based on the density in phase space and Liouville’s equation. In quantum physics, the accurate prediction of positions with trajectories is excluded and calculation of the trajectories (at least in Bohmian mechanics) requires the solution of the Schrödinger equation anyway, so for a physicist with an instrumentalist attitude, keeping only the wave function in the theory is preferable.
This is correct unless one takes into account that there is at least one big class of experiments for which standard quantum formalism, based on the wave function and self-adjoint operators, does not give unambiguous predictions: arrival time measurements. The general scheme of such measurements can be described as follows. A particle is first trapped in a certain region of space and then released at a known time, which is set to zero. A detector
of given geometry is placed at a certain distance from the region of initial confinement. At time $\tau > 0$, it clicks. This experiment is repeated many times and the distribution of the arrival times is acquired.
The standard quantum formalism, it turns out, does not furnish definite predictions for the described experiments (see, e.g., [21]). Indeed, as early as 1933 Pauli [22] noticed that there is no canonical time operator in QM. As a result, over the years, many add-ons to the standard formalism aiming to predict the arrival time distributions of a quantum particle have been suggested [21]. Not only is the adequacy of many of these proposals questionable, but their range of applicability is also severely limited [23–28]. Arguably, a generally applicable and internally consistent recipe for describing quantum arrival-time experiments based solely on a $\psi$-complete theory is yet to be discovered.
On the other hand, in any TCQT, prediction of the arrival time distribution is a problem with an almost obvious solution: if particles follow trajectories, the time when a given trajectory crosses a certain surface can be easily calculated. The arrival time distribution can thus be obtained if the distribution of the initial positions is known (see, e.g., [29]).
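A minimal sketch of this recipe (our own illustration, not the treatment of [29]): here free straight-line trajectories with Gaussian initial positions and velocities stand in for the actual Bohmian dynamics, and the ‘detector’ is an ideal plane registering the first crossing. All numerical values are arbitrary illustrative choices.

```python
import random
import statistics

random.seed(0)

DETECTOR = 1.0              # detector plane at x = DETECTOR (arbitrary units)
SIGMA_X = 0.05              # spread of initial positions about x = 0
V_MEAN, SIGMA_V = 2.0, 0.2  # mean and spread of the (constant) velocities

def arrival_time(x0, v):
    """Time at which the straight-line trajectory x(t) = x0 + v*t
    crosses the detector plane; None if it never does."""
    if v <= 0.0:
        return None
    return (DETECTOR - x0) / v

# Repeat the "experiment" many times and collect the arrival-time distribution.
times = []
for _ in range(10_000):
    t = arrival_time(random.gauss(0.0, SIGMA_X), random.gauss(V_MEAN, SIGMA_V))
    if t is not None:
        times.append(t)

print(f"mean arrival time: {statistics.mean(times):.3f}"
      f"  (ballistic estimate {DETECTOR / V_MEAN:.3f})")
```

Once trajectories exist, the arrival-time distribution is simply the push-forward of the initial distribution through the crossing-time map; nothing beyond elementary kinematics is needed.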
Thus even a physicist for whom a theory is solely a tool for obtaining numerical predictions for laboratory experiments may be willing to prefer a TCQT over a $\psi$-complete theory.
4. Conclusion
Let us summarize our discussion. Heisenberg’s argument against particle trajectories in 1927 was not based on the uncertainty relation, but on his operationalist redefinition of the familiar physical concepts. This argument has been criticized as circular.
In contrast to what is often stated, Heisenberg’s argument against trajectories was not related to the fact that the position and velocity cannot be defined simultaneously for a wave packet.
In his 1929 Chicago lectures, Heisenberg claimed that the existence of trajectories in the quantum world is impossible to ‘verify experimentally’. By that he meant that, because of his uncertainty relation (1), no matter how accurately we know the initial position of the particle, we will not be able to predict the future position with comparable accuracy. This is why, he claimed, to accept or to reject the trajectories is a matter of personal belief.
This argument may be appealing, but Heisenberg’s derivation of the impossibility of the ‘experimental verification’ is irrelevant, since the TCQT he used was not empirically adequate. Nevertheless the analysis in Bohmian mechanics confirms that the ‘verification’ in Heisenberg’s sense is indeed impossible, at least in this trajectory-containing quantum theory.
If one defines the physical theory as a set of rules to obtain predictions for experiments, this could be reason enough to discard Bohmian mechanics and opt for a $\psi$-complete quantum formalism, unless one takes into account
the difficulties that $\psi$-complete theories encounter predicting the results of arrival time measurements. On the other hand, if one would like a physical theory to tell us about what there is, what the world consists of, how its constituents behave, and how this results in our observations, TCQTs and, in particular, Bohmian mechanics are definitely advantageous.
**Acknowledgments**
The author wishes to thank Dr. Paula Reichert, Siddhant Das, and Stephen N. Lyle for their help in the preparation of this paper.
**REFERENCES**
1. *W. Heisenberg* The Physical Content of Quantum Kinematics and Mechanics — In: *J.A. Wheeler, W.H. Zurek* Quantum Theory and Measurement — Princeton University Press, Princeton, 1984, 62–84.
2. *W. Heisenberg* The Physical Principles of the Quantum Theory — University of Chicago Press, Chicago, 1930.
3. *M. Born* Nobel Lecture — https://www.nobelprize.org/uploads/2018/06/born-lecture.pdf 1954.
4. *L.D. Landau, E.M. Lifshitz* Course of Theoretical Physics — Vol. 3: Quantum Mechanics, Pergamon Press, Oxford, 1977.
5. *D. Dürr, S. Teufel* Bohmian Mechanics: The Physics and Mathematics of Quantum Theory — Springer, Berlin, Heidelberg, 2009.
6. *D. Bohm* The Undivided Universe: An Ontological Interpretation of Quantum Theory — Routledge, London and New York, 1993.
7. *P.R. Holland* The quantum theory of motion: an account of the de Broglie—Bohm causal interpretation of quantum mechanics — Cambridge University Press, Cambridge, 1995.
8. *M. Kumar* Quantum: Einstein, Bohr and the Great Debate about the Nature of Reality — W.W. Norton & Co., London, 2010.
9. *D. Bohm* Quantum Theory — Dover Publications, New York, 1951.
10. *J. Hilgevoord, J. Uffink* The Uncertainty Principle — https://plato.stanford.edu/entries/qt-uncertainty 2001.
11. *M. Jammer* The Philosophy of Quantum Mechanics — John Wiley, New York, 1974.
12. *D.J. Griffiths* Introduction to Quantum Mechanics — Prentice Hall, Upper Saddle River, 1995.
13. *E.H. Kennard* Zur Quantenmechanik einfacher Bewegungstypen // Z. Physik — 1927.— V. 44 — P. 326–352.
14. *L.E. Ballentine* The statistical interpretation of quantum mechanics // Rev. Mod. Phys. — 1970. — V. 42 — P. 358–381.
15. *K.R. Popper* Quantum Mechanics without “The Observer” — In: Quantum theory and reality, Springer, New York, 1967.
16. *S. Kocsis, B. Braverman, S. Ravets, M.J. Stevens, R.P. Mirin, L.K. Shalm, A.M. Steinberg* Observing the average trajectories of single photons in a two-slit interferometer // Science — 2011. — V. 332 — P. 1170–1173.
17. *W.P. Schleich, M. Freyberger, M.S. Zubairy* Reconstruction of Bohm trajectories and wave functions from interferometric measurements // Phys. Rev. A — 2013. — V. 87 — 014102.
18. *A. Solé, X. Oriols, D. Marian, N. Zanghì* How Does Quantum Uncertainty Emerge from Deterministic Bohmian Mechanics? // Fluctuation and Noise Lett. — 2016. — V. 15(3) — 1640010.
19. *W. Heisenberg* Physics and Philosophy — Harper and Row, New York, 1958.
20. *W. Heisenberg* Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik // Z. Phys. — 1927. — V. 43 — P. 172–198.
21. *J.G. Muga, C.R. Leavens* Arrival time in quantum mechanics // Phys. Rep. — 2000. — V. 338(4) — P. 353–438.
22. *W. Pauli* General Principles of Quantum Mechanics — Springer, Berlin, 1980.
23. *B. Mielnik, G. Torres-Vega* Time Operator: the challenge persists // Concepts of Physics — 2005. — V. II(1–4) — P. 81–97.
24. *C.R. Leavens* On the “standard” quantum mechanical approach to times of arrival // Phys. Lett. A — 2002. — V. 303(2) — P. 154–165.
25. *I.L. Egusquiza, J.G. Muga, B. Navarro, A. Ruschhaupt* Comment on: “On the standard quantum-mechanical approach to times of arrival” // Phys. Lett. A — 2003. — V. 313(5) — P. 498–501.
26. *C.R. Leavens* Reply to Comment on: “On the ‘standard’ quantum-mechanical approach to times of arrival” // Phys. Lett. A — 2005. — V. 345(4) — P. 251–257.
27. *S. Das, M. Nöth* Times of arrival and gauge invariance // Proc. R. Soc. A — 2021. — V. 477(2250) — 20210101.
28. *S. Das, W. Struyve* Questioning the adequacy of certain quantum arrival-time distributions // Phys. Rev. A — 2021. — V. 104 — 042214.
29. *S. Das, D. Dürr* Arrival time distribution of spin-1/2 particles // Sci. Rep. — 2019. — V. 9 — 2242.
Pattern of the Month
May 2014
For Members of the MQG
Improv Echoed Hexagons
by Rossie Hutchinson Ann Arbor MQG Member
The Modern Quilt Guild's mission is to support and encourage the growth and development of modern quilting through art, education and community. www.modernquiltguild.com
Copyright © 2014 by Rossie Hutchinson. Distributed with permission by the Modern Quilt Guild. Some rights reserved. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This means that you may share the pattern, but you may not use the pattern for commercial purposes. In using this pattern and displaying work created from it, you must give appropriate credit. You may adapt the pattern, but if you remix, transform, or build upon this pattern, you must distribute your work under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. For more information please consult: http://creativecommons.org/licenses/by-nc-sa/4.0/
Improv Echoed Hexagons
Improvisational Process by Rossie Hutchinson
r0ssie.blogspot.com // Fresh Modern Quilts // Rossie Crafts // instagram r0ssie_fmq Copyright © 2014 by Rossie Hutchinson. Most rights reserved.
This is an improvisational pattern, which means that I will be providing basic guidelines for construction that I expect you to modify and adjust as you make your quilt. Please feel free to make things bigger, smaller, darker, lighter, fussier, more minimalist, and more YOU as you make your quilt!
FABRIC GUIDELINES FOR TWIN-SIZED QUILT (67" x 90")
Other materials:
CUTTING DIRECTIONS:
1. Sub-cutting your background fabric and putting it on your design wall. While a design wall will make this easier, you can make this quilt without one. Just lay your pieces out on a bed or floor as you work on your design and label them if they need to be put away.
1a. Cut two yards of the background fabric, to make a 44" x 72" piece. Press it. Cut this in half lengthwise to create two 22" x 72" pieces. The cuts should be square, but do not need to be perfectly square at this time. Place the pieces one under the other on your design wall.
1b. Repeat step 1a.
1c. Repeat step 1a again, except only place one of the pieces on the design wall. Your design wall now has five pieces of fabric on it and combined they are 72" wide and 110" long. As we proceed, the exterior dimensions will shrink.
1d. If you want to insert complementary fabric into the background of your quilt, do so now. To insert fabric like I did (in the upper right-hand corner of a background fabric strip), simply cut a piece from the background fabric the same width as your insert, attach the complementary fabric to the piece you cut, and then trim it to be the same height as the rest of the background. You will not want to join it back to the plain background fabric yet (wait until after step 3b).
1e. The rest of the fabric: You will be cutting the rest of the fabric into strips and hexagons. I recommend waiting to cut these so that you can decide if you want to vary the width of your strips and the size of your hexagons after you've made a few and placed them on your design wall.
2. Make your first echoed hexagon
2a. Cut out a hexagon from background fabric. A template is included on the last page of this pattern.
2b. Cut one 72" x 1.75" strip from your background fabric (from the long piece you didn't put on the design wall) or two 44" x 1.75" strips from your background fabric.
2c. Cut two 44" x 1.75" strips from your contrast fabric.
2d. Using a scant ¼" seam, attach strips to every other side of your hexagon as pictured below.
2e. Trim the edges of the strips as pictured below.
2f. Attach strips of your contrast fabric to the remaining three sides of your hexagon.
2g. Set the seams and then press the seams out.
2h. Trim the overhanging edges of the strips.
2i. If your seams were inconsistent or your strip wasn't cut accurately, you may have an uneven hexagon. You don't have to fix it if you don't want to (there's nothing wrong with a little bit of wonk, so long as you have strong seams!). If you do want to fix it, the easiest way is to use your rotary ruler to make the echo of the hexagon slimmer by lining up the seam with the 1" or 1 ¼" mark on your ruler and trimming the edges.
You should now have a hexagon with one echo, as pictured below.
2j. You can skip ahead to step 3 now or repeat steps 2d through 2i to add as many echoes to your hexagon as you desire. The last echo should be contrast fabric.
You will want to be consistent about which sides of the hexagon you add to first. In the photo below, I have consistently added the first three strips to the same sides. This makes a neat arrangement. You may also want to alternate it each time and experiment. Just be aware of what you're doing and how it looks and choose something that suits you!
3. Inserting your first echoed hexagon into background.
3a. Using your design wall, decide where you'd like to place your hexagon.
3b. Once you have decided where you will insert it, cut a rectangle of the background fabric from around the hexagon. In the picture below, I have marked with a dashed line the type of cut I recommend (vertical, through the background fabric). Please give yourself several inches of fabric between your hexagon and the edges of the fabric.
3c. In the example below, I have decided to place my hexagon in the background fabric that has been pieced with the complementary fabric.
3d. In the photo above, the blue lines show how you should cut into the rectangle so that you can insert your hexagon.
1. Place your hexagon on the rectangle of background fabric in the position and at the angle you would like. Any angle is fine, but please keep the hexagon several inches from the edges of the rectangle.
2. Now, you can begin to cut into that background fabric! Begin at the top of your hexagon (what would be 12 o'clock on an analog clock). Align the edge of your ruler with the edge of the hexagon and slice the fabric from one side of the rectangle to the other (this line is marked "1" above).
3. Pull the fabric above line "1" away so that you don't cut it in the next step. You need to keep the fabric you are removing in order (don't flip it or toss it aside!) so that you can sew it back together correctly: either place it carefully on your design wall, or mark it with a "1" and put it somewhere safe.
4. Now, proceed to the next cut. This cut is marked "2" on the photo above. It is cut in the same fashion as the earlier cut: align the edge of your ruler with the edge of the hexagon and slice the fabric from the corner of the hexagon to the edge of the rectangle. As before, pull the piece of fabric you just cut away from your ruler, labeling it and/or placing it on the design wall.
5. Proceed around the hexagon clockwise, making each new cut so that it begins at a corner of the hexagon and ends at the edge of the rectangle. There are special instructions for the last cut (marked 6 on the picture above).
6. Special instructions for the 6th cut: On the final cut, you need to add a 1/2" seam allowance to the background fabric. The easiest way to do this is to lay your ruler overlapping 1/2" into your hexagon, then carefully pull the hexagon away and trim the background fabric. If you look at the pictures below, you'll see that the ruler and the
background fabric are in the same position, but the hexagon has been pulled so that it will not be cut. DO NOT CUT THE HEXAGON.
7. You now have a hexagon of not quite symmetrical background fabric sitting under the echoed hexagon you made. Set this piece aside to start the next hexagon you make. (You'll want to trim it down first, but that can wait.)
3e. Now it is time to sew the background back around your echoed hexagon. Remember how we cut CLOCKWISE? We are going to be sewing COUNTER-CLOCKWISE.
1. Begin by taking piece 6 (the last one you cut) and attaching it to your hexagon using a scant ¼" seam. Set the seam and then press it out (away from the center of the hexagon).
2. Next, add piece 5. To minimize waste, use the following procedure to line up your pieces:
With your hexagon on the bottom, put the pieces you are joining right sides together, lining up the edges you need to sew. Slide the top fabric so that it crosses the lower fabric ¼" in from the edge you will sew. The black arrow in the picture below points to the spot that should be ¼" from the edge you will sew.
3. Continue around the hexagon, joining piece 4 and so on, counterclockwise, until your hexagon is safely sewn into its rectangle background. While the edges of your background may no longer be completely straight, I don't recommend trimming them yet. You may decide to insert another hexagon into this fabric, so wait until that's all done before squaring up the background.
4. Return the fabric to the design wall.
4. Repeat hexagon making and inserting as desired.
5. Squaring up and combining all the pieces of your quilt top.
5a. Once you've added all the hexagons you want, you need to square up all of your chunks and combine them. Begin by trimming off selvedges and uneven edges, removing as little as possible in order to keep your options open.
5b. Because of how we cut the background fabric in Step 1, the easiest way to assemble the quilt top is usually to reassemble the rows and then join the rows. If you added many hexagons to a particular section of a row, you may discover that it is an inch or more shorter than other sections of the same row. Because we started with extra width and height in the overall background, you should feel free to trim back taller sections; if you don't want to cut back the taller sections (or can't without slicing a hexagon), add extra pieces of background fabric to make the shorter section taller. Similarly, if you have too many seams meeting up as you join sections of the rows, inserting a 1" vertical strip of background fabric between the sections can sometimes save a lot of trouble.
5c. As the top is nearing completion, check that it is at least as big as you need it to be! I like my twin quilts to be at least 65" x 90". Add background fabric to the edges of the quilt if you need to make it larger.
6. Assemble quilt back.
7. Baste, quilt, label, and bind as desired.
Congratulations! You're done!
Please share your results and process with me by posting photos of your work in my flickr groups: "Rossie Crafts and Quilts" http://www.flickr.com/groups/rossie "Fresh Modern Quilts" http://www.flickr.com/groups/freshmodernquilts Or via Instagram using the hashtag #rossiecrafts
The possibilities are endless! Have fun!
HEXAGON TEMPLATES
The smaller hexagon has approximately 2.5" sides.
The larger hexagon has approximately 4" sides.
You do not need to use my templates, nor do any measurements need to be exact!
The Communists now possess a predominance of power in China, and they are setting up a "People's Government" to legalize and formalize their rule in the territory under the control of their People's Liberation Army. Local governments are being established step by step on a piecemeal basis, and two months ago the Communists issued an announcement which served notice to the Chinese public and to the world that the groundwork was being laid for an overall national government in Communist China. On June 20 in Peiping the People's Daily, official organ of the North China Bureau of the Chinese Communist Party, carried a prominent headline: "The Preparatory Committee of the New Political Consultative Conference is established in Peiping. It is preparing to convene a New Political Consultative Conference and establish a Democratic Coalition Government." Prior to this announcement, the literate public in China had known of the Communists' intention to establish a government by means of a Political Consultative Conference, but no one outside of the inner circle of the Communists and their closest political allies knew of the steps being taken in that direction. Since the brief flurry of publicity in June, furthermore, the curtain has dropped again on the political stage in Communist territory.
At present secret high-level discussions are taking place in Peiping, and the New Political Consultative Conference is expected to convene in the near future. According to one prevalent rumor in Peiping, the Communists hope to have a new national government ready for formal inauguration on October 10, anniversary of the 1911 Wuchang Uprising which led to the collapse of the Manchu Dynasty. Some political observers believe that the machinery of government cannot be assembled, greased and started by that date but that it will be running before the end of this year. In any case, the establishment of a formal national regime in Communist territory is imminent, and when the new government comes into existence the Chinese Communist Party will have transformed itself from a party leading a revolutionary movement into a party running the government controlling a major part of the nation. This government will lay claim to recognition as the Government of China, superseding the Nationalist Government now scattered in refugee centers in Canton, Chungking and Formosa.
On the eve of this major political development, I will describe to you in this and subsequent letters some of the background of government and politics in Communist China. My information, like that of all foreign observers in China, is fragmentary, but I believe that during the six months I have spent in Communist Peiping I have gathered information and impressions which are difficult to obtain elsewhere.
* * * *
In the United States, "government" is a comparatively restricted term. It is generally used to refer to the elected representative bodies and the bureaucracy which together formulate and implement policies and laws. In China, however, the term must be given a much broader interpretation. The right to formulate policies, the power to make and enforce decisions with the binding force of law, and even actual administration are not concentrated solely in the hands of the civil government. One dramatic illustration of this is the fact that the Chinese Communists already govern roughly half of China and yet do not have a central government. The Communists will have a central government within a short period of time, but it is safe to predict that this government in itself will not possess a monopoly of governing authority even in the territory under its jurisdiction. There are in China, both in Nationalist and Communist territory, three parallel lines of authority -- the government, the army and the party -- and each of these carries out functions of a governmental nature. This division of power has never been completely eliminated since 1911, because social revolution, foreign invasion and civil war have created internal chaos and have prevented the stability necessary for a civil government to monopolize power and rule peacefully over the whole country. Generally these three centers of authority have been merged by the overlapping of personnel and the centralization of leadership and control, but centrifugal forces have constantly operated to keep them separate to a certain degree, and all three have carried out governmental functions through separate organizations. Almost always the party has been supreme, however, and has tried to be a power unto itself controlling both the army and the government. 
The supremacy of the party is striking in Communist territory, and I will therefore describe the characteristics of the Chinese Communist Party before going on to the army and governmental structure which it controls.
According to a pamphlet entitled *Textbook for Communist Party Members*, (which is based largely on the "Report on Amendment of the Party Constitution" made on May 14, 1945, to the All-China Communist Party Congress by Liu Shao-ch'i, head of the Central Committee's Secretariat, expert on party organization, and major party theorist), "The (Chinese) Communist Party is organized from among workers, farmers and all laborers (including intellectuals) who are the most progressive, have the highest degree of consciousness and have determined to serve the welfare of the Chinese people and who furthermore wish to struggle for a New Democratic Society and a Communist society." According to this definition, therefore, the party has a limited class basis. The membership is restricted to workers, farmers, laborers and intellectuals. The term "intellectual" is one which can be flexibly interpreted, of course, and many of the top leaders of the party are non-proletarian in their class origin, but in theory the party's membership as it is officially defined has a class basis.
One whole chapter in the same pamphlet is titled "Party Discipline." The emphasis on discipline is not confined to this chapter alone, however, but runs as a theme throughout. When a new member is being initiated into formal party membership, for example, he (or she) must swear "...to obey the organization, to sacrifice myself, to execute orders, to observe discipline, to protect secrets...", and the pamphlet asserts that "... every party member must observe party discipline. If a person violates the discipline he will be punished by the party." This emphasis upon discipline is one of the distinguishing characteristics of the Chinese Communist Party. Some people have described membership in the party as similar to the life of a professional soldier. It is a career. Once a person joins the party, he is not only bound by oath to obey party decisions, but he may be assigned to any job in any locality. Not only does a party member live and work according to assignments made by party leaders, but he is entirely dependent upon the party for his livelihood. This dependence is almost complete because of the party's system of supporting members. A party member and his family are given every care and consideration in the form of food allotments, medical care and education, but salaries are nominal. As a result party members have no financial independence.
It is difficult to enter the party. Not only must a recruit (who must be 18 years of age or over) be introduced by an old member; in addition he must obtain a personal guarantee in which a regular member assumes responsibility before the party for the new member's reliability. After introduction a prospective member must go through a long period of "political", "ideological" and "organizational" preparation. Then he must undergo a strict investigation and examination. His qualifications are thoroughly discussed by several levels of party organizations, including the Cell, Branch and District levels, and must be approved by all of them. If he is found acceptable he is taken in as a probationary member. The probationary period varies, but at present in Manchuria it is three months for workers, agricultural laborers, poor farmers, city poor people and revolutionary soldiers, and six months for middle farmers, organizational employees, intellectuals, and so on. Only after passing the probationary testing period can a person take the oath and become a regular party member.
The effectiveness of this system in ensuring discipline and complete obedience and loyalty to the party is obvious to anyone who has observed the Chinese Communists in operation, for the party is a tightly-knit corps of carefully-selected, professional political careerists.
Discipline is one of the basic principles underlying the Chinese Communist Party. Another is the idea of leadership. "The Communist Party", states the pamphlet already cited, "has the ability to lead the action of all kinds of organizations (farmers' unions, labor unions, governments, armies)....the Communist Party is the highest command for the leadership of all organizations." Stripped to essentials, the Communists' claim is that because of their unique qualifications they compose a small group which has the right to lead the majority. This idea of the special right to rule possessed by a relatively small group is not new in China. The Confucian bureaucracy in pre-1911 China believed in themselves as a governing elite because of their scholastic qualifications, and after its rise to power the Kuomintang justified its monopoly of political power by the theory of "political tutelage" outlined by Sun Yat-sen. The Communists claim to have a special understanding of social and historical forces. The Communists specifically reject the theory of "political tutelage", but the principle of leadership as they describe it seems to be very similar. There are some differences in detail. For example, the Communists' theory of leadership does not by definition exclude the existence of other political parties in the government, as did the theory of "political tutelage", so the Communists are able to take into their fold various minor parties and political groups. On the other hand, the period of "political tutelage" as defined in Kuomintang theory was a limited period of preparation for eventual political competition in the period of constitutional democracy, and the Kuomintang finally did permit the open formation of a few minor parties (although this did not alter its real monopoly of power), whereas the Communists' theory of leadership is permanent and without limitation even in theory. 
In any case, the idea of leadership is an essential part of the Chinese Communists' ideology, and according to the ideology the party should be "the highest command for leadership of all organizations", both governmental and non-governmental.
The organization of the Chinese Communist Party itself is based on what is called the theory of "democratic centralization", or "centralization on a democratic base and democracy under centralized direction." One must understand the meaning of this phrase to understand the theoretical basis for Communist organizational forms. The following quotation (also from the pamphlet previously quoted) elaborates the theory. "Why say the party's system of centralization is centralization on a democratic base? For example, the organization for leadership in a Branch, the Branch Committee, is elected by the mass of party members; Branch decisions are passed after discussion by the Branch Congress (or Assembly); procedures adopted by the Branch come from the masses; and because of this the power of leadership of the Branch Committee is given to it by all the party members in the Branch. It (the Branch Committee) has the power to be responsible for representing the party masses in carrying out centralized leadership, and it manages the work of the Branch. Until decisions are altered or until the Branch Committee is reelected by all the comrades in the Branch, by the Branch Congress, all comrades in the Branch must obey the leadership of the Branch Committee, because our party is established according to this sort of principle; that is, 'the individual obeys the organization', 'the minority obeys the majority', 'the lower echelons obey the higher echelons', 'Branch organizations all obey the Central Committee'; that is to say our party's centralization is on a democratic base; it doesn't depart from democracy and isn't dictatorship of individualism."
"Why say the party's democratic system is under centralized direction? For example, the Branch Congress is convoked by the Branch Committee; the Branch Committee is convoked by the Branch Secretary; or the District Committee directs the convoking of the Branch Committee or the Branch Congress. Every congress in the party has leadership; in the congresses' discussion can be carried on, opinions can be expressed to the fullest extent, and criticism can be made; this kind of democratic life is carried on with leadership. When a Branch Committee is elected this is presented to the next higher party committee for approval. Branch decisions and work must also be presented to the District Committee for instructions and can only be put into effect after instructions have been received from the District Committee. The work and decisions of all echelons of leadership organizations must first be considered by the leadership organizations themselves and then given to the congresses for discussion. Every party act must comply with the united party constitution and common discipline. The party's democracy isn't of an anarchistic kind but is carried out with leadership."
It is clear that extreme centralization of authority is one of the most fundamental characteristics of the Chinese Communist Party.
All appointments, every decision and every act made at any level in the party must have the approval of higher authorities - which means, of course, that the power of decision in important matters is concentrated at the very top. Democracy, as the term is used by the Chinese Communists, seems to mean mass participation in party activities and party life rather than the right of ordinary party members to have direct influence on the determination of policy. Party members may discuss questions and make suggestions and criticisms, and this in fact seems to be encouraged, but once a decision on policy is made by the leaders a member merely has, as I have heard one Chinese express it, "the freedom to obey the decision."
The Communists have been successful, however, in mobilizing mass participation in organizations, both party and non-party, under their control, and for many of the persons brought for the first time into active political life, participation, even without any great influence or control over policies, is a new experience. The latest estimates of party membership made by party leaders themselves place the total membership at about three million. In view of the character of party membership there is no doubt that these three million are "active" members. The Communists have achieved a much wider base for their party, in terms of participation in party work and activities, than the Kuomintang possesses (although in its earlier years the Kuomintang mobilized many more active supporters than it has at present), but the broad mass of Communist Party members work under a system of highly centralized control by which almost all aspects of their work are defined by orders and instructions from above.
The basic unit of organization in the Chinese Communist Party is the Branch, which may be organized in any factory, mine, village or organization where there are over three party members. Every Chinese Communist must belong to one of these branches. Each Branch elects a Branch Committee, the chief of which is the Secretary of the Branch, and if the Branch is comparatively large it organizes sections for organization and propaganda. Although the Branch is considered the party's basic organizational unit, its members are subdivided into smaller groups, or Cells, each of which has a leader.
Above the branches the party organization consists of a hierarchy of committees, each with a head bearing the title of Secretary, encompassing progressively larger geographical areas. There are committees at the following levels: District, Hsien or Municipality, Region (optional), and Province. At the very top is the Central Committee. I am not clear on how the intermediate levels of committees between the District Committees and the Central Committee are selected, but if the original Soviet system of organization, upon which the party structure is based, is followed consistently each level is elected by the level immediately below it in the hierarchy. This may or may not be the case, however. In the establishment of the New Democratic Youth Corps, which is an affiliate of the party (similar to the Komsomols in the USSR), all the regional working committees are appointed, in some cases by the Central Committee of the Corps and in other cases by the committees one level above those being selected. Whether or not the various levels of regional committees in the party itself are elected from below or appointed from above, however, they must be approved by higher authorities.
The Central Committee of the Communist Party, the Chairman of which is Mao Tse-tung, is the pinnacle of the nation-wide organizational pyramid, and it possesses unlimited authority to make decisions binding on the entire party. This committee, which has 44 regular members and 33 alternates, is elected by the All-China Communist Party Congress which meets at irregular intervals. (There have been seven of these congresses since the party was founded, and the last one was in 1945.) The All-China Congress is theoretically the supreme authority in the party, but in the extended intervals between congresses this authority is delegated to the Central Committee. The Central Committee, in turn, elects a Standing Committee, which functions in its name between plenary sessions (the most recent of which was January of this year). The Central Committee also selects a Political Bureau, a special policy-formulating group and a secretariat. In addition, the Central Committee appoints branches, called Central Bureaux, for each of the major regional areas in the country, as for example, the North China Bureau. The secretaries of each of the bureaux are members of the Central Committee, and acting in its name they are the most important party leaders in the various regional areas, outranking the local party committees in those areas.
The 77 members of the Central Committee are the most important men in Communist China. The large majority of them are old-time party leaders who were prominent in the establishment of the party or in the pre-Long March, Kiangsi period of the party's history, but a few relative newcomers have made the grade. These 77 men hold key positions, outside of the Central Committee itself, in almost all politically important organizations in Communist territory, so that the influence of the Central Committee is exercised directly through them as well as through Central Committee orders. All the top military posts are held by members of the Central Committee; these include the commanders of the People's Liberation Army (the present name of the former Red Army), the chiefs of the most important Military Control Commissions in major cities, the commanders of all important military districts, and so on. Central Committee members fill the highest governmental posts as well, including the chairmanships of the North China People's Government and the Central Plains People's Government (the only two regional governments formally established to date which cover an entire major region of the country). In addition the national heads of important group organizations, which have just established nation-wide organizations during the past few months, are members of the party's Central Committee. These include the All-China General Labor Union, the All-China Youth Federation, the All-China Women's Federation, and so on.
The Chinese Communist Party, therefore, is a highly centralized organization of professional political workers who have delegated the power to make decisions to a key group of leaders. The policies adopted by these leaders have the force of law in Communist territory. Some of these policies apply only to party members, but others are binding on the population as a whole. Policies are implemented by the disciplined mass membership of the party together with non-party workers cooperating with them.
In the past, and at present, the party has been far more important than existing governmental organizations in the territory under Communist control. Government bodies have assisted in the implementation of policy but rarely in its formulation. It can be expected that as the governmental structure in Communist China becomes increasingly formal, permanent and uniform, the Communist party will work through the established government administration to a greater degree than in the past, but there is no reason to believe that the party will relegate itself to a subordinate position or abandon its "theory of leadership." As long as the Communists maintain power the party will undoubtedly continue to dominate the political scene, because it is founded on a theory of primary leadership and because it controls the power to assert its primacy. In this sort of framework the government does not have an independent, continuous existence; it is not an organization which several parties may compete for. The government is an organ of the party, established by it and identified with it. In practical terms, this means that in a "coalition government" set up by the Communists, minor parties will be allowed to "participate" -- but under Communist "leadership"-- and important decisions will in fact be reserved to the Communist Central Committee, as long as it possesses its present power, regardless of the form and appearance of the governmental structure established.
Sincerely yours,
A. Doak Barnett
Received New York 9/6/49.
Risks For the Long Run: A Potential Resolution of Asset Pricing Puzzles
RAVI BANSAL and AMIR YARON*
ABSTRACT
We model consumption and dividend growth rates as containing (i) a small long-run predictable component and (ii) fluctuating economic uncertainty (consumption volatility). These dynamics, for which we provide empirical support, in conjunction with Epstein and Zin’s (1989) preferences, can explain key asset market phenomena. In our economy, financial markets dislike economic uncertainty and better long-run growth prospects raise equity prices. The model can justify the equity premium, the risk-free rate, and the volatility of the market return, risk-free rate, and the price-dividend ratio. As in the data, dividend yields predict returns and the volatility of returns is time-varying.
* Bansal is from the Fuqua School of Business, Duke University. Yaron is from The Wharton School, University of Pennsylvania. We thank Tim Bollerslev, Michael Brandt, John Campbell, John Cochrane, Bob Hall, John Heaton, Tom Sargent, George Tauchen, the Editor, an anonymous referee, and seminar participants at Berkeley (Haas), CIRANO in Montreal, Duke University, Indiana University, Minnesota (Carlson), NBER Summer Institute, NYU, Princeton, SED, Stanford, Stanford (GSB), Tel-Aviv University, UBC (Commerce), University of Chicago, UCLA, and Wharton for helpful comments. We particularly thank Andy Abel and Lars Hansen for encouragement and detailed comments. All errors are our own. This work has benefited from the financial support of the NSF, CIBER at Fuqua, and the Rodney White Center at Wharton.
Several key aspects of asset market data pose a serious challenge to economic models.\(^1\) It is difficult to justify the 6% equity premium and the low risk-free rate (see Mehra and Prescott (1985), Weil (1989), and Hansen and Jagannathan (1991)). The literature on variance bounds highlights the difficulty in justifying the market volatility of 19% per annum (see Shiller (1981) and LeRoy and Porter (1981)). The conditional variance of the market return, as shown in Bollerslev, Engle, and Wooldridge (1988), fluctuates across time and is very persistent. Price-dividend ratios seem to predict long-horizon equity returns (see Campbell and Shiller (1988)). In addition, as documented in this paper, consumption volatility and future price-dividend ratios are significantly negatively correlated – a rise in consumption volatility lowers asset prices.
We present a model that helps explain the above features of asset market data. There are two main ingredients in the model. First, we rely on the standard Epstein and Zin (1989) preferences, which allow for a separation between the intertemporal elasticity of substitution (IES) and risk aversion, and consequently permit both parameters to be simultaneously larger than 1. Second, we model consumption and dividend growth rates as containing (i) a small persistent expected growth rate component and (ii) fluctuating volatility, which captures time-varying economic uncertainty. We show that this specification for consumption and dividends is consistent with observed annual consumption and dividend data. In our economy, when the IES is larger than 1, agents demand large equity risk premia because they fear that a reduction in economic growth prospects or a rise in economic uncertainty will lower asset prices. Our results show that risks related to varying growth prospects and fluctuating economic uncertainty can quantitatively justify many of the observed features of asset market data.
Why is persistence in the growth prospects important? In a partial equilibrium model, Barsky and DeLong (1993) and Bansal and Lundblad (2002) show that persistence in expected dividend growth rates is an important source of volatility in price-dividend ratios. In our equilibrium model, the degree of persistence in expected growth rate news affects the volatility of the price-dividend ratio and also determines the risk premium on the asset. News regarding future expected growth rates leads to large reactions in the price-dividend ratio and the ex-post equity return; these reactions positively co-vary with the marginal rate of substitution of the representative agent, and hence lead to large equity risk premia. The dividend elasticity of asset prices and the risk premia on assets rise as the degree of permanence of expected dividend growth rates increases. We formalize this intuition in Section I with a simple version of the model that incorporates only fluctuations in growth prospects.
To allow for time-varying risk premia, we incorporate changes in the conditional volatility of future growth rates. Fluctuating economic uncertainty (conditional volatility of consumption) directly affects price-dividend ratios, and a rise in economic uncertainty leads to a fall in asset prices. In our model, shocks to consumption volatility carry a positive risk premium. The consumption volatility channel is important for capturing the volatility feedback effect; that is, return news and news about return volatility are negatively correlated. About half of the volatility of price-dividend ratios in the model can be attributed to variation in expected growth rates, and the remaining can be attributed to variation in economic uncertainty. This is distinct from models where growth rates are i.i.d., and consequently, all the variation in price-dividend ratio is attributed to the changing cost of capital.
Our specification for growth rates emphasizes persistent movements in expected growth rates and fluctuations in economic uncertainty. For these channels to have a significant quantitative impact on the risk premium and volatility of asset prices, the persistence in expected growth rate has to be quite large, close to 0.98.\(^2\) A pertinent question is whether this is consistent with growth rate data, as observed autocorrelations in realized growth rates of consumption and dividends are small. Shephard and Harvey (1990) show that in finite samples, it is very difficult to distinguish between a purely \(i.i.d.\) process and one which incorporates a small persistent component. While it is hard to distinguish econometrically between the two alternative processes, the asset pricing implications across them are very different. We show that our specification for the consumption and dividend growth rates, which incorporates the persistent component, is consistent with the growth rate data and helps justify several puzzling aspects of asset market data.
We provide direct empirical evidence for fluctuating consumption volatility, which motivates our time-varying economic uncertainty channel. The variance ratios of realized consumption volatility increase with the horizon, out to 10 years. If the residuals of consumption growth were \(i.i.d.\), then the variance ratio of the absolute value of these residuals would be flat across different horizons. Evidence presented below and explored further in Bansal, Khatchatrian, and Yaron (2002) shows that realized consumption volatility predicts and is predicted by the price-dividend ratio. This again corroborates the view that consumption volatility is time-varying.
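The variance-ratio diagnostic described above is easy to implement. The sketch below is ours (the function name and the residual construction are illustrative, not the paper's code): for \(i.i.d.\) residuals the variance ratio of absolute residuals stays near 1 at every horizon, whereas persistent volatility would make it rise with the horizon.

```python
import numpy as np

def variance_ratio(x, q):
    """Variance of q-period sums of demeaned x, relative to q times the
    one-period variance. Approximately 1 at all horizons for i.i.d. data;
    rising in q when x is positively autocorrelated (persistent volatility)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x) // q
    sums = x[:n * q].reshape(n, q).sum(axis=1)
    return sums.var() / (q * x.var())

# Illustrative check on i.i.d. consumption-growth residuals.
rng = np.random.default_rng(0)
iid_resid = rng.normal(size=50_000)
vr = variance_ratio(np.abs(iid_resid), 10)   # close to 1 for i.i.d. data
```

Applied to realized consumption-growth residuals, a variance ratio of absolute residuals that climbs over multi-year horizons is evidence of time-varying volatility.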
In terms of preferences, our main results are based on a risk aversion of 10 and an IES of 1.5. There is considerable debate about what are reasonable magnitudes for these parameters. Mehra and Prescott (1985) argue that a risk aversion of 10 and below seems reasonable. Our value for the IES is consistent with the findings of Hansen and Singleton (1982) and many other authors. Moreover, as established below, an IES greater than 1 is critical for capturing the observed negative correlation between consumption volatility and price-dividend ratios. Further, we show that the presence of fluctuating consumption volatility leads to a serious downward bias in the estimates for the IES using the regression approach pursued in Hall (1988). This bias may help interpret Hall’s small estimates of the IES.
The remainder of the paper is organized as follows. In Section I we formalize this intuition and present the economics behind our model. The data and the model’s quantitative implications are described in Section II. The final section provides concluding comments.
I. An Economic Model for Asset Markets
Consider a representative agent with the Epstein and Zin (1989) and Weil (1989) recursive preferences. For these preferences, Epstein and Zin (1989) show that the asset pricing restrictions for gross return $R_{i,t+1}$ satisfy
$$E_t[\delta^\theta G_{t+1}^{-\frac{\theta}{\psi}} R_{a,t+1}^{-(1-\theta)} R_{i,t+1}] = 1, \tag{1}$$
where $G_{t+1}$ is the aggregate gross growth rate of consumption and $R_{a,t+1}$ is the gross return on an asset that delivers aggregate consumption as its dividends each period. The parameter $0 < \delta < 1$ is the time discount factor. The parameter $\theta \equiv \frac{1-\gamma}{1-\frac{1}{\psi}}$, with $\gamma \geq 0$ being the risk-aversion parameter and $\psi \geq 0$ the intertemporal elasticity of substitution parameter. The sign of $\theta$ is determined by the magnitudes of the risk aversion and the elasticity of substitution.\(^3\)
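For concreteness, using the Epstein and Zin definition $\theta \equiv \frac{1-\gamma}{1-\frac{1}{\psi}}$, the preference parameters adopted later in the paper ($\gamma = 10$, $\psi = 1.5$) imply

$$\theta = \frac{1-10}{1-\frac{1}{1.5}} = \frac{-9}{1/3} = -27,$$

so $\theta$ is negative whenever $\gamma > 1$ and $\psi > 1$, and $\theta = 1$ exactly when $\gamma = \frac{1}{\psi}$, the power utility case.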
We distinguish between the *unobservable* return on a claim to aggregate consumption, $R_{a,t+1}$, and the *observable* return on the market portfolio $R_{m,t+1}$; the latter is the return on the aggregate dividend claim. As in Campbell (1996), we model aggregate consumption and aggregate dividends as two separate processes; the agent is implicitly assumed to have access to labor income.
Although we solve our model numerically, we demonstrate the mechanisms working in our model via approximate analytical solutions. To derive these solutions for the model, we use the standard approximations utilized in Campbell and Shiller (1988),
$$r_{a,t+1} = \kappa_0 + \kappa_1 z_{t+1} - z_t + g_{t+1}, \tag{2}$$
where lowercase letters refer to logs, so that $r_{a,t+1} = \log(R_{a,t+1})$ is the continuous return, $z_t = \log(P_t/C_t)$ is the log price-consumption ratio, and $\kappa_0$ and $\kappa_1$ are approximating constants that both depend only on the average level of $z$. Analogously, $r_{m,t+1}$ and $z_{m,t}$ correspond to the market return and its log price-dividend ratio.
The logarithm of the Intertemporal Marginal Rate of Substitution (IMRS) is
$$m_{t+1} = \theta \log \delta - \frac{\theta}{\psi} g_{t+1} + (\theta - 1)r_{a,t+1}. \tag{3}$$
It follows that the innovation in $m_{t+1}$ is driven by the innovations in $g_{t+1}$ and $r_{a,t+1}$. Covariation with the innovation in $m_{t+1}$ determines the risk premium for any asset. When $\theta$ equals one, the above IMRS collapses to the usual case of power utility. To present the intuition of our model in a simple manner, we first discuss the case (Case I) in which there are fluctuations only in the expected growth rates. Subsequently, we present the complete model (Case II), which also includes fluctuating economic uncertainty.
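As a consistency check on the power utility claim above, setting $\theta = 1$ (which requires $\gamma = \frac{1}{\psi}$) in equation (3) gives

$$m_{t+1} = \log\delta - \frac{1}{\psi}\,g_{t+1} = \log\delta - \gamma\,g_{t+1},$$

which is the log IMRS of time-separable power utility with risk aversion $\gamma$: the continuation-value return $r_{a,t+1}$ drops out entirely.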
A. Case I: Fluctuating Expected Growth Rates
We first solve for the consumption return $r_{a,t+1}$, as this determines the pricing kernel and consequently risk premia on the market portfolio, $r_{m,t+1}$, as well as all other assets. To do so we first specify the dynamics for consumption and dividend growth rates. We model consumption and dividend growth rates, $g_{t+1}$ and $g_{d,t+1}$, respectively, as containing a small persistent predictable component $x_t$, which determines the conditional expectation of consumption growth,
\begin{align*}
x_{t+1} &= \rho x_t + \varphi_e \sigma e_{t+1} \\
g_{t+1} &= \mu + x_t + \sigma \eta_{t+1} \\
g_{d,t+1} &= \mu_d + \phi x_t + \varphi_d \sigma u_{t+1} \\
e_{t+1}, u_{t+1}, \eta_{t+1} &\sim N.i.i.d.(0, 1), \tag{4}
\end{align*}
with the three shocks, $e_{t+1}$, $u_{t+1}$, and $\eta_{t+1}$ being mutually independent.\(^5\) Two additional parameters, $\phi > 1$ and $\varphi_d > 1$, allow us to calibrate the overall volatility of dividends (which in the data is significantly larger than that of consumption) and its correlation with consumption. The parameter $\phi$, as in Abel (1999), can be interpreted as the leverage ratio on expected consumption growth.\(^6\) It is straightforward to allow the three shocks to be correlated; however, to maintain parsimony in the number of parameters, we have assumed they are independent.
The parameter $\rho$ determines the persistence of the expected growth rate process. First, note that when $\varphi_e = 0$, the processes $g_{t+1}$ and $g_{d,t+1}$ are $i.i.d.$ Second, if $e_{t+1} = \eta_{t+1}$, the process for consumption is the ARMA(1,1) used in Bansal and Yaron (2000). Additionally, if $\varphi_e = \rho$, then consumption growth is an AR(1) process, as in Mehra and Prescott (1985).
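The system for $x_{t+1}$, $g_{t+1}$, and $g_{d,t+1}$ above can be simulated directly. The sketch below is ours; the parameter magnitudes (persistence near 0.98, a small $\varphi_e$, leveraged dividends) follow the discussion in the text but should be read as illustrative values, not the paper's calibration.

```python
import numpy as np

# Illustrative monthly parameter magnitudes (assumptions, not the paper's table).
rho, phi_e, sigma = 0.979, 0.044, 0.0078   # persistence, x-shock scale, base volatility
mu, mu_d = 0.0015, 0.0015                  # mean consumption / dividend growth
phi, phi_d = 3.0, 4.5                      # dividend leverage, dividend-shock scale

def simulate(T, seed=0):
    """Simulate the expected-growth, consumption, and dividend processes."""
    rng = np.random.default_rng(seed)
    e, u, eta = rng.normal(size=(3, T))    # mutually independent N(0,1) shocks
    x = np.zeros(T + 1)
    g = np.empty(T)
    g_d = np.empty(T)
    for t in range(T):
        g[t] = mu + x[t] + sigma * eta[t]
        g_d[t] = mu_d + phi * x[t] + phi_d * sigma * u[t]
        x[t + 1] = rho * x[t] + phi_e * sigma * e[t]
    return x[:-1], g, g_d

x, g, g_d = simulate(120_000)
# The predictable component is deliberately small: x contributes only a few
# percent of the variance of g, so realized growth looks nearly i.i.d.
share = x.var() / g.var()
```

This makes the identification problem concrete: the long-run component is nearly invisible in short samples of realized growth, yet, as the next subsection shows, it dominates the asset-pricing implications.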
Since $g$ and $g_d$ are exogenous processes, a solution for the log price-consumption ratio $z_t$ and the log price-dividend ratio $z_{m,t}$ leads to a complete characterization of the returns $r_{a,t+1}$ and $r_{m,t+1}$ (using equation (2)). The relevant state variable for deriving the solution for $z_t$ and $z_{m,t}$ is the expected growth rate of consumption $x_t$. Exploiting the Euler equation (1), the solution for the log price-consumption ratio $z_t$ has the form $z_t = A_0 + A_1 x_t$. An analogous expression holds for the log price-dividend ratio $z_{m,t}$. Details of both derivations are provided in the appendix.
The solution coefficients for the effect of expected growth rate $x_t$ on the price-consumption ratio, $A_1$, and the price-dividend ratio, $A_{1,m}$, respectively, are
$$A_1 = \frac{1 - \frac{1}{\psi}}{1 - \kappa_1 \rho} \quad A_{1,m} = \frac{\phi - \frac{1}{\psi}}{1 - \kappa_{1,m} \rho}. \quad (5)$$
It immediately follows that $A_1$ is positive if the IES, $\psi$, is greater than one. In this case the intertemporal substitution effect dominates the wealth effect. In response to higher expected growth (higher expected rates of return), agents buy more assets, and consequently the wealth-to-consumption ratio rises. In the standard power utility model, the need to have risk aversion larger than 1 also implies that $\psi < 1$, and hence $A_1$ is negative. Consequently, the wealth effect dominates the substitution effect. In addition, note that $A_{1,m} > A_1$ when $\phi > 1$; consequently, expected growth rate news leads to a larger reaction in the price of the dividend claim than in the price of the consumption claim.
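To see numerically how strongly persistence levers the price response in equation (5), take $\psi = 1.5$ and a log-linearization constant $\kappa_1 \approx 0.997$ (an assumed value for illustration; in the model $\kappa_1$ depends on the average price-consumption ratio):

$$A_1\big|_{\rho=0} = 1-\tfrac{1}{\psi} = \tfrac{1}{3} \approx 0.33, \qquad A_1\big|_{\rho=0.979} = \frac{1/3}{1-0.997\times 0.979} \approx 13.9,$$

roughly a forty-fold increase in the sensitivity of the price-consumption ratio to expected growth news.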
Substituting the equilibrium return for $r_{a,t+1}$ into the IMRS, it is straightforward to show that the innovation to the pricing kernel is (see equation (A10) in the appendix)
$$m_{t+1} - E_t(m_{t+1}) = [-\frac{\theta}{\psi} + \theta - 1]\sigma \eta_{t+1} - (1 - \theta)[\kappa_1(1 - \frac{1}{\psi})\frac{\varphi_e}{1 - \kappa_1 \rho}]\sigma e_{t+1}$$
$$= \lambda_{m,\eta}\sigma \eta_{t+1} - \lambda_{m,e}\sigma e_{t+1}. \quad (6)$$
The expressions $\lambda_{m,\eta}$ and $\lambda_{m,e}$ capture the pricing kernel's exposure to the independent consumption shock $\eta_{t+1}$ and to the expected growth rate shock $e_{t+1}$, respectively. The key observation is that the exposure to expected growth rate shocks $\lambda_{m,e}$ rises as the persistence parameter $\rho$ rises. The conditional volatility of the pricing kernel is constant, as all risk sources have constant conditional variances.
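The comparative static in $\rho$ can be checked directly; the following sketch assumes illustrative values $\kappa_1 = 0.997$ and $\varphi_e = 0.044$ (the latter matches the calibration reported later in Section II):

```python
gamma, psi = 10.0, 1.5
theta = (1 - gamma) / (1 - 1/psi)     # Epstein-Zin composite parameter; negative here
kappa1, phi_e = 0.997, 0.044          # assumed illustrative values

def lam_me(rho):
    # price of expected growth rate risk, from equation (6)
    return (1 - theta) * kappa1 * (1 - 1/psi) * phi_e / (1 - kappa1 * rho)

lam_lo_persistence = lam_me(0.90)
lam_hi_persistence = lam_me(0.979)    # higher rho raises the exposure
```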
As asset returns and the pricing kernel in this model economy are conditionally log-normal, the conditional risk premium on any asset $i$ is $E_t[r_{i,t+1} - r_{f,t}] = -\text{cov}_t(m_{t+1}, r_{i,t+1}) - 0.5\text{var}_t(r_{i,t+1})$. Given the solutions for $A_1$ and $A_{1,m}$, it is straightforward to derive the equity premium on the market portfolio (see Section A.4 in the appendix),
$$E(r_{m,t+1} - r_{f,t}) = \beta_{m,e}\lambda_{m,e}\sigma^2 - 0.5Var(r_{m,t+1}), \quad (7)$$
where $\beta_{m,e} \equiv [\kappa_{1,m}(\phi - \frac{1}{\psi})\frac{\varphi_e}{1-\kappa_{1,m}\rho}]$ and $Var_t(r_{m,t+1}) = [\beta^2_{m,e} + \varphi^2_d]\sigma^2$. The exposure of the market return to expected growth rate news is $\beta_{m,e}$, and the price of expected growth risk is determined by $\lambda_{m,e}$. The expressions for these parameters reveal that a rise in $\rho$ increases both $\beta_{m,e}$ and $\lambda_{m,e}$. Consequently, the risk premium on the asset also increases with $\rho$. Similarly, the volatility of the market return also increases with $\rho$ (see equation (A22) in the appendix).
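A back-of-the-envelope evaluation of equation (7) can be sketched using the Section II.A calibration ($\rho = 0.979$, $\varphi_e = 0.044$, $\sigma = 0.0078$, $\varphi_d\sigma = 0.0351$) together with assumed linearization constants; the resulting annualized premium is of the order reported in Table II:

```python
# Evaluate the equity premium in equation (7). kappa_1, kappa_{1,m} are assumed;
# the growth-rate parameters follow the Section II.A calibration.
gamma, psi, phi = 10.0, 1.5, 3.0
theta = (1 - gamma) / (1 - 1/psi)
kappa1 = kappa1m = 0.997
rho, phi_e, sigma = 0.979, 0.044, 0.0078
phi_d = 0.0351 / sigma                         # implied by the dividend innovation volatility

lam_me  = (1 - theta) * kappa1 * (1 - 1/psi) * phi_e / (1 - kappa1 * rho)
beta_me = kappa1m * (phi - 1/psi) * phi_e / (1 - kappa1m * rho)
var_rm  = (beta_me**2 + phi_d**2) * sigma**2   # conditional return variance
premium_annual_pct = 1200 * (beta_me * lam_me * sigma**2 - 0.5 * var_rm)
```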
Because of our assumption of a constant $\sigma$, the conditional risk premium on the market portfolio in (7) is constant, and so is its conditional volatility. Hence, the ratio of the two, namely the Sharpe ratio, is also constant. In order to address issues that pertain to time-varying risk premia and predictability of risk premia, we augment our model in the next section and introduce time-varying economic uncertainty.
B. Case II: Incorporating Fluctuating Economic Uncertainty
We model fluctuating economic uncertainty as time-varying volatility of consumption growth. The dynamics for the system (4) that incorporate stochastic volatility are:
\[
\begin{aligned}
x_{t+1} &= \rho x_t + \varphi_e \sigma_t e_{t+1} \\
g_{t+1} &= \mu + x_t + \sigma_t \eta_{t+1} \\
g_{d,t+1} &= \mu_d + \phi x_t + \varphi_d \sigma_t u_{t+1} \\
\sigma^2_{t+1} &= \sigma^2 + \nu_1 (\sigma^2_t - \sigma^2) + \sigma_w w_{t+1}
\end{aligned} \qquad (8)
\]
where \( \sigma^2_{t+1} \) represents the time-varying economic uncertainty incorporated in the consumption growth rate and \( \sigma^2 \) is its unconditional mean. To maintain parsimony, we assume that the shocks are uncorrelated, and allow for only one source of economic uncertainty to affect consumption and dividends.
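A minimal simulation of the system (8) can be sketched as follows; $\mu$, $\mu_d$, $\nu_1$, and $\sigma_w$ are assumed illustrative values, since the text reports the calibrated values only below its tables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Growth-rate parameters follow Section II.A; mu, mu_d, nu_1, sigma_w are assumed.
mu = mu_d = 0.0015
rho, phi_e, phi, phi_d = 0.979, 0.044, 3.0, 4.5
sigma2_bar, nu1, sigma_w = 0.0078**2, 0.987, 0.23e-5

T = 840                                       # 70 years of monthly observations
g, gd = np.empty(T), np.empty(T)
x, s2 = 0.0, sigma2_bar
for t in range(T):
    st = np.sqrt(max(s2, 0.0))                # truncate variance at zero if a shock overshoots
    e, eta, u, w = rng.standard_normal(4)
    g[t] = mu + x + st * eta                  # consumption growth
    gd[t] = mu_d + phi * x + phi_d * st * u   # dividend growth
    x = rho * x + phi_e * st * e              # persistent expected-growth state
    s2 = sigma2_bar + nu1 * (s2 - sigma2_bar) + sigma_w * w

ac1 = np.corrcoef(g[:-1], g[1:])[0, 1]        # monthly first-order autocorrelation of g
```

Consistent with the small predictable component, the monthly autocorrelation of simulated consumption growth is close to zero.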
The relevant state variables in solving for the equilibrium price-consumption (and price-dividend) ratio are now \( x_t \) and \( \sigma^2_t \). Thus, the approximate solution for the price-consumption ratio is \( z_t = A_0 + A_1 x_t + A_2 \sigma^2_t \). The solution for \( A_1 \) is unchanged (equation (5)). The solution coefficient \( A_2 \) for measuring the sensitivity of price-consumption ratios to volatility fluctuations is
\[
A_2 = \frac{0.5\left[\left(\theta - \frac{\theta}{\psi}\right)^2 + (\theta A_1 \kappa_1 \varphi_e)^2\right]}{\theta(1 - \kappa_1 \nu_1)}. \qquad (9)
\]
An analogous coefficient for the price-dividend ratio, \( A_{2,m} \), is derived in the appendix and has a similar form. Two features of this model specification are noteworthy. First, if the IES and risk aversion are larger than 1, then \( \theta \) is negative, and a rise in volatility lowers the price-consumption ratio. Similarly, an increase in economic uncertainty raises risk premia
and lowers the market price-dividend ratio. This highlights that an IES larger than 1 is critical for capturing the negative correlation between price-dividend ratios and consumption volatility. Second, an increase in the permanence of volatility shocks, that is $\nu_1$, magnifies the effects of volatility shocks on valuation ratios, as changes in economic uncertainty are perceived as being long-lasting.
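The sign claim can be verified numerically; in this sketch $\kappa_1$ and $\nu_1$ are assumed illustrative values:

```python
# With gamma > 1 and IES > 1, theta < 0 and the volatility loading A2 is negative:
# higher economic uncertainty lowers the price-consumption ratio.
gamma, psi = 10.0, 1.5
theta = (1 - gamma) / (1 - 1/psi)
kappa1, rho, phi_e, nu1 = 0.997, 0.979, 0.044, 0.987   # assumed values

A1 = (1 - 1/psi) / (1 - kappa1 * rho)
A2 = 0.5 * ((theta - theta/psi)**2 + (theta * A1 * kappa1 * phi_e)**2) \
     / (theta * (1 - kappa1 * nu1))
```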
As the price-consumption ratio is affected by volatility shocks, so is the return $r_{a,t+1}$. Consequently, the pricing kernel (IMRS) is also affected by volatility shocks. Specifically, the innovation in the pricing kernel is now:
$$m_{t+1} - E_t(m_{t+1}) = \lambda_{m,\eta}\sigma_t\eta_{t+1} - \lambda_{m,e}\sigma_t e_{t+1} - \lambda_{m,w}\sigma_w w_{t+1}, \quad (10)$$
where $\lambda_{m,w} \equiv (1-\theta)A_2\kappa_1$, while $\lambda_{m,\eta}$ and $\lambda_{m,e}$ are defined in equation (6). This expression is similar to the earlier model (see equation (6)) save for the inclusion of $w_{t+1}$: Shocks to consumption volatility. In the special case of power utility, where $\theta = 1$, these volatility innovations are not reflected in the innovation of the pricing kernel, as $\lambda_{m,w}$ equals zero.\(^8\)
The equation for the equity premium will now have two sources of systematic risk. The first, as before, relates to fluctuations in expected consumption growth, and the second to fluctuations in consumption volatility. The equity premium in the presence of time-varying economic uncertainty is
$$E_t(r_{m,t+1} - r_{f,t}) = \beta_{m,e}\lambda_{m,e}\sigma_t^2 + \beta_{m,w}\lambda_{m,w}\sigma_w^2 - 0.5Var_t(r_{m,t+1}), \quad (11)$$
where $\beta_{m,w} \equiv \kappa_{1,m}A_{2,m}$ and $Var_t(r_{m,t+1}) = \{\beta_{m,e}^2\sigma_t^2 + \varphi_d^2\sigma_t^2 + \beta_{m,w}^2\sigma_w^2\}$.
The market compensation for stochastic volatility risk in consumption is determined by $\lambda_{m,w}$. The risk premium on the market portfolio is time-varying as $\sigma_t$ fluctuates. The ratio of the conditional risk premium to the conditional volatility of the market portfolio fluctuates
with $\sigma_t$, and hence the Sharpe ratio is time-varying. The maximal Sharpe ratio in this model economy, which approximately equals the conditional volatility of the pricing kernel innovation (equation (10)), also varies with $\sigma_t$. This means that during periods of high economic uncertainty, risk premia will rise. For further discussion on the specialization of the risk premia under expected utility see Bansal and Yaron (2000).
The first-order effects on the level of the risk-free rate (see equation (A26) in the appendix) are the rate of time preference and the average consumption growth rate, divided by the IES. Increasing the IES keeps the level low. In addition, the variance of the risk-free rate is primarily determined by the volatility of expected consumption growth rate and the IES. Increasing the IES lowers the volatility of the risk-free rate.
II. Data and Model Implications
To derive asset market implications from the model described in (8), we calibrate the model at the monthly frequency, such that its time-aggregated annual growth rates of consumption and dividends match salient features of observed annual data, and at the same time allow the model to reproduce many observed asset pricing features. Following Campbell and Cochrane (1999), Kandel and Stambaugh (1991), and many others, we assume that the decision interval of the agent is monthly but the targeted data to match are annual.
Our choices of the time series and preference parameters are designed to simultaneously match observed growth rate data and asset market data. In order to isolate the economic effects of persistent expected growth rates from those of fluctuating economic uncertainty, we report our results first for Case I, where fluctuating economic uncertainty has been shut off ($\sigma_w$ is set to zero), and then consider the model specification where both channels are operative.
A. Persistent Expected Growth
In Table I we display the time series properties of the model given in (4). The specific parameters are given below the table. In spite of a persistent growth component, the model’s implied time series properties are largely consistent with the data.
Barsky and DeLong (1993) rely on a persistence parameter $\rho$ equal to 1. We calibrate $\rho$ at 0.979; this ensures that expected consumption growth rates are stationary and permits the possibility of large dividend elasticity of equity prices and equity risk premia. Our choice of $\varphi_e$ and $\sigma$ is motivated to ensure that we match the unconditional variance and the autocorrelation function of annual consumption growth. The standard deviation of the one-step ahead innovation in consumption, that is $\sigma$, equals 0.0078. This parameter configuration implies that the predictable variation in monthly consumption growth, i.e., the $R^2$, is only 4.4%. Our choice of $\phi$ is very similar to that in Abel (1999) and captures the “levered” nature of dividends. The standard deviation of the monthly innovation in dividends, $\varphi_d \sigma$, is 0.0351. This parameter configuration allows us to match the unconditional variance of dividend growth and its annual correlation with consumption.
Since our model emphasizes the long-horizon implications of the predictable component $x_t$, we first demonstrate that our proposed process for consumption is consistent with annual consumption data along a variety of dimensions. We use BEA data on real per-capita annual consumption growth of nondurables and services for the period 1929 to 1998. This is the
longest single source of consumption data. Dividends and the value-weighted market return data are taken from CRSP. All nominal quantities are deflated using the CPI. To facilitate comparisons between the model, which is calibrated to a monthly decision interval, and the annual data, we time-aggregate our monthly model and report its annual statistics. As there is considerable evidence for small sample biases in estimating autoregression coefficients and variance ratios (see Hurwicz (1950) and Ansley and Newbold (1980)), we report statistics based on 1,000 Monte Carlo experiments, each with 840 monthly observations — each experiment corresponding to the 70 annual observations available in our data set. Increasing the size of the Monte Carlo makes little difference in the results.
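The Monte Carlo procedure described above can be sketched as follows, approximating time aggregation by summing monthly log growth rates within each year (a common simplification; the exact aggregation used in the paper may differ):

```python
import numpy as np

# One experiment: 840 monthly draws of the Case I system, aggregated to 70 annual
# observations; moments are then averaged across experiments.
rho, phi_e, mu, sigma = 0.979, 0.044, 0.0015, 0.0078

def annual_moments(seed):
    r = np.random.default_rng(seed)
    e, eta = r.standard_normal(840), r.standard_normal(840)
    x, g = 0.0, np.empty(840)
    for t in range(840):
        g[t] = mu + x + sigma * eta[t]
        x = rho * x + phi_e * sigma * e[t]
    ga = g.reshape(70, 12).sum(axis=1)        # annual log growth rates
    return ga.std(ddof=1), np.corrcoef(ga[:-1], ga[1:])[0, 1]

stds, ac1s = zip(*(annual_moments(s) for s in range(200)))
mean_std, mean_ac1 = float(np.mean(stds)), float(np.mean(ac1s))
```

Across experiments, the mean annual volatility lands near the 2.9% observed in the data and the first-order annual autocorrelation is sizeable and positive.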
The annualized real per-capita consumption growth mean is 1.8% and its standard deviation is about 2.9%. Note that this volatility is somewhat lower for our sample than for the period considered in Mehra and Prescott (1985), Kandel and Stambaugh (1991), and Abel (1999). Table I shows that, in the data, consumption growth has a large first-order autocorrelation coefficient and a small second-order one. The standard errors in the data for these autocorrelations are sizeable. An alternative way to view the long-horizon properties of the model is to use variance ratios that are themselves determined by the autocorrelations (see Cochrane (1988)). In the data the variance ratios first rise significantly and at about 7 years out start to decline. The standard errors on these variance ratios, not surprisingly, are quite substantial.
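The rising variance-ratio pattern implied by the persistent component can be illustrated with a long simulation (population-style, so sampling noise is negligible):

```python
import numpy as np

# Long monthly simulation of the Case I growth process, aggregated to annual sums.
rng = np.random.default_rng(2)
rho, phi_e, sigma = 0.979, 0.044, 0.0078
T = 120000
e, eta = rng.standard_normal(T), rng.standard_normal(T)
x, g = 0.0, np.empty(T)
for t in range(T):
    g[t] = x + sigma * eta[t]
    x = rho * x + phi_e * sigma * e[t]
ga = g.reshape(-1, 12).sum(axis=1)             # 10,000 annual observations

def variance_ratio(series, k):
    # variance of overlapping k-year sums over k times the one-year variance
    sums = np.array([series[i:i + k].sum() for i in range(series.size - k + 1)])
    return sums.var(ddof=1) / (k * series.var(ddof=1))

vr2, vr5, vr10 = (variance_ratio(ga, k) for k in (2, 5, 10))
```

Because the persistent component induces positive autocorrelation at all lags, the variance ratios rise with the horizon, mimicking the pattern in Table I.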
The mean (across simulations) of the model’s implied first-order autocorrelation is similar to that in the data. The second and tenth-order autocorrelations are within one standard error of the data. The fifth-order autocorrelation is slightly above the two standard error range of the data. The empirical distribution of these estimates across the simulations as
depicted by the 5th and 95th percentiles is wide and contains the point estimates from the data. The model’s variance ratios mimic the pattern in the data. The point estimates are slightly larger than the data, but they are well within 1 standard error of the data. The point estimates from the data are clearly contained in the 5% confidence interval based on the empirical distribution of the simulated variance ratios. The unconditional volatility of consumption and dividend growth closely matches that in the data. In addition, the correlation of dividends with consumption of about 0.3 is somewhat lower, but within 1 standard error of its estimate in the data. This lower correlation is a conservative estimate, and increasing it helps the model generate a higher risk premium. Overall, Table I shows that allowing for a persistent predictable component produces consumption and dividend moments that are largely consistent with the data.
It is often argued that consumption growth is close to being i.i.d. As shown in Table I, the consumption dynamics, which contain a persistent but small predictable component, are also largely consistent with the data. This evidence is consistent with Shephard and Harvey (1990), Barsky and DeLong (1993), and Bansal and Lundblad (2002), who show that in finite samples, discrimination across the i.i.d. growth rate model and the one considered above is extremely difficult. While the financial market data are hard to interpret from the perspective of the i.i.d. dynamics, they are, as shown below, interpretable from the perspective of the growth rate dynamics considered above.
Before we discuss the asset pricing implications we highlight two additional issues related to the data. First, data for consumption, dividends, and asset returns pertain to the long sample from 1929. Clearly moments of these data will differ across subsamples. Our choice of the long sample is similar to Mehra and Prescott (1985), Kandel and Stambaugh (1991),
and Abel (1999) and is motivated to keep the estimation error on the moments small. The annual autocorrelations of consumption growth for our model are well within standard error bounds, even when compared to those in the post-war annual consumption data.\(^{11}\) Second, our dividend model is calibrated to cash dividends; this is similar to that used by many earlier studies. While it is common to use cash dividends, this measure of dividends may mismeasure total payouts, as it ignores other forms of payments made by corporations. Given the difficulties in accurately measuring total payouts of corporations and to maintain comparability with earlier work, we have focused on cash dividends as well. Jagannathan, McGrattan, and Scherbina (2000) provide evidence pertaining to the issue of dividends, and show that alternative measures of dividends have even higher volatility.
A.1. Case I: Asset Pricing Implications
In Table II we display the asset pricing implications of the model for a variety of risk aversion and IES configurations. In Panel A, we use the time series parameters from Table I. In Panel B we increase \(\phi\), the dividend leverage parameter, to 3.5, and in Panel C we analyze the implications of an \(i.i.d.\) process. The table intentionally concentrates on a relatively narrow set of asset pricing moments, namely the mean risk-free rate, equity premium, the market and risk-free rate volatility, and the volatility of the log price-dividend ratio. These moments are the main focus of many asset pricing models. In Section II.C we discuss additional model implications.
[Insert Table II about here]
Our choice of parameters attempts to take economic considerations into account. In
particular $\delta < 1$, and the risk aversion parameter $\gamma$ is either 7.5 or 10. Mehra and Prescott (1985) argue that a reasonable upper bound for risk aversion is around 10. In this sense, our choice for risk aversion is reasonable. The magnitude for the IES that we focus on is 1.5. Hansen and Singleton (1982) and Attanasio and Weber (1989) estimate the IES to be well in excess of 1.5. More recently, Vissing-Jorgensen (2002) and Guvenen (2001) also argue that the IES is well over 1. However, Hall (1988) and Campbell (1999) estimate the IES to be well below 1. Their results are based on a model without fluctuating economic uncertainty. In Section II.C.4, we show that ignoring the effects of time-varying consumption volatility leads to a serious downward bias in the estimates of the IES. To highlight the role of the IES, we choose one value of the IES less than 1 (IES= 0.5) and another larger than 1 (IES=1.5).
Table II shows that the model with persistent expected growth is able to generate sizeable risk premia, market volatility, and fluctuations in price-dividend ratios. Larger risk aversion clearly increases the equity premium; changing risk aversion mainly affects this dimension of the model. To qualitatively match key features of the data, it is important for the IES to be larger than 1. Lowering the IES lowers $A_{1,m}$, the dividend elasticity of asset prices, and the risk premia on the asset. As the IES rises, the volatility of the price-dividend ratio and asset returns rise along with $A_{1,m}$. At very low values of the IES, $A_{1,m}$ can become negative, which would imply that a rise in dividends’ growth rate expectations will lower asset prices (see the discussion in Section I). In addition, note that if the leverage parameter $\phi$ is increased, it increases the riskiness of dividends, and $A_{1,m}$ rises. The price-dividend ratio becomes more volatile, and the equity premium rises.
As discussed earlier we assumed that $u_t$, $e_t$, and $\eta_t$ are independent. To give a sense of how the results change if we allow for correlations in the various shocks, consider the case
with the IES at 1.5 and a risk aversion of 10. When we assume that the correlation between $u_t$ and $\eta_t$ is 0.25 and all other correlations are set to zero, the equity premium rises to 5.02%. If the correlation between $u_t$ and $e_t$ is instead assumed to be 0.25, then the equity premium and the market return volatility rise to 5.21% and 17.22%, respectively. There are virtually no other changes. As stated earlier, in Table II we have made the conservative assumption of zero correlations to maintain parsimony in the parameters that we have to calibrate.
It is also interesting to consider the case where consumption and dividend growth rates are assumed to be $i.i.d.$, that is $\varphi_e = 0$. In this case, the equity premium for the market is $E_t(r_{m,t+1} - r_{f,t}) = \gamma cov(g_{t+1}, g_{d,t+1}) - 0.5 Var(r_{m,t+1})$. In our baseline model, dividend innovations are independent of consumption innovations; hence, with $i.i.d.$ growth rates, $cov(g_{t+1}, g_{d,t+1})$ equals zero, and the market equity premium is $-0.5 Var(r_{m,t+1})$; this explains the negative equity premium in the $i.i.d.$ case reported in Panel C of Table II. If we assume that the correlation between monthly consumption and dividend growth is 0.25, then the equity premium is 0.08% per annum. This is similar to the evidence documented in Weil (1989) and Mehra and Prescott (1985). For comparable IES and risk-aversion values, shifting from the persistent growth rate process to $i.i.d.$ growth rates lowers the volatility of the equity returns. In all, this evidence highlights the fact that although the time-series dynamics of the model with small persistent expected growth are difficult to distinguish from a pure $i.i.d.$ model, its asset pricing implications are vastly different from those in the $i.i.d.$ model. In what follows we use the parameters in Panel A, with an IES of 1.5 as our preferred configuration, and display the implications of adding fluctuating economic uncertainty.
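The arithmetic of the $i.i.d.$ case is easy to verify with the Section II.A numbers; with a constant price-dividend ratio the return innovation equals the dividend innovation, so $Var(r_m) \approx (\varphi_d\sigma)^2$:

```python
# i.i.d. case of the equity premium: gamma * cov(g, g_d) - 0.5 * Var(r_m).
gamma, sigma, phi_d_sigma = 10.0, 0.0078, 0.0351   # monthly calibration values

var_rm = phi_d_sigma**2                            # return variance when phi_e = 0
premium_uncorr_pct = 1200 * (-0.5 * var_rm)        # zero correlation: negative premium
cov_g_gd = 0.25 * sigma * phi_d_sigma              # correlation of 0.25
premium_corr_pct = 1200 * (gamma * cov_g_gd - 0.5 * var_rm)   # close to 0.08% per annum
```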
B. Fluctuating Economic Uncertainty
Before displaying the asset pricing implications of adding fluctuating economic uncertainty, we first briefly discuss evidence for the presence of fluctuating economic uncertainty.
Panel A of Table III documents that the variance ratios of the absolute value of residuals from regressing current consumption growth on 5 lags increase gradually out to 10 years. This suggests slow-moving predictable variation in this measure of realized volatility. Note that if realized volatility were \(i.i.d.\), these variance ratios would be flat.\(^{12}\)
[Insert Table III about here]
In Panel B of Table III we provide evidence that future realized consumption volatility is predicted by current price-dividend ratios. The current price-dividend ratio predicts future realized volatility with negative coefficients, with robust \(t\)-statistics around 2 and \(R^2\)s around 5% (for horizons of up to 5 years). If consumption volatility were not time-varying, the slope coefficient on the price-dividend ratio would be zero. As suggested by our theoretical model, this evidence indicates that information regarding persistent fluctuations in economic uncertainty is contained in asset prices. Overall, the evidence in Table III lends support to the view that the conditional volatility of consumption is time-varying. Bansal, Khatchatrian, and Yaron (2002) extensively document the evidence in favor of time-varying consumption volatility and show that this feature holds up quite well across different samples and economies.
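The mechanism behind this regression can be sketched inside the model: because $A_{2,m} < 0$, the price-dividend ratio loads negatively on conditional variance and therefore predicts future realized volatility with a negative slope. The persistence, volatility-of-volatility, and loading values below are assumed for illustration:

```python
import numpy as np

# Simulate the volatility state, form a price-dividend proxy z = const + A2m * s2
# with A2m < 0, and regress next-period realized volatility on the current ratio.
rng = np.random.default_rng(3)
nu1, sigma_w, s2bar = 0.987, 0.23e-5, 0.0078**2   # assumed illustrative values
A2m = -400.0                                      # assumed negative volatility loading

T = 50000
s2 = np.empty(T)
s2[0] = s2bar
w = rng.standard_normal(T)
for t in range(1, T):
    s2[t] = s2bar + nu1 * (s2[t - 1] - s2bar) + sigma_w * w[t]
s2 = np.maximum(s2, 1e-12)                        # keep variance nonnegative

pd_ratio = 3.0 + A2m * s2                         # higher uncertainty, lower valuations
future_vol = np.sqrt(s2[1:])                      # next period's volatility

X = np.column_stack([np.ones(T - 1), pd_ratio[:-1]])
slope = np.linalg.lstsq(X, future_vol, rcond=None)[0][1]
```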
Given the evidence above, a large value of \(\nu_1\), the parameter governing the persistence of conditional volatility, allows the model to capture the slow-moving fluctuations in economic
uncertainty. In Table IV we provide the asset pricing implications based on the system (8), when in addition to the parameters given in Table I, we activate the volatility parameters (given below the table). It is important to note that the time-series properties displayed in Table I are virtually unaltered once we introduce the fluctuations in economic uncertainty.
[Insert Table IV about here]
Table IV provides statistics for the asset market data and for the model that incorporates fluctuating economic uncertainty (i.e., Case II). Columns 2 and 3 provide the statistics and their respective standard errors for our data sample. Columns 4 and 5 provide the model’s corresponding statistics for risk aversion of 7.5 and 10, respectively. In this table the IES is always set at 1.5 and $\phi$ is set at 3.
Column 5 of Table IV shows that with $\gamma = 10$, the model generates an equity premium that is comparable to that in the data. The mean of the risk-free rate, and the volatilities of the market return and of the risk-free rate, are by and large consistent with the data. The model essentially duplicates the volatility and persistence of the observed log price-dividend ratio. Comparing columns 4 and 5 provides sensitivity of the results to the level of risk aversion. Not surprisingly, higher risk aversion increases the equity premium and aligns the model closer to the data. A comparison of Table IV with Table II shows that when risk aversion is 10, the equity risk premium is about 2.5\% higher – this additional premium reflects the premium associated with fluctuating economic uncertainty as derived in equation (11). One could, as discussed earlier, modify the above model and also include correlation between the different shocks. The inclusion of these correlations as documented above typically helps
to increase the equity premium. Hence, it would seem that these correlations would help the model generate the same equity premium with a lower risk-aversion parameter.
Weil (1989) and Kandel and Stambaugh (1991) also explore the implications of the Epstein and Zin (1989) preferences for asset market data. However, these papers find it difficult to quantitatively explain the aforementioned asset market features at our configuration of preference parameters. Why, then, do we succeed in capturing these asset market features with Epstein and Zin preferences? Weil uses \(i.i.d.\) consumption growth rates. As discussed earlier, with \(i.i.d.\) consumption and dividend growth rates, the risks associated with fluctuating expected growth and economic uncertainty are absent. Consequently, the model has great difficulty in explaining the asset market data.
Kandel and Stambaugh (1991) consider a model in which there is predictable variation in consumption growth rates and volatility. However, at our preference parameters, the persistence in the expected growth and conditional volatility in their specification is not large enough to permit significant response of asset prices to news regarding expected consumption growth and volatility. In addition, Kandel and Stambaugh primarily focus on the case in which the IES is close to zero. At very low values of the IES, \(\lambda_{m,e}\) and \(\beta_{m,e}\) are negative (see equations (6) and (7)). This may still imply a sizeable equity premium. However, a parameter configuration with an IES less than 1 and a moderate level of risk aversion (for example, 10 or less) leads to high levels of the risk-free rate and/or its volatility. In contrast, our IES, which is greater than 1, ensures that the level and volatility of the risk-free rate are low and comparable to those in the data. Hence, with moderate levels of risk aversion, both the high persistence and an IES greater than 1 are important in order to capture key aspects of asset market data.
C. Additional Asset Pricing Implications
As noted earlier, in the model where we shut off fluctuating economic uncertainty (Case I), both risk premia and Sharpe ratios are constant – hence, this simple specification cannot address issues regarding predictability of risk premia. The model that incorporates fluctuating economic uncertainty (Case II) does permit risk premia to fluctuate. Henceforth, we focus entirely on this model specification with the parameter configuration stated in Table IV with $\gamma = 10$.
C.1. Variability of the Pricing Kernel
The maximal Sharpe ratio, as shown in Hansen and Jagannathan (1991), is determined by the conditional volatility of the pricing kernel. This maximal Sharpe ratio for our model is the volatility of the pricing kernel innovation defined in equation (10). In Table V, we quantify the contributions of different shocks to the variance of the pricing kernel innovations (see equation (10)). The maximal annualized Sharpe ratio for our model economy is 0.73, which is quite large. The maximal Sharpe ratio with $i.i.d.$ growth rates is $\gamma \sigma$, and with our parameter configuration its annualized value equals 0.27. Consequently, the Epstein and Zin preferences and the departure from $i.i.d.$ growth rates are responsible for this larger maximal Sharpe ratio. Additionally, for our model, the maximal Sharpe ratio exceeds that of the market return, which is 0.33. The sources of risk in order of importance are shocks to the expected growth rate (i.e., $e_{t+1}$), followed by that of fluctuating economic uncertainty (i.e., $w_{t+1}$). While the variances of these shocks are themselves small, their effects on the pricing kernel get magnified because of the long-lasting nature of these shocks (see discussion in Section I). Finally, the variance of high-frequency consumption news, $\eta_{t+1}$, is relatively
large, but this risk source contributes little to the pricing kernel variability, as this shock is not long-lasting.
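The decomposition can be sketched numerically. Because $\kappa_1$, $\nu_1$, and $\sigma_w$ below are assumed values, the full-model Sharpe ratio is only indicative and need not reproduce the reported 0.73; the $i.i.d.$ benchmark $\gamma\sigma$, however, matches the reported 0.27:

```python
import numpy as np

# Conditional std of the kernel innovation in equation (10), evaluated at sigma_t = sigma,
# annualized by sqrt(12). kappa_1, nu_1, sigma_w are assumed illustrative values.
gamma, psi = 10.0, 1.5
theta = (1 - gamma) / (1 - 1/psi)
kappa1, rho, phi_e, sigma = 0.997, 0.979, 0.044, 0.0078
nu1, sigma_w = 0.987, 0.23e-5

A1 = (1 - 1/psi) / (1 - kappa1 * rho)
A2 = 0.5 * ((theta - theta/psi)**2 + (theta * A1 * kappa1 * phi_e)**2) \
     / (theta * (1 - kappa1 * nu1))

lam_eta = gamma                                   # magnitude of the loading on eta
lam_e = (1 - theta) * kappa1 * (1 - 1/psi) * phi_e / (1 - kappa1 * rho)
lam_w = (1 - theta) * A2 * kappa1

kernel_vol = np.sqrt((lam_eta * sigma)**2 + (lam_e * sigma)**2 + (lam_w * sigma_w)**2)
sharpe_full = kernel_vol * np.sqrt(12)            # annualized maximal Sharpe ratio
sharpe_iid = gamma * sigma * np.sqrt(12)          # i.i.d. benchmark, about 0.27
```

Even under these assumed parameters, the expected-growth term contributes the most, followed by the volatility shock, with the short-run shock $\eta$ contributing least relative to its variance.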
[Insert Table V about here]
C.2. Predictability of Returns, Growth Rates, and Price-Dividend Ratios
Dividend yields seem to predict multi-horizon returns. A rise in the current dividend yield predicts a rise in future expected returns. Our model performs quite well in capturing this feature of the data. However, it is important to recognize that these predictability results are quite sensitive to changing samples, estimation techniques, and data sets (see Hodrick (1992) and Goyal and Welch (1999)). Further, most dimensions of the evidence related to predictability (be it growth rates or returns) are estimated with considerable sampling error. This, in conjunction with the rather high persistence in the price-dividend ratio, suggests that considerable caution should be exercised in interpreting the evidence regarding predictability based on price-dividend ratios.
In Panel A of Table VI we report the predictability regressions of future excess returns for horizons of 1, 3, and 5 years for our sample data. In Column 4 we report the corresponding evidence from the perspective of the model. The model captures the positive relationship between expected returns and dividend yields. The absolute value of the slope coefficients and the corresponding $R^2$s rise with the return horizon, as in the data. The predictive slope coefficients and the $R^2$s in the model are somewhat lower than those in the data; however, the model’s slope coefficients are within two standard errors of the estimated coefficients in the data.\(^{14}\)
In Panel B of Table VI we provide regression results where the dependent variable is the sum of annual consumption growth rates. In the data it seems that price-dividend ratios have little predictive power, particularly at longer horizons. The slope coefficients and $R^2$s of these regressions are quite low both in the data and the model. The $R^2$s are relatively small in the model for two reasons. First, price-dividend ratios are determined by expected growth rates, and the variation in expected growth rates is quite small. Recall that the monthly $R^2$ for consumption dynamics is less than 5%. Second, price-dividend ratios are also affected by independent movements in economic uncertainty, which lowers their ability to predict future growth rates. Overall, the model, like the data, suggests that growth rates at long horizons are not predicted by price-dividend ratios in any economically sizeable manner.
In Panel C of Table VI we report how well current realized consumption volatility predicts future price-dividend ratios. First, note that there is strong evidence in the data for this relationship. The regression coefficients for predicting future price-dividend ratios with current volatility for 1, 3, and 5 years are all negative, have robust $t$-statistics that are well above 2, and have $R^2$s of about 10%. The model produces similar negative coefficients, albeit in absolute terms they are slightly smaller. The $R^2$s are within two standard errors of the data. Taken together with the results in Panel B of Table III, the evidence is consistent with the economics of the model; fluctuating economic uncertainty, captured via realized consumption volatility, predicts future price-dividend ratios and is predicted by lagged price-dividend ratios.
Asset valuations decline with rising economic uncertainty – a feature that our model is capable of reproducing. Using alternative measures of consumption volatility, Bansal, Khatchatrian, and Yaron (2002) show that this evidence is robust across many samples and frequencies, and is consistently found in many developed economies.
Some caution should be exercised in interpreting the links between dividend growth rates and price-dividend ratios. Evidence from other papers (see Ang and Bekaert (2001) and Bansal, Khatchatrian, and Yaron (2002)) indicates that alternative measures of cash flows, such as earnings, are well predicted by valuation ratios. Cash dividends, as discussed earlier, may not accurately measure the total payouts to equity holders and hence may distort the link between growth rates and asset valuations. However, given the practical difficulties in measuring the appropriate payouts, and to maintain comparability with other papers in the literature, we, like others, continue to use cash dividends. With this caveat in mind, we also examine how much of the variation in the price-dividend ratio is due to growth rates and what part is due to variation in expected returns.
In the data, the majority of the variation in price-dividend ratios seems to be due to variation in expected returns. For our sample the point estimate for the percentage of the variation in price-dividend ratio due to return fluctuations is 108%, with a standard error of 42%, while dividends’ growth rates account for −6%, with a standard error of 31%. Our model produces population estimates that attribute about 52% of the variation in price-dividend ratios to returns and 54% to fluctuations in expected dividend growth. Note that the standard errors of the point estimates of this decomposition in the data are very large. To account for any finite sample biases, we also conducted a Monte Carlo exercise using simulations from our model of sample sizes comparable to our data. This
Monte Carlo evidence implies that in our model, returns account for about 70% of the variation in the price-dividend ratio, aligning the model more closely with the data. Given the large sampling variation in measuring these quantities in the data using cash dividends, and the sharp differences in predictability implications across alternative cash flow measures, economic inference based on this decomposition is quite difficult.
Two additional features of the model are worth highlighting. First, in the data the contemporaneous correlation between equity return and consumption is very small at the monthly frequency and is about 0.20 at the annual frequency. Our model produces comparable magnitudes, with correlations of 0.04 and 0.15 for the monthly and annual frequencies, respectively. Second, the term premium on nominal bonds, the average one-period excess return on an $n$-period discount bond, is small. This suggests that the equity premium in the data is not driven by a large term premium. The term premium (which in our model is on real bonds) is in fact small and slightly negative. Hence the large equity premium in the model is not a by-product of a large positive term premium.\footnote{The term premium is computed as the difference between the expected return on nominal bonds and the risk-free rate.} In totality, the above evidence, in conjunction with the results pertaining to predictability, suggests that the model is capable of capturing several key aspects of asset markets data.
\subsection*{C.3. Conditional Volatility and the Feedback Effect}
A large literature documents that market return volatility is very persistent (see, e.g., Bollerslev, Engle, and Wooldridge (1988)). This feature of the data is easily reproduced in our model. The market volatility process, as described in equation (A13) in the appendix, is a linear affine function of the conditional variance of the consumption growth rate process $\sigma_t$. As the conditional variance of the consumption growth rate process is an AR(1) process,
it follows that the market volatility inherits this property. Note that the coefficient on the conditional variance of consumption in the market volatility process is quite large. This magnifies the conditional variance of the market portfolio relative to consumption volatility. The persistence in market volatility coincides with the persistence in the consumption volatility process. In the monthly market return data, this persistence parameter is about 0.986 (see Bollerslev, Engle, and Wooldridge), and in the model it equals $\nu_1$, 0.987. As consumption volatility is high during recessions, this implies that the market volatility also rises during recessions. Also note that during periods of high consumption volatility (e.g., recessions), in the model the equity premium also rises. This implication of the model is consistent with the evidence provided in Fama and French (1989) that risk premia are countercyclical.
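The inheritance of persistence can be illustrated with a minimal simulation sketch (not from the paper): because the market variance in (A13) is affine in the consumption variance, the two series share the same first-order autocorrelation. Here $\nu_1 = 0.987$ and $\sigma = 0.0078$ follow the text and Table I, while $\sigma_w$, the $\beta_{m,e}$ value, and the constant term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) conditional variance of consumption growth (equation (A1));
# nu1 = 0.987 and sigma_bar = 0.0078 as in the paper, sigma_w illustrative.
nu1, sigma_bar, sigma_w = 0.987, 0.0078, 2.3e-6
T = 200_000
var_c = np.empty(T)
var_c[0] = sigma_bar ** 2
for t in range(T - 1):
    var_c[t + 1] = sigma_bar ** 2 + nu1 * (var_c[t] - sigma_bar ** 2) \
                   + sigma_w * rng.standard_normal()

# Market variance is affine in var_c (equation (A13)):
#   Var_t(r_m) = (beta_me^2 + phi_d^2) * sigma_t^2 + beta_mw^2 * sigma_w^2,
# so it inherits the AR(1) autocorrelation of var_c exactly.
beta_me, phi_d, const = 4.3, 4.5, 1e-6   # beta_me and const are illustrative
var_m = (beta_me ** 2 + phi_d ** 2) * var_c + const

def ac1(z):
    z = z - z.mean()
    return (z[:-1] @ z[1:]) / (z @ z)

print(round(ac1(var_c), 3), round(ac1(var_m), 3))  # both close to nu1 = 0.987
```

An affine map leaves the autocorrelation unchanged, so the large loading on $\sigma_t^2$ magnifies the level of market volatility without altering its persistence.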
Campbell and Hentschel (1992), Glosten, Jagannathan, and Runkle (1993), and others document what is known as the volatility feedback effect. That is, return innovations are negatively correlated with innovations in market volatility. The model is capable of reproducing this negative correlation. The feedback effect arises within the model in spite of the fact that the volatility innovations are independent of the expected consumption growth process. The key feature that allows the model to capture this dimension is the Epstein-Zin preferences in which volatility risk is priced (see the discussion in Section I.B.). Using the analytical expressions for the innovation in the market return (see equation (A12) in the appendix) and the expression for the innovation in the market volatility, it is straightforward to show that the conditional covariance
$$\text{cov}_t((r_{m,t+1} - E_t r_{m,t+1}), \text{var}_{t+1}(r_{m,t+2}) - E_t[\text{var}_{t+1}(r_{m,t+2})]) = \beta_{m,w}(\beta^2_{m,e} + \varphi^2_d)\sigma^2_w,$$ \hspace{1cm} (12)
where \( \beta_{m,w} \equiv \kappa_{1,m} A_{2,m} < 0 \) as \( A_{2,m} \) is negative. The correlation between market return innovations and market volatility innovations for our model is \(-0.32\).
An additional issue pertains to the relation between the expected return on the market portfolio and market volatility. Glosten, Jagannathan, and Runkle (1993) and Whitelaw (1994) document that the expected market return and market volatility are negatively related, while French, Schwert, and Stambaugh (1987) and Campbell and Hentschel (1992) argue that this relation is likely to be positive. In our model, the theoretical relation between the expected market return and market volatility is positive: the unconditional correlation between ex-post excess returns on the market and ex-ante market volatility is a small positive number, 0.04. The model therefore cannot generate the negative relation between expected returns and market volatility. Whitelaw (2000) shows that a standard power utility model with regime shifts in consumption growth can accommodate this negative relation; matching it, we conjecture, would require significant changes, perhaps along the lines pursued in Whitelaw. This departure is well outside the scope of this paper, and we leave the exploration for future work.
\subsection*{C.4. Bias in Estimating the Intertemporal Elasticity of Substitution}
As in Hall (1988), the IES is typically measured by the slope coefficient from regressing date \( t + 1 \) consumption growth rate on the date \( t \) risk-free rate. This projection would indeed recover the IES, if no fluctuating uncertainty affected the risk-free rate. However, the risk-free rate in our model fluctuates as a result of both changing expected growth rate and independent fluctuations in the volatility of consumption. Thus, the above projection
is misspecified and creates a downward bias. The bias is quite significant: inside our model, where the IES is set at 1.5, Hall's regression estimates the IES parameter to be 0.62. Our model is a simple one, and there may be alternative instrumental variable approaches to undo this bias. Nonetheless, we view the downward bias as suggestive of the difficulties in accurately pinning down the IES. As discussed in Section II.A, several papers report an estimated IES that is well over 1. This evidence, along with the potential downward bias in estimating the IES, makes our choice of an IES larger than 1 quite reasonable.
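The mechanism behind the bias can be sketched in a small Monte Carlo (not the paper's code): simulate the consumption dynamics of (A1), build a stylized risk-free rate that loads on expected growth with slope $1/\psi$ (as in (A26)) plus a volatility term, and run Hall's regression. Growth-rate parameters are from Table I and $\nu_1$ from the text; $\sigma_w$ and the volatility loading `q` are hypothetical values chosen to make the omitted-variable bias visible.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000
psi = 1.5                                    # true IES, as in the text

# Consumption dynamics of equation (A1); growth-rate parameters from Table I,
# nu1 = 0.987 from the text, sigma_w an illustrative value.
mu, rho, phi_e, sigma_bar = 0.0015, 0.979, 0.044, 0.0078
nu1, sigma_w = 0.987, 2.3e-6

x = np.zeros(T)
v = np.full(T, sigma_bar ** 2)               # v_t = sigma_t^2
g = np.zeros(T)
for t in range(T - 1):
    s = np.sqrt(max(v[t], 0.0))
    g[t + 1] = mu + x[t] + s * rng.standard_normal()
    x[t + 1] = rho * x[t] + phi_e * s * rng.standard_normal()
    v[t + 1] = sigma_bar ** 2 + nu1 * (v[t] - sigma_bar ** 2) \
               + sigma_w * rng.standard_normal()

def hall_slope(rf):
    # Hall (1988) regression: g_{t+1} on r_{f,t}; the slope is the IES estimate.
    return np.cov(g[1:], rf[:-1])[0, 1] / np.var(rf[:-1])

# Without a volatility channel the regression recovers psi ...
rf_novol = 0.01 + x / psi
# ... but a volatility term in r_f (hypothetical loading q) biases it downward.
q = -90.0
rf = rf_novol + q * (v - sigma_bar ** 2)

print(round(hall_slope(rf_novol), 2), round(hall_slope(rf), 2))
```

The first slope is close to the true IES of 1.5; the second falls well below it, because fluctuations in $\sigma_t^2$ move the risk-free rate without forecasting consumption growth.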
III. Conclusions
In this paper we explore the idea that news about growth rates and economic uncertainty (i.e., consumption volatility) alters perceptions regarding long-term expected growth rates and economic uncertainty and that this channel is important for explaining various asset market phenomena. If indeed news about consumption has a nontrivial impact on long-term expected growth rates or economic uncertainty, then asset prices will be fairly sensitive to small growth rate and consumption volatility news. We develop a model for growth rates that captures this intuition. Anderson, Hansen, and Sargent (2002) utilize features of our growth rate dynamics to motivate economic models that incorporate robust control with respect to the small long-run components in growth rates.
We provide empirical support for aggregate consumption and dividend growth processes that contain a small persistent expected growth rate component and a conditional volatility component. These growth rate dynamics, in conjunction with the Epstein and Zin (1989) and Weil (1989) preferences, can help explain many asset market puzzles. In our model, at
plausible values for the preference parameters, a reduction in economic uncertainty or better long-run growth prospects leads to a rise in the wealth-consumption and the price-dividend ratios.
The model is capable of justifying the observed magnitudes of the equity premium, the risk-free rate, and the volatilities of the market return, the dividend yield, and the risk-free rate. Further, it captures the volatility feedback effect, that is, the negative correlation between return news and return volatility news. As in the data, dividend yields predict future returns and the volatility of returns is time-varying. Evidence provided in this paper and Bansal, Khatchatrian, and Yaron (2002) shows that there is a significant negative correlation between price-dividend ratios and consumption volatility. The model captures this dimension of the data as well. A feature of the model is that about half of the variability in equity prices is due to fluctuations in expected growth rates, and the remainder is due to fluctuations in the cost of capital.
\section*{Appendix}
The consumption and dividend growth dynamics given in (8) are
\[
g_{t+1} = \mu + x_t + \sigma_t \eta_{t+1}
\]
\[
x_{t+1} = \rho x_t + \varphi_e \sigma_t e_{t+1}
\]
\[
\sigma^2_{t+1} = \sigma^2 + \nu_1 (\sigma^2_t - \sigma^2) + \sigma_w w_{t+1}
\]
\[
g_{d,t+1} = \mu_d + \phi x_t + \varphi_d \sigma_t u_{t+1}
\]
(A1)
\[w_{t+1}, e_{t+1}, u_{t+1}, \eta_{t+1} \sim N.i.i.d.(0, 1).\]
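As a sanity check on (A1), the system can be simulated and its sample moments compared with the implied analytical stationary moments. Growth-rate parameters are from Table I and $\nu_1$ from the text; $\sigma_w$ is an illustrative value, since its magnitude is not reported in this section.

```python
import numpy as np

rng = np.random.default_rng(2)

# Parameters of (A1): growth rates from Table I; nu1 from the text; sigma_w illustrative.
mu, mu_d, rho, phi, phi_e, phi_d = 0.0015, 0.0015, 0.979, 3.0, 0.044, 4.5
sigma_bar, nu1, sigma_w = 0.0078, 0.987, 2.3e-6

T = 500_000
x = np.zeros(T)
v = np.full(T, sigma_bar ** 2)               # v_t = sigma_t^2
g = np.zeros(T)
gd = np.zeros(T)
for t in range(T - 1):
    s = np.sqrt(max(v[t], 0.0))
    g[t + 1] = mu + x[t] + s * rng.standard_normal()
    gd[t + 1] = mu_d + phi * x[t] + phi_d * s * rng.standard_normal()
    x[t + 1] = rho * x[t] + phi_e * s * rng.standard_normal()
    v[t + 1] = sigma_bar ** 2 + nu1 * (v[t] - sigma_bar ** 2) \
               + sigma_w * rng.standard_normal()

# Analytical stationary moments implied by (A1):
#   Var(x) = phi_e^2 E[sigma_t^2] / (1 - rho^2)  and  E[sigma_t^2] = sigma_bar^2.
sd_x = phi_e * sigma_bar / np.sqrt(1 - rho ** 2)
print(round(x.std() / sd_x, 2))              # close to 1
print(round(v.mean() / sigma_bar ** 2, 2))   # close to 1
```

The simulated moments line up with the closed forms, confirming that the small persistent component $x_t$ carries most of the predictable variation in both growth rates.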
The IMRS (Intertemporal Marginal Rate of Substitution) for this economy is given by
\[
\ln M_{t+1} = \theta \ln \delta - \frac{\theta}{\psi} g_{t+1} + (\theta - 1) r_{a,t+1}.
\]
(A2)
We derive asset prices using this IMRS and the standard asset pricing condition \(E_t[M_{t+1} R_{i,t+1}] = 1\), so that
\[
E_t[\exp(\theta \ln \delta - \frac{\theta}{\psi} g_{t+1} + (\theta - 1) r_{a,t+1} + r_{i,t+1})] = 1
\]
(A3)
for any asset \(r_{i,t+1} \equiv \log(R_{i,t+1})\). We first solve the special case where \(r_{i,t+1}\) is \(r_{a,t+1}\) – the return on the aggregate consumption claim – and then solve for the market return \(r_{m,t+1}\) and the risk-free rate \(r_f\).
\subsection*{A. The Return on the Consumption Claim Asset, \(r_{a,t+1}\)}
We conjecture that the log price-consumption ratio follows \(z_t = A_0 + A_1 x_t + A_2 \sigma^2_t\). Armed with the endogenous variable \(z_t\), we substitute the approximation \(r_{a,t+1} = \kappa_0 + \kappa_1 z_{t+1} - z_t + g_{t+1}\) into the Euler equation (A3).
Since \(g\), \(x\), and \(\sigma^2_t\) are conditionally normal, \(r_{a,t+1}\) and \(\ln M_{t+1}\) are also normal. Exploiting the normality of \(r_{a,t+1}\) and \(\ln M_{t+1}\), we can write down the Euler equation (A3) in terms of the state variables \(x_t\) and \(\sigma_t\). As the Euler condition must hold for all values of the state variables, it follows that all terms involving \(x_t\) must satisfy the following:
\[
-\frac{\theta}{\psi} x_t + \theta [\kappa_1 A_1 \rho x_t - A_1 x_t + x_t] = 0.
\]
(A4)
It immediately follows that
\[
A_1 = \frac{1 - \frac{1}{\psi}}{1 - \kappa_1 \rho},
\]
(A5)
which is (5) in the main text. Similarly, collecting all the $\sigma_t^2$ terms leads to the solution for $A_2$,
$$\theta[\kappa_1\nu_1 A_2 \sigma_t^2 - A_2 \sigma_t^2] + \frac{1}{2}[(\theta - \frac{\theta}{\psi})^2 + (\theta A_1 \kappa_1 \varphi_e)^2]\sigma_t^2 = 0,$$
(A6)
which implies that
$$A_2 = \frac{0.5[(\theta - \frac{\theta}{\psi})^2 + (\theta A_1 \kappa_1 \varphi_e)^2]}{\theta(1 - \kappa_1 \nu_1)},$$
(A7)
the solution given in (9).
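Under illustrative preference values, the closed forms (A5) and (A7) can be evaluated directly; the sketch below is not from the paper. Here $\rho$ and $\varphi_e$ are from Table I, $\psi = 1.5$ and $\nu_1 = 0.987$ from the text, and $\kappa_1 \approx 0.997$ from footnote 4; $\gamma = 10$ is an assumed risk-aversion value used only for illustration.

```python
# Closed-form loadings A1 (A5) and A2 (A7) of the log price-consumption ratio.
# rho, phi_e from Table I; psi and nu1 from the text; kappa1 from footnote 4;
# gamma = 10 is an assumed risk-aversion value.
gamma, psi = 10.0, 1.5
rho, phi_e, nu1, kappa1 = 0.979, 0.044, 0.987, 0.997

theta = (1 - gamma) / (1 - 1 / psi)          # = -27 with these values
A1 = (1 - 1 / psi) / (1 - kappa1 * rho)
A2 = 0.5 * ((theta - theta / psi) ** 2 + (theta * A1 * kappa1 * phi_e) ** 2) \
     / (theta * (1 - kappa1 * nu1))

# A1 > 0 since psi > 1; A2 < 0 since theta < 0: higher expected growth raises,
# and higher volatility lowers, the price-consumption ratio.
print(round(A1, 1), round(A2, 1))            # approximately 13.9 and -409.6
```

The signs reproduce the economics discussed in the text: with $\psi > 1$, good growth news raises valuations, and with $\theta < 0$, higher economic uncertainty lowers them.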
Given the solution above for $z_t$, it is possible to derive the innovation to the return $r_a$ as a function of the evolution of the state variables and the parameters of the model.
$$r_{a,t+1} - E_t(r_{a,t+1}) = \sigma_t \eta_{t+1} + B \sigma_t e_{t+1} + A_2 \kappa_1 \sigma_w w_{t+1},$$
(A8)
where $B = \kappa_1 A_1 \varphi_e = \kappa_1 \frac{\varphi_e}{1 - \kappa_1 \rho}(1 - \frac{1}{\psi})$. Further, it follows that the conditional variance of $r_{a,t+1}$ is
$$Var_t(r_{a,t+1}) = (1 + B^2)\sigma_t^2 + (A_2 \kappa_1)^2 \sigma_w^2.$$
(A9)
\subsection*{A.1. Intertemporal Marginal Rate of Substitution}
Substituting for $r_{a,t+1}$ and the dynamics of $g_{t+1}$, we can rewrite the IMRS in terms of the state variables — referring to this as the pricing kernel. Suppressing all the constants in the pricing kernel,
$$m_{t+1} \equiv ln M_{t+1} = \theta \ln \delta - \frac{\theta}{\psi} g_{t+1} + (\theta - 1)r_{a,t+1}$$
$$E_t[m_{t+1}] = m_0 - \frac{x_t}{\psi} + A_2(\kappa_1 \nu_1 - 1)(\theta - 1)\sigma_t^2$$
$$m_{t+1} - E_t(m_{t+1}) = (-\frac{\theta}{\psi} + \theta - 1)\sigma_t \eta_{t+1} + (\theta - 1)(A_1 \kappa_1 \varphi_e)\sigma_t e_{t+1} + (\theta - 1)A_2 \kappa_1 \sigma_w w_{t+1}$$
$$= \lambda_{m,\eta} \sigma_t \eta_{t+1} - \lambda_{m,e} \sigma_t e_{t+1} - \lambda_{m,w} \sigma_w w_{t+1},$$
(A10)
where \( \lambda_{m,\eta} \equiv [-\frac{\theta}{\psi} + (\theta - 1)] = -\gamma, \) \( \lambda_{m,e} \equiv (1 - \theta)B, \) \( \lambda_{m,w} \equiv (1 - \theta)A_2\kappa_1, \) and \( B \) and \( A_2 \) are defined above. Note that the \( \lambda \)'s represent the market price of risk for each source of risk, namely \( \eta_{t+1}, e_{t+1}, \) and \( w_{t+1}. \)
\subsection*{A.2. Risk Premia for \( r_{a,t+1} \)}
The risk premium for any asset is determined by the conditional covariance between the return and \( m_{t+1}. \) Thus the risk premium for \( r_{a,t+1} \) is equal to
\[
E_t(r_{a,t+1} - r_{f,t}) = -cov_t[m_{t+1} - E_t(m_{t+1}), r_{a,t+1} - E_t(r_{a,t+1})] - 0.5var_t(r_{a,t+1}).
\]
Exploiting the innovations in (A8) and (A10), it follows that
\[
E_t[r_{a,t+1} - r_{f,t}] = -\lambda_{m,\eta}\sigma_t^2 + \lambda_{m,e}B\sigma_t^2 + \kappa_1A_2\lambda_{m,w}\sigma_w^2 - 0.5Var_t(r_{a,t+1}),
\]
(A11)
where \( Var_t(r_{a,t+1}) \) is defined in equation (A9).
\subsection*{A.3. Equity Premium and Market Return Volatility}
The risk premium for any asset is determined by the conditional covariance between the return and \( m_{t+1}. \) Thus the risk premium for the market portfolio \( r_{m,t+1} \) is equal to
\[
E_t(r_{m,t+1} - r_{f,t}) = -cov_t[m_{t+1} - E_t(m_{t+1}), r_{m,t+1} - E_t(r_{m,t+1})] - 0.5var_t(r_{m,t+1}).
\]
Equation (A10) already provides the innovation in \( m_{t+1}. \) We now proceed to derive the innovation in the market return. The price-dividend ratio for the claim on dividends is
\[
z_{m,t} = A_{0,m} + A_{1,m}x_t + A_{2,m}\sigma_t^2.
\]
It follows that
\[
r_{m,t+1} = g_{d,t+1} + \kappa_{1,m}A_{1,m}x_{t+1} - A_{1,m}x_t + \kappa_{1,m}A_{2,m}\sigma_{t+1}^2 - A_{2,m}\sigma_t^2
\]
\[
r_{m,t+1} - E_t(r_{m,t+1}) = \varphi_d\sigma_tu_{t+1} + \kappa_{1,m}A_{1,m}\varphi_e\sigma_te_{t+1} + \kappa_{1,m}A_{2,m}\sigma_ww_{t+1}
\]
\[
= \varphi_d\sigma_tu_{t+1} + \beta_{m,e}\sigma_te_{t+1} + \beta_{m,w}\sigma_ww_{t+1},
\]
(A12)
where \( \beta_{m,e} \equiv \kappa_{1,m} A_{1,m} \varphi_e \), and \( \beta_{m,w} \equiv \kappa_{1,m} A_{2,m} \). Moreover, it follows that
\[
Var_t(r_{m,t+1}) = (\beta_{m,e}^2 + \varphi_d^2) \sigma_t^2 + \beta_{m,w}^2 \sigma_w^2.
\] (A13)
Using the innovations in the market return and the pricing kernel, the expression for the equity premium is
\[
E_t(r_{m,t+1} - r_{f,t}) = \beta_{m,e} \lambda_{m,e} \sigma_t^2 + \beta_{m,w} \lambda_{m,w} \sigma_w^2 - 0.5 Var_t(r_{m,t+1}),
\] (A14)
where \( Var_t(r_{m,t+1}) \) is defined in equation (A13).
To derive the expressions for \( A_{1,m} \) and \( A_{2,m} \), we exploit the Euler condition \( E_t[\exp(m_{t+1} + r_{m,t+1})] = 1 \). Collecting all the \( x_t \) terms, we find that
\[
-\frac{x_t}{\psi} + x_t \kappa_{1,m} A_{1,m} \rho - A_{1,m} x_t + \phi x_t = 0,
\] (A15)
which implies that
\[
A_{1,m} = \frac{\phi - \frac{1}{\psi}}{1 - \kappa_{1,m} \rho}.
\] (A16)
The solution for \( A_{2,m} \) follows from exploiting the asset pricing condition,
\[
\exp\{E_t(m_{t+1}) + E_t(r_{m,t+1}) + 0.5 Var_t(m_{t+1} + r_{m,t+1})\} = 1,
\] (A17)
and collecting all \( \sigma_t^2 \) terms. Note that \( Var_t(m_{t+1} + r_{m,t+1}) \) equals
\[
Var_t[\lambda_{m,\eta} \sigma_t \eta_{t+1} - \lambda_{m,w} \sigma_w w_{t+1} - \lambda_{m,e} \sigma_t e_{t+1} + \beta_{m,e} \sigma_t e_{t+1} + \varphi_d \sigma_t u_{t+1} + \beta_{m,w} \sigma_w w_{t+1}]
\]
\[
= H_m \sigma_t^2 + [-\lambda_{m,w} + \beta_{m,w}]^2 \sigma_w^2,
\] (A18)
where \( H_m \equiv [\lambda_{m,\eta}^2 + (-\lambda_{m,e} + \beta_{m,e})^2 + \varphi_d^2] \). Now collect all the \( \sigma_t^2 \) terms in equation (A17), and note that \( \sigma_t^2 \) appears in \( E_t(r_{m,t+1}) \) as well as \( E_t(m_{t+1}) \). This leads to the following restriction,
\[
(\theta - 1) A_2 (\kappa_1 \nu_1 - 1) + A_{2,m} (\kappa_{1,m} \nu_1 - 1) + \frac{H_m}{2} = 0,
\] (A19)
which implies that
\[
A_{2,m} = \frac{(1 - \theta)A_2(1 - \kappa_1\nu_1) + 0.5H_m}{(1 - \kappa_{1,m}\nu_1)}.
\] (A20)
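With all the loadings in hand, the chain (A5), (A7), (A16), (A18), (A20), (A13), and (A14) can be evaluated numerically; the sketch below is illustrative, not the paper's calibration. Growth-rate parameters are from Table I, $\psi$, $\nu_1$, and $\kappa_1$ from the text and footnote 4, while $\gamma = 10$, $\kappa_{1,m} = \kappa_1$, and $\sigma_w$ are assumed values.

```python
# Numerical equity premium from the appendix formulas; gamma, kappa_{1,m}, and
# sigma_w are assumed illustrative values, the rest follow Table I and the text.
gamma, psi = 10.0, 1.5
rho, phi, phi_e, phi_d, sigma_bar = 0.979, 3.0, 0.044, 4.5, 0.0078
nu1, sigma_w = 0.987, 2.3e-6
kappa1 = kappa1m = 0.997

theta = (1 - gamma) / (1 - 1 / psi)

# Consumption-claim loadings (A5), (A7) and the prices of risk below (A10).
A1 = (1 - 1 / psi) / (1 - kappa1 * rho)
A2 = 0.5 * ((theta - theta / psi) ** 2 + (theta * A1 * kappa1 * phi_e) ** 2) \
     / (theta * (1 - kappa1 * nu1))
B = kappa1 * A1 * phi_e
lam_eta = -gamma
lam_e = (1 - theta) * B
lam_w = (1 - theta) * A2 * kappa1

# Market loadings (A16), (A18), (A20) and the return betas of (A12).
A1m = (phi - 1 / psi) / (1 - kappa1m * rho)
beta_e = kappa1m * A1m * phi_e
Hm = lam_eta ** 2 + (beta_e - lam_e) ** 2 + phi_d ** 2
A2m = ((1 - theta) * A2 * (1 - kappa1 * nu1) + 0.5 * Hm) / (1 - kappa1m * nu1)
beta_w = kappa1m * A2m

# Conditional market variance (A13) and equity premium (A14), evaluated at
# sigma_t^2 = sigma_bar^2; multiplying by 12 annualizes the monthly premium.
var_m = (beta_e ** 2 + phi_d ** 2) * sigma_bar ** 2 + beta_w ** 2 * sigma_w ** 2
prem_monthly = beta_e * lam_e * sigma_bar ** 2 + beta_w * lam_w * sigma_w ** 2 \
               - 0.5 * var_m
print(round(12 * prem_monthly, 4))           # around 4% per year with these inputs
```

Both risk channels contribute positively: the growth-rate term $\beta_{m,e}\lambda_{m,e}\sigma_t^2$ and the volatility term $\beta_{m,w}\lambda_{m,w}\sigma_w^2$, since $\beta_{m,w}$ and $\lambda_{m,w}$ are both negative.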
To derive the unconditional variance of the market return, note that
\[
r_{m,t+1} - E(r_{m,t+1}) =
\]
\[
\frac{x_t}{\psi} + \beta_{m,e}\sigma_te_{t+1} + \varphi_d\sigma_tu_{t+1} + A_{2,m}(\nu_1\kappa_{1,m} - 1)[\sigma_t^2 - E(\sigma_t^2)] + \beta_{m,w}\sigma_ww_{t+1}.
\] (A21)
Hence, the unconditional variance is
\[
Var(r_m) = \frac{\sigma_x^2}{\psi^2} + [\beta_{m,e}^2 + \varphi_d^2]\sigma^2 + [A_{2,m}(\nu_1\kappa_{1,m} - 1)]^2Var(\sigma_t^2) + \beta_{m,w}^2\sigma_w^2.
\] (A22)
The unconditional variance of \(z_{m,t}\), the price-dividend ratio for the market portfolio, can be derived as follows:
\[
Var(z_{m,t}) = A_{1,m}^2Var(x_t) + A_{2,m}^2Var(\sigma_t^2).
\] (A23)
Finally, note that the innovation to the market return volatility follows from equation (A12) and is
\[
var_{t+1}(r_{m,t+2}) - E_t[var_{t+1}(r_{m,t+2})] = (\beta_{m,e}^2 + \varphi_d^2)\sigma_ww_{t+1}.
\] (A24)
\subsection*{B. The Risk-Free Rate and Its Volatility}
To derive the risk-free rate, start with (A3) and plug in the risk-free rate for \(r_i\):
\[
r_{f,t} = -\theta \log(\delta) + \frac{\theta}{\psi}E_t[g_{t+1}] + (1 - \theta)E_tr_{a,t+1} - \frac{1}{2}Var_t[\frac{\theta}{\psi}g_{t+1} + (1 - \theta)r_{a,t+1}],
\] (A25)
subtract \((1 - \theta)r_{f,t}\) from both sides and divide by \(\theta\), where it is assumed that \(\theta \neq 0\). It then follows that
\[
r_{f,t} = -\log(\delta) + \frac{1}{\psi}E_t[g_{t+1}] + \frac{(1 - \theta)}{\theta}E_t[r_{a,t+1} - r_{f,t}] - \frac{1}{2\theta}Var_t[\frac{\theta}{\psi}g_{t+1} + (1 - \theta)r_{a,t+1}].
\] (A26)
Further, to solve the above expression, note that \( Var_t[\frac{\theta}{\psi}g_{t+1} + (1 - \theta)r_{a,t+1}] \equiv Var_t(m_{t+1}) \), and therefore,
\[
Var_t(m_{t+1}) = (\lambda^2_{m,\eta} + \lambda^2_{m,e})\sigma^2_t + \lambda^2_{m,w}\sigma^2_w. \tag{A27}
\]
The unconditional mean of \( r_{f,t} \) is derived by substituting the expression for the risk premium for \( r_{a,t+1} \) given in (A11) and (A27) into (A26). This substitution yields
\[
E(r_{f,t}) = -\log(\delta) + \frac{1}{\psi}E(g) + \frac{(1 - \theta)}{\theta}E[r_{a,t+1} - r_{f,t}] - \frac{1}{2\theta}[(\lambda^2_{m,\eta} + \lambda^2_{m,e})E[\sigma^2_t] + \lambda^2_{m,w}\sigma^2_w], \tag{A28}
\]
where note that \( E[\sigma^2_t] = \sigma^2 \).
The unconditional variance of \( r_{f,t} \) is:
\[
Var(r_{f,t}) = \left( \frac{1}{\psi} \right)^2 Var(x_t) + \left\{ \frac{1 - \theta}{\theta}Q_1 - Q_2 \frac{1}{2\theta} \right\}^2 Var(\sigma^2_t), \tag{A29}
\]
where \( Q_2 = (\lambda^2_{m,\eta} + \lambda^2_{m,e}) \), and \( Q_1 = (-\lambda_{m,\eta} + (1 - \theta)B^2 - 0.5(1 + B^2)) \), where \( B \) is defined above. Note that \( Q_1 \) determines the time-varying portion of the risk premium on \( r_{a,t+1} \). For all practical purposes, the variance of the risk-free rate is driven by the first term.
References
Abel, Andrew B., 1990, Asset prices under habit formation and catching up with the Joneses, *American Economic Review* 80, 38–42.
Abel, Andrew B., 1999, Risk premia and term premia in general equilibrium, *Journal of Monetary Economics* 43, 3–33.
Anderson, Evan, Lars P. Hansen, and Thomas Sargent, 2002, A quartet of semi-groups for model specification, detection, robustness, and the price of risk, Unpublished manuscript, University of Chicago.
Ang, Andrew, and Geert Bekaert, 2001, Stock return predictability: Is it there? Unpublished manuscript, Columbia University.
Ansley, Craig F., and Paul Newbold, 1980, Finite sample properties of estimators for autoregressive moving average models, *Journal of Econometrics* 13, 159–184.
Attanasio, Orazio P., and Guglielmo Weber, 1989, Intertemporal substitution, risk aversion and the Euler equation for consumption, *Economic Journal* 99, 59–73.
Bansal, Ravi, and Wilbur J. Coleman II, 1997, A monetary explanation of the equity premium, term premium and the risk-free rate puzzles, *Journal of Political Economy* 104, 1135–1171.
Bansal, Ravi, Varoujan Khatchatrian, and Amir Yaron, 2002, Interpretable asset markets? NBER Working paper 9383.
Bansal, Ravi, and Christian Lundblad, 2002, Market efficiency, asset returns, and the size of the risk premium in global equity markets, *Journal of Econometrics* 109, 195–237.
Bansal, Ravi, and Amir Yaron, 2000, Risks for the long run: A potential resolution of asset pricing puzzles, NBER Working paper 8059.
Barberis, Nicholas, Ming Huang, and Tano Santos, 2001, Prospect theory and asset pricing, *Quarterly Journal of Economics* 116, 1–53.
Barsky, Robert, and Bradford J. DeLong, 1993, Why does the stock market fluctuate? *Quarterly Journal of Economics* 108, 291–312.
Bollerslev, Timothy, Robert F. Engle, and Jeffrey Wooldridge, 1988, A capital asset pricing model with time-varying covariances, *Journal of Political Economy* 96, 116–131.
Campbell, John Y., 1996, Understanding risk and return, *Journal of Political Economy* 104, 298–345.
Campbell, John Y., 1999, Asset prices, consumption and the business cycle, in John B. Taylor and Michael Woodford, eds.: *Handbook of Macroeconomics*, Volume 1 (Elsevier Science, North-Holland, Amsterdam).
Campbell, John Y., and John H. Cochrane, 1999, By force of habit: a consumption-based explanation of aggregate stock market behavior, *Journal of Political Economy* 107, 205–251.
Campbell, John Y., and Ludger Hentschel, 1992, No news is good news: An asymmetric model of changing volatility in stock returns, *Journal of Financial Economics* 31, 281–318.
Campbell, John Y., and Robert J. Shiller, 1988, The dividend-price ratio and expectations of future dividends and discount factors, *Review of Financial Studies* 1, 195–227.
Cecchetti, Stephen G., Pok-Sang Lam, and Nelson C. Mark, 1990, Mean reversion in equilibrium asset prices, *American Economic Review* 80, 398–419.
Cecchetti, Stephen G., Pok-Sang Lam, and Nelson C. Mark, 1993, The equity premium and the risk free rate: matching the moments, *Journal of Monetary Economics* 31, 21–46.
Chapman, David, 2002, Does intrinsic habit formation actually resolve the equity premium puzzle? *Review of Economic Dynamics* 5, 618–645.
Cochrane, John H., 1988, How big is the random walk in GNP? *Journal of Political Economy* 96, 893–920.
Cochrane, John H., 1992, Explaining the variance of price dividend ratios, *Review of Financial Studies* 5, 243–280.
Constantinides, George, 1990, Habit formation: A resolution of the equity premium puzzle, *Journal of Political Economy* 98, 519–543.
Constantinides, George, and Darrell Duffie, 1996, Asset pricing with heterogeneous consumers, *Journal of Political Economy* 104, 219–240.
Drost, Feike C., and Theo E. Nijman, 1993, Temporal aggregation of GARCH processes, *Econometrica* 61, 909–927.
Epstein, Lawrence, and Stanley Zin, 1989, Substitution, risk aversion and the temporal behavior of consumption and asset returns: A theoretical framework, *Econometrica* 57, 937–969.
Evans, Martin, 1998, Real rates, expected inflation and inflation risk premia, *Journal of Finance* 53, 187–218.
Fama, Eugene F., and Kenneth R. French, 1989, Business conditions and expected returns on stocks and bonds, *Journal of Financial Economics* 25, 23–49.
French, Kenneth R., William Schwert, and Robert F. Stambaugh, 1987, Expected stock returns and volatility, *Journal of Financial Economics* 19, 3–29.
Glosten, Lawrence, Ravi Jagannathan, and David Runkle, 1993, On the relation between expected value and the volatility of the nominal excess return on stocks, *Journal of Finance* 48, 1779–1801.
Goyal, Amit, and Ivo Welch, 1999, The myth of predictability: Does the dividend yield forecast the equity premium? Unpublished manuscript, UCLA.
Guvenen, Fatih, 2001, Mismeasurement of the elasticity of intertemporal substitution: The role of limited stock market participation, Unpublished manuscript, University of Rochester.
Hall, Robert E., 1988, Intertemporal substitution in consumption, *Journal of Political Economy* 96, 339–357.
Hansen, Lars P., and Ravi Jagannathan, 1991, Implications of security market data for models of dynamic economies, *Journal of Political Economy* 99, 225–262.
Hansen, Lars P., Thomas Sargent, and Thomas Tallarini, 1999, Robust permanent income and pricing, *Review of Economic Studies* 66, 873–907.
Hansen, Lars P., and Kenneth Singleton, 1982, Generalized instrumental variables estimation of nonlinear rational expectations models, *Econometrica* 50, 1269–1286.
Heaton, John, 1995, An empirical investigation of asset pricing with temporally dependent preference specifications, *Econometrica* 63, 681–717.
Heaton John, and Deborah Lucas, 1996, Evaluating the effects of incomplete markets on risk sharing and asset pricing, *Journal of Political Economy* 104, 443–487.
Hodrick, Robert J., 1992, Dividend yields and expected stock returns: Alternative procedures for inference and measurement, *Review of Financial Studies* 5, 357–386.
Hurwicz, Leonid, 1950, Least square bias in time series, in T. Koopmans, ed.: *Statistical Inference in Dynamic Economic Models* (Wiley, New York).
Jagannathan, Ravi, Ellen R. McGrattan, and Anna Scherbina, 2000, The declining U.S. equity premium, *Federal Reserve Bank of Minneapolis, Quarterly Review* 24, 3–19.
Judd, Kenneth, 1998, *Numerical Methods in Economics* (MIT Press, Cambridge, MA).
Kandel, Shmuel, and Robert F. Stambaugh, 1991, Asset returns and intertemporal preferences, *Journal of Monetary Economics* 27, 39–71.
LeRoy, Stephen, and Richard Porter, 1981, The present value relation: Tests based on implied variance bounds, *Econometrica* 49, 555–574.
Lettau, Martin, and Sydney Ludvigson, 2001, Consumption, aggregate wealth, and expected stock returns, *Journal of Finance* 56, 815–849.
Mehra, Rajnish, and Edward C. Prescott, 1985, The equity premium: A puzzle, *Journal of Monetary Economics* 15, 145–161.
Nelson, Daniel, 1991, Conditional heteroskedasticity in asset returns: A new approach, *Econometrica* 59, 347–370.
Newey, Whitney K., and Kenneth D. West, 1987, A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix, *Econometrica* 55, 703–708.
Shephard, Neil G., and Andrew C. Harvey, 1990, On the probability of estimating a deterministic component in the local level model, *Journal of Time Series Analysis* 11, 339–347.
Shiller, Robert J., 1981, Do stock prices move too much to be justified by subsequent changes in dividends? *American Economic Review* 71, 421–436.
Vissing-Jorgensen, Annette, 2002, Limited asset market participation and the elasticity of intertemporal substitution, *Journal of Political Economy* 110, 825–853.
Wachter, Jessica, 2002, Habit formation and returns on stocks and bonds, Unpublished manuscript, NYU.
Weil, Philippe, 1989, The equity premium puzzle and the risk-free rate puzzle, *Journal of Monetary Economics* 24, 401–421.
Whitelaw, Robert F., 1994, Time variations and covariations in the expectation and volatility of stock market returns, *Journal of Finance* 49, 515–541.
Whitelaw, Robert F., 2000, Stock market risk and return: An equilibrium approach, *Review of Financial Studies* 13, 521–547.
Wilcox, David, 1992, The construction of the U.S. consumption data: Some facts and their implications for empirical work, *American Economic Review* 82, 922–941.
Notes
\(^1\)Notable papers addressing asset market anomalies include Abel (1990),(1999), Bansal and Coleman (1997), Barberis, Huang, and Santos (2001), Campbell and Cochrane (1999), Cecchetti, Lam, and Mark (1990), Chapman (2002), Constantinides (1990), Constantinides and Duffie (1996), Hansen, Sargent, and Tallarini (1999), Heaton (1995), Heaton and Lucas (1996), and Kandel and Stambaugh (1991).
\(^2\)Barsky and DeLong (1993) choose a value of 1. Our choice ensures that the growth rate process is stationary.
\(^3\)In particular, if \( \psi > 1 \) and \( \gamma > 1 \) then \( \theta \) will be negative. Note that when \( \theta = 1 \), that is \( \gamma = (1/\psi) \), the above recursive preferences collapse to the standard case of expected utility. Further, when \( \theta = 1 \) and in addition \( \gamma = 1 \), we get the standard case of log utility.
\(^4\)Note that \(\kappa_1 = \exp(\bar{z})/(1 + \exp(\bar{z}))\); \(\kappa_1\) is approximately 0.997, which is consistent with the magnitude of \(\bar{z}\) in our sample and with magnitudes used in Campbell and Shiller (1988).
\(^5\)Similar growth rate dynamics (see equation (4)) are also considered in Campbell (1999), Cecchetti, Lam, and Mark (1993), and Wachter (2002) to model the consumption growth rate.
\(^6\)The above specification models the growth rates of consumption (nondurables plus services) and dividends. Consequently, as in many other papers (e.g., Campbell and Cochrane (1999)), consumption and dividends are not cointegrated. It is an empirical issue if these series are cointegrated or not. Additionally, these growth-rate focused models also do not consider the implications for the ratio of dividends to consumption. It is possible that confronting the model specification for consumption and dividends with these additional issues may provide further insights regarding the appropriate time-series model for them—we leave this for future research.
\(^7\)An alternative interpretation with the power utility model is that higher expected growth rates increase the risk-free rate to an extent that discounting dominates the effects of higher expected growth rates. This leads to a fall in asset prices.
\(^8\)Recall that in our specification the conditional volatility and expected growth rate processes are independent. With power utility, the volatility shocks will not be reflected in the innovations of the IMRS. With the Epstein and Zin (1989) preferences, in spite of this independence, volatility shocks influence the innovations in the IMRS.
As in Campbell and Cochrane (1999), given the normality of the growth rate dynamics, the maximal Sharpe ratio is simply given by the standard deviation of the log pricing kernel.
The evidence regarding the model is based on numerical solutions using standard polynomial-based projection methods discussed in Judd (1998). The numerical results are quite close to those based on the approximate analytical solutions.
The first-order autocorrelations for annual consumption growth in 1951 to 1999 and 1961 to 1999 are 0.38 and 0.44, respectively – hence the consumption growth autocorrelations vary with samples. Based on Table I, both estimates are well within the model-based 5% confidence interval for the first-order autocorrelation. We have focused on annual data (consumption and dividends) to avoid dealing with seasonalities and other measurement problems discussed in Wilcox (1992).
Also note that it is difficult to detect high-frequency time varying volatility (e.g., GARCH) effects once the data is time-aggregated (see Nelson (1991) and Drost and Nijman (1993)).
To derive analytical expressions we have assumed that the volatility process is conditionally normal. When we solve the model numerically we ensure that the volatility is positive by replacing negative realizations with a very small number. This happens for about 5% of the realizations; hence, the possibility that volatility in equation (8) can become negative is primarily a technical issue.
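A minimal sketch of this truncation scheme, using the Case II volatility parameters from Table IV; the floor value is our assumption, since the footnote does not specify the "very small number":

```python
import numpy as np

# AR(1) dynamics for the conditional variance, as in equation (8):
# sigma2_{t+1} = sigma2_bar + nu1 * (sigma2_t - sigma2_bar) + sigma_w * w_{t+1}
rng = np.random.default_rng(0)
nu1, sigma_w = 0.987, 0.23e-5       # Table IV parameter values
sigma2_bar = 0.0078 ** 2            # unconditional variance level
floor = 1e-8                        # assumed "very small number"

T = 840
sigma2 = np.empty(T)
sigma2[0] = sigma2_bar
for t in range(T - 1):
    sigma2[t + 1] = sigma2_bar + nu1 * (sigma2[t] - sigma2_bar) \
        + sigma_w * rng.standard_normal()

neg_frac = float(np.mean(sigma2 < 0))   # share of draws that would be negative
sigma2 = np.maximum(sigma2, floor)      # truncate as described in the footnote
```

The truncation leaves the dynamics unchanged except at the floor, which is why the footnote treats negativity as a technical rather than an economic issue.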
Consistent with Lettau and Ludvigson (2001), predictability coefficients and $R^2$s based on the wealth-consumption ratio follow the same pattern and are slightly larger than those based on price-dividend ratios.
Our model can be easily modified to further lower the predictability of growth rates. Consider an augmented model (as in Cecchetti, Lam, and Mark (1993)) that allows for additional predictable movements in dividend growth rates that are unrelated to consumption. This will not affect the risk-free rate and the risk premia in the model, but will additionally lower the ability of price-dividend ratios to predict future consumption growth rates.
For explicit details of this decomposition see Cochrane (1992). Specifically, these represent the percentage of $\text{var}(p - d)$ accounted for by returns and dividend growth rates: $100 \times \sum_{j=1}^{15} \Omega^j \frac{\text{cov}(p_t - d_t, x_{t+j})}{\text{Var}(p_t - d_t)}$, where $x = -r$ and $x = g_d$, respectively, and $\Omega = 1/(1 + E(r))$.
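A sketch of this computation on placeholder series (the function and variable names are ours; in the paper the inputs are the model-implied log price-dividend ratio, returns, and dividend growth rates):

```python
import numpy as np

def pd_variance_shares(pd_ratio, ret, g_d, horizon=15):
    """Percent of var(p - d) attributed to future returns (x = -r) and to
    future dividend growth (x = g_d), following Cochrane (1992)."""
    omega = 1.0 / (1.0 + ret.mean())
    var_pd = pd_ratio.var()
    shares = {}
    for name, x in (("returns", -ret), ("dividend growth", g_d)):
        total = 0.0
        for j in range(1, horizon + 1):
            cov_j = np.cov(pd_ratio[:-j], x[j:])[0, 1]   # cov(p_t - d_t, x_{t+j})
            total += omega ** j * cov_j
        shares[name] = 100.0 * total / var_pd
    return shares

# toy illustration on placeholder series
rng = np.random.default_rng(1)
T = 500
pd_ratio = rng.standard_normal(T)
ret = 0.08 + 0.15 * rng.standard_normal(T)
g_d = 0.02 + 0.11 * rng.standard_normal(T)
shares = pd_variance_shares(pd_ratio, ret, g_d)
```

With white-noise inputs the two shares are small and need not sum to 100; on model-implied series they decompose the price-dividend variance as described.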
The explicit formulas for the real term structure and the term premia are presented in Bansal and Yaron (2000). The negative real term premia of our model are consistent with the evidence provided in Evans (1998), who documents that for inflation-indexed bonds in the U.K. (1983 to 1995) the term premia are significantly negative (less than $-2\%$ at the 1-year horizon), while the term premia for nominal bonds are very slightly positive.
Table I
Annualized Time-Averaged Growth Rates
The model parameters are based on the process given in equation (4). The parameters are $\mu = \mu_d = 0.0015$, $\rho = 0.979$, $\sigma = 0.0078$, $\phi = 3$, $\varphi_e = 0.044$, and $\varphi_d = 4.5$. The statistics for the data are based on annual observations from 1929 to 1998. Consumption is real nondurables and services (BEA); dividends are from the CRSP value-weighted return. The expression $AC(j)$ is the $j^{th}$ autocorrelation, $VR(j)$ is the $j^{th}$ variance ratio, and $corr$ denotes the correlation. Standard errors are Newey and West (1987) corrected using 10 lags. The statistics for the model are based on 1,000 simulations each with 840 monthly observations that are time-aggregated to an annual frequency. The Mean column displays the mean across the simulations. The 95% and 5% columns display the estimated percentiles of the simulated distribution. The P-val column reports the proportion of simulations in which the statistic of interest was larger than the corresponding estimate in the data. The Pop column refers to the population value.
| Variable | Estimate | S.E. | Mean | 95% | 5% | P-val | Pop |
|--------------|----------|--------|------|------|------|-------|-----|
| $\sigma(g)$ | 2.93 | (0.69) | 2.72 | 3.80 | 2.01 | 0.37 | 2.88|
| $AC(1)$ | 0.49 | (0.14) | 0.48 | 0.65 | 0.21 | 0.53 | 0.53|
| $AC(2)$ | 0.15 | (0.22) | 0.23 | 0.50 | -0.17| 0.70 | 0.27|
| $AC(5)$ | -0.08 | (0.10) | 0.13 | 0.46 | -0.13| 0.93 | 0.09|
| $AC(10)$ | 0.05 | (0.09) | 0.01 | 0.32 | -0.24| 0.80 | 0.01|
| $VR(2)$ | 1.61 | (0.34) | 1.47 | 1.69 | 1.22 | 0.17 | 1.53|
| $VR(5)$ | 2.01 | (1.23) | 2.26 | 3.78 | 0.79 | 0.63 | 2.36|
| $VR(10)$ | 1.57 | (2.07) | 3.00 | 6.51 | 0.76 | 0.77 | 2.96|
| $\sigma(g_d)$| 11.49 | (1.98) | 10.96| 15.47| 7.79 | 0.43 | 11.27|
| $AC(1)$ | 0.21 | (0.13) | 0.33 | 0.57 | 0.09 | 0.53 | 0.39|
| $corr(g,g_d)$| 0.55 | (0.34) | 0.31 | 0.60 | -0.03| 0.07 | 0.35|
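The simulation design described in the caption (840 monthly observations time-aggregated to an annual frequency, then summary statistics) can be sketched as follows; iid placeholder growth rates stand in for the model's simulated series, and the variance-ratio convention is our assumption:

```python
import numpy as np

def annualize(monthly_growth):
    """Time-aggregate monthly log growth rates into annual ones (12 per year)."""
    return monthly_growth.reshape(-1, 12).sum(axis=1)

def variance_ratio(g, j):
    """VR(j): variance of overlapping j-year sums over j times the 1-year variance."""
    sums = np.convolve(g, np.ones(j), mode="valid")
    return float(sums.var() / (j * g.var()))

rng = np.random.default_rng(2)
# iid placeholder for the model's 840 simulated monthly growth rates
g_monthly = 0.0015 + 0.0078 * rng.standard_normal(840)
g_annual = annualize(g_monthly)     # 70 annual observations

ac1 = float(np.corrcoef(g_annual[:-1], g_annual[1:])[0, 1])
vr2 = variance_ratio(g_annual, 2)
```

Repeating this 1,000 times with the model's actual growth-rate process, and tabulating the mean and the 5th and 95th percentiles of each statistic, reproduces the structure of the table's Model columns.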
Table II
Asset Pricing Implications – Case I
This table provides information regarding the model without fluctuating economic uncertainty (i.e., Case I, where $\sigma_w = 0$). All entries are based on $\delta = 0.998$. In Panel A the parameter configuration follows that in Table I, i.e., $\mu = \mu_d = 0.0015$, $\rho = 0.979$, $\sigma = 0.0078$, $\phi = 3$, $\varphi_e = 0.044$, and $\varphi_d = 4.5$. Panels B and C describe the changes in the relevant parameters. The expressions $E(R_m - R_f)$ and $E(R_f)$ are, respectively, the annualized equity premium and mean risk-free rate. The expressions $\sigma(R_m)$, $\sigma(R_f)$, and $\sigma(p - d)$ are the annualized volatilities of the market return, risk-free rate, and the log price-dividend ratio, respectively.
| $\gamma$ | $\psi$ | $E(R_m - R_f)$ | $E(R_f)$ | $\sigma(R_m)$ | $\sigma(R_f)$ | $\sigma(p - d)$ |
|----------|--------|----------------|----------|---------------|---------------|----------------|
| | | | | | | |
| **Panel A: $\phi = 3.0$, $\rho = 0.979$** | | | | | | |
| 7.5 | 0.5 | 0.55 | 4.80 | 13.11 | 1.17 | 0.07 |
| 7.5 | 1.5 | 2.71 | 1.61 | 16.21 | 0.39 | 0.16 |
| 10.0 | 0.5 | 1.19 | 4.89 | 13.11 | 1.17 | 0.07 |
| 10.0 | 1.5 | 4.20 | 1.34 | 16.21 | 0.39 | 0.16 |
| | | | | | | |
| **Panel B: $\phi = 3.5$, $\rho = 0.979$** | | | | | | |
| 7.5 | 0.5 | 1.11 | 4.80 | 14.17 | 1.17 | 0.10 |
| 7.5 | 1.5 | 3.29 | 1.61 | 18.23 | 0.39 | 0.19 |
| 10.0 | 0.5 | 2.07 | 4.89 | 14.17 | 1.17 | 0.10 |
| 10.0 | 1.5 | 5.10 | 1.34 | 18.23 | 0.39 | 0.19 |
| | | | | | | |
| **Panel C: $\phi = 3.0$, $\rho = \varphi_e = 0$** | | | | | | |
| 7.5 | 0.5 | -0.74 | 4.02 | 12.15 | 0.00 | 0.00 |
| 7.5 | 1.5 | -0.74 | 1.93 | 12.15 | 0.00 | 0.00 |
| 10.0 | 0.5 | -0.74 | 3.75 | 12.15 | 0.00 | 0.00 |
| 10.0 | 1.5 | -0.74 | 1.78 | 12.15 | 0.00 | 0.00 |
Table III
Properties of Consumption Volatility
The entries in Panel A are the variance ratios ($VR(j)$) for $|\epsilon_{g^a,t}|$, which is the absolute value of the residual from the regression $g_t^a = \sum_{j=1}^{5} A_j g_{t-j}^a + \epsilon_{g^a,t}$, where $g_t^a$ denotes annual consumption growth rate. Panel B provides regression results for $|\epsilon_{g^a,t+j}| = \alpha + B(j)(p_t - d_t) + v_{t+j}$, and $j$ indicates the forecast horizon in years. The statistics are based on annual observations from 1929 to 1998 of real nondurables and services consumption (BEA). The price-dividend ratio is based on the CRSP value-weighted return. Standard errors are Newey and West (1987) corrected using 10 lags.
| Horizon | Panel A: $VR(j)$ | S.E. | Panel B: $B(j)$ | S.E. | $R^2$ |
|---------|------------------|------|-----------------|------|-------|
| 2 | 0.95 | (0.38) | -0.11 | (0.04) | 0.06 |
| 5 | 1.26 | (1.09) | -0.10 | (0.05) | 0.04 |
| 10 | 1.75 | (2.46) | -0.08 | (0.08) | 0.03 |
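The Panel B regression with Newey and West (1987) standard errors can be sketched with a hand-rolled HAC estimator on placeholder data (the data-generating process below is ours, chosen only so that residual volatility declines in the price-dividend ratio):

```python
import numpy as np

def ols_newey_west(y, x, lags=10):
    """OLS of y on [1, x] with Newey-West (1987) standard errors."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    Xu = X * u[:, None]                    # per-observation moment terms
    S = Xu.T @ Xu / n                      # lag-0 covariance
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1.0)         # Bartlett kernel weight
        G = Xu[l:].T @ Xu[:-l] / n
        S += w * (G + G.T)
    XtX_inv = np.linalg.inv(X.T @ X / n)
    V = XtX_inv @ S @ XtX_inv / n          # HAC sandwich covariance
    return beta, np.sqrt(np.diag(V))

# placeholder data: |eps_{t+1}| regressed on the current price-dividend ratio
rng = np.random.default_rng(3)
n_obs = 70
pd_lag = rng.standard_normal(n_obs)                         # p_t - d_t
abs_eps = np.abs(np.exp(-0.1 * pd_lag) * rng.standard_normal(n_obs))
beta, se = ols_newey_west(abs_eps, pd_lag, lags=10)
```

The HAC correction matters here because the overlapping and persistent residuals in the longer-horizon regressions are serially correlated.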
Table IV
Asset Pricing Implications – Case II
The entries are model population values of asset prices. The model incorporates fluctuating economic uncertainty (i.e., Case II) using the process in equation (8). In addition to the parameter values given in Panel A of Table II ($\delta = 0.998$, $\mu = \mu_d = 0.0015$, $\rho = 0.979$, $\sigma = 0.0078$, $\phi = 3$, $\varphi_e = 0.044$, and $\varphi_d = 4.5$), the parameters of the stochastic volatility process are $\nu_1 = 0.987$ and $\sigma_w = 0.23 \times 10^{-5}$. The predictable variation of realized volatility is 5.5%. The expressions $E(R_m - R_f)$ and $E(R_f)$ are, respectively, the annualized equity premium and mean risk-free rate. The expressions $\sigma(R_m)$, $\sigma(R_f)$, and $\sigma(p - d)$ are the annualized volatilities of the market return, risk-free rate, and the log price-dividend ratio, respectively. The expressions $AC1$ and $AC2$ denote, respectively, the first and second autocorrelation. Standard errors are Newey and West (1987) corrected using 10 lags.
| Variable | Estimate | S.E. | Model $\gamma = 7.5$ | Model $\gamma = 10$ |
|-------------------|----------|--------|----------------------|---------------------|
| **Returns** | | | | |
| $E(r_m - r_f)$ | 6.33 | (2.15) | 4.01 | 6.84 |
| $E(r_f)$ | 0.86 | (0.42) | 1.44 | 0.93 |
| $\sigma(r_m)$ | 19.42 | (3.07) | 17.81 | 18.65 |
| $\sigma(r_f)$ | 0.97 | (0.28) | 0.44 | 0.57 |
| **Price Dividend**| | | | |
| $E(exp(p - d))$ | 26.56 | (2.53) | 25.02 | 19.98 |
| $\sigma(p - d)$ | 0.29 | (0.04) | 0.18 | 0.21 |
| $AC1(p - d)$ | 0.81 | (0.09) | 0.80 | 0.82 |
| $AC2(p - d)$ | 0.64 | (0.15) | 0.65 | 0.67 |
Table V
Decomposing the Variance of the Pricing Kernel
Entries are the relative variance of different shocks to the variance of the pricing kernel. The entries are based on the model configuration described in Table IV with $\gamma = 10$. The volatility of the maximal Sharpe ratio is annualized in order to make it comparable to the Sharpe ratio on annualized returns.
| Volatility of Pricing Kernel | Relative Variance: Independent Consumption | Relative Variance: Expected Growth Rate | Relative Variance: Fluctuating Economic Uncertainty |
|------------------------------|--------------------------------------------|-----------------------------------------|------------------------------------------------------|
| 0.73                         | 14%                                        | 47%                                     | 39%                                                  |
Table VI
Predictability of Returns, Growth Rates, and Price-Dividend Ratios
This table provides evidence on predictability of future excess returns and growth rates by price-dividend ratios, and the predictability of price-dividend ratios by consumption volatility. The entries in Panel A correspond to regressing $r^e_{t+1} + r^e_{t+2} + \cdots + r^e_{t+j} = \alpha(j) + B(j) \log(P_t/D_t) + v_{t+j}$, where $r^e_{t+1}$ is the excess return, and $j$ denotes the forecast horizon in years. The entries in Panel B correspond to regressing $g^a_{t+1} + g^a_{t+2} + \cdots + g^a_{t+j} = \alpha(j) + B(j) \log(P_t/D_t) + v_{t+j}$, and $g^a$ is annualized consumption growth. The entries in Panel C correspond to $\log(P_{t+j}/D_{t+j}) = \alpha(j) + B(j)|\epsilon_{g^a,t}| + v_{t+j}$, where $|\epsilon_{g^a,t}|$ is the volatility of consumption defined as the absolute value of the residual from regressing $g^a_t = \sum_{j=1}^{5} A_j g^a_{t-j} + \epsilon_{g^a,t}$. The model is based on the process in equation (8), with parameter configuration given in Table IV and $\gamma = 10$. The entries for the model are based on 1,000 simulations each with 840 monthly observations that are time-aggregated to an annual frequency. Standard errors are Newey and West (1987) corrected using 10 lags.
| Variable | Panel A: Data | S.E. | Model | Panel B: Data | S.E. | Model | Panel C: Data | S.E. | Model |
|----------|---------------|------|-------|---------------|------|-------|---------------|------|-------|
| $B(1)$ | -0.08 | (0.07)| -0.18 | 0.04 | (0.03)| 0.06 | -8.78 | (3.58)| -3.74 |
| $B(3)$ | -0.37 | (0.16)| -0.47 | 0.03 | (0.05)| 0.12 | -8.32 | (2.81)| -2.54 |
| $B(5)$ | -0.66 | (0.21)| -0.66 | 0.02 | (0.04)| 0.15 | -8.65 | (2.67)| -1.56 |
| $R^2(1)$ | 0.02 | (0.04)| 0.05 | 0.13 | (0.09)| 0.10 | 0.12 | (0.05)| 0.14 |
| $R^2(3)$ | 0.19 | (0.13)| 0.10 | 0.02 | (0.05)| 0.12 | 0.11 | (0.04)| 0.08 |
| $R^2(5)$ | 0.37 | (0.15)| 0.16 | 0.01 | (0.02)| 0.11 | 0.12 | (0.04)| 0.05 |
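The long-horizon regressions in Panels A and B cumulate $j$ years of returns or growth rates on the left-hand side; constructing the overlapping sums can be sketched as follows (placeholder data, our variable names):

```python
import numpy as np

def long_horizon_sum(x, j):
    """Overlapping j-period sums y_t = x_{t+1} + ... + x_{t+j}."""
    return np.convolve(x, np.ones(j), mode="valid")[1:]

def predictive_slope(lhs, predictor, j):
    """OLS slope from regressing j-year-ahead sums on the current predictor."""
    y = long_horizon_sum(lhs, j)
    x = predictor[: len(y)]
    x_dm = x - x.mean()
    return float((x_dm * (y - y.mean())).sum() / (x_dm ** 2).sum())

rng = np.random.default_rng(4)
T = 70
pd_ratio = rng.standard_normal(T)          # placeholder log(P/D)
# placeholder excess returns that load negatively on the lagged ratio
excess_ret = 0.06 - 0.2 * np.roll(pd_ratio, 1) + 0.1 * rng.standard_normal(T)
b3 = predictive_slope(excess_ret, pd_ratio, j=3)
```

Because the left-hand-side sums overlap, the residuals are serially correlated by construction, which is why the table's standard errors carry the Newey-West correction.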
2006 Report to the Legislature (Version 1.0)
Submitted by the Criminal and Juvenile Justice Information Policy Group
January 15, 2007
I. Executive Summary
II. Legislative Recommendations
III. Activities of the Criminal and Juvenile Justice Information Policy Group and Task Force in 2006
IV. CriMNet Grant Program
V. 2006 CriMNet Program Office Projects
VI. Ongoing CriMNet Program Office Support Activities
VII. 2006 Other CriMNet Enterprise Projects
VIII. Additional Legislative Reporting Requirements
IX. Appendices
I. Executive Summary
Background:
The Challenge: Justice and public safety services in Minnesota are delivered by more than 1,100 agencies and branches of local, state and federal government. These agencies are often headed by elected officials and each has a different enabling authority and funding source. The information systems for each agency were often developed to meet individual operational needs without consideration of other criminal justice agency needs. Justice and public safety services are composed of many decisions, from an initial decision to investigate; to arrest; to detain; to release pre-trial; to charge, adjudicate, or dispose of a case; as well as to sentence to an array of penalties and conditions. All of these decisions are based on information. Often that information is missing, incomplete, inaccurate, not available in a timely manner, or not available in a simple consolidated view for the particular decision point. Yet Minnesota state and local governmental units spend more than $2.4 billion per year (2004) to operate a justice system that depends on complete, accurate and timely information.
Policy Foundation and Governance: Efforts to improve the sharing of criminal justice information began in the early 1990s, guided by the provisions of Minnesota Statutes 299C.65, which created the Criminal and Juvenile Justice Information Policy Group and Task Force (Policy Group and Task Force). The Policy Group, after several changes including those made during the 2005 legislative session, consists of four commissioners from the Executive Branch, four members of the Judicial Branch, and the chair and first vice-chair of the Task Force. The Policy Group is responsible for the successful completion of statewide criminal justice information system integration. The Task Force, currently made up of 35 representatives (criminal justice professionals, legislators, state agency representatives, local municipal representatives and citizen members), was also created to assist the Policy Group in making recommendations to the legislature regarding criminal justice information systems. In 2001, the legislature created a central program office and executive director to coordinate and oversee criminal justice information integration that has come to be known as CriMNet.
The CriMNet Enterprise: Today CriMNet is Minnesota’s program to integrate criminal justice information. This program involves defining what information criminal justice professionals need, identifying barriers that prevent sharing of information among criminal justice professionals, offering solutions for these professionals, and creating the business and technical standards that are needed to share information. Specifically, the scope of the CriMNet Program is to:
Support the creation and maintenance of a criminal justice information framework that is accountable, credible, seamless, and responsive to the victim, the public, and the offender. As a result, the right information will be in the hands of the right people at the right time and in the right place.
By the *right information*, we mean that information will be accurate and complete and expressed in a standardized way, so that it is reliable and understandable.
By the *right people*, we mean that people with different roles in the criminal justice system will have role-based views of the information that they need to do their jobs, and that access to certain private information is properly restricted.
By the *right time*, we mean that practitioners and the public are provided information when they need it – as events occur.
By the *right place*, we mean wherever the information is needed - squad car or courtroom, for example.
The primary results the CriMNet Program seeks are:
- To accurately identify individuals;
- To make sure that criminal justice records are complete, accurate, and readily available;
- To ensure the availability of an individual’s current status in the criminal justice system;
- To provide standards for data sharing and analysis;
- To maintain the security of information; and
- To accomplish our tasks in an efficient and effective manner.
The CriMNet Program is made up of a number of projects and initiatives at the state and local level to improve integration. It’s important to note there are many ways to enhance the way agencies share information – the 1,100 criminal justice agencies in Minnesota are diverse and no single solution will connect them all effectively.
**Early and Recent Integration Activities**
Early integration activities focused on filling gaps in statewide criminal and juvenile justice data, as well as creating a domestic abuse order for protection (OFP) database and system to make restraining orders available to dispatchers and to squad cars with mobile data terminals. Results of these efforts, in addition to the OFP system, include a juvenile criminal history of serious juvenile offenders; a predatory offender registration database; a database of arrest/booking photos; a database of statewide probation and detention data; a statewide person-based court information system (still being rolled out statewide through 2007); electronic fingerprint capture at booking locations statewide, which reduced the time to identify a suspect from months to hours; and efforts to reduce the number of disposition records not linked to fingerprints. Supporting activities included initial attempts to create an architecture of information technology needs across the criminal justice system and standards for integration, as well as local integration planning and implementation programs.
Minnesota has been an early leader in statewide integration activities in the United States, and in fact federal leadership on integration initiatives followed Minnesota’s initiatives
by many years. Staff members from the CriMNet Program Office participate in a range of national integration programs and activities.
More recently, an integrated search service has been developed that supports querying eight state and national criminal justice data repositories from a single user interface (with appropriate permissions), along with a statewide statute service to improve the accuracy of charging and penalty information in statewide justice information systems; small-jurisdiction integration planning support; a security architecture for integrated justice systems; and various policies and service-level agreements on acceptable data use and other privacy considerations. We are pleased to report the status of these activities in more detail in the report that follows.
2006 Planning for the 2007 Legislative Session (and Beyond)
Strategic “Framework” (Minnesota Criminal Justice Integration Framework and Blueprint): All CriMNet projects are related to requirements gathered from criminal justice agencies; those requirements are gathered at regular liaison meetings with local agencies and managed and tracked throughout all Program Office activities. These requirements roll up into the high-level strategies debated by the Task Force and decided by the Policy Group. Earlier this year the Program Office, Task Force and Policy Group undertook an extensive prioritization process that has resulted in a “Framework” document that identifies the long-term goals and strategies for integration (see Appendix A). The Framework is conceptualized as a triangle: a policy foundation for all activities; a leg of enabling activities (those activities that support delivery of information); and a leg for delivery (those systems and computer services that actually collect and deliver data to and from users).
This process was lengthy and engaged the Task Force and the stakeholder groups it represents in identifying their key priorities and goals for CriMNet. The Task Force conducted a survey of its constituent groups where criminal justice practitioners weighed in regarding their expectations for CriMNet. Those results were then considered as the Task Force members weighed its priorities, which were then reported to the Policy Group. These priorities are reflected in the Framework document, which also shows where activities are dependent upon progress in other areas.
The Framework also identifies key ongoing activities that have become Program Office priorities. This Framework, along with the detailed supporting plans for each initiative, represents, in practice, the concept of the Blueprint for Integration identified by the CriMNet Strategic Plan and Scope Statement. This Framework is far more than a work plan, though – it also provides a high-level strategic vision for enterprise activities. The document provides specific business outcomes and proposed performance measures for each identified initiative.
As each prioritized strategic initiative is commenced, project documentation will expand upon policies, definitions, standards and strategies for use by state and local agencies in their effort to participate in each initiative. When complete, the blueprint will include policies (data policies and others), business and technical integration standards, strategies, infrastructure definition, and interfaces. The blueprint will describe what is required to participate in each justice information sharing initiative. Developing the blueprint for each enterprise criminal justice information initiative is explicitly assumed to be critical to the success of each initiative and of the CriMNet Program. Detailed project plans including business cases, scope statements, milestones and work breakdown structures will be added for each initiative to complete the detail of the blueprint.
Some highlight initiatives from the Framework include those which would allow users to view records from a single event in a consolidated record: the ability to see when one individual has had several interactions with different justice agencies without having to resort to the time-consuming effort of clicking through voluminous information from a number of sources. This will be enabled by linking records electronically and linking more of them to fingerprints, the biological certainty that records belong to the same individual. This improves the justice process by eliminating both the problem of failing to identify a real offender and that of misidentifying as an offender someone who is not. These initiatives will also be enabled by security and other technologies that support accurate, consolidated and customizable views based on the role of the practitioner and the lawful purpose for access. In addition, technologies are available today that will simplify delivery of these services beyond what was possible a few short years ago when CriMNet first began its work.
Other initiatives seek to increase the types of data available, improve data quality and ensure data privacy policies are enabled through justice data delivery. Policy initiatives related to background checks and expungements are also included in the Framework for future consideration.
**Update on Governance**
In 2005, the legislature added the chair and first vice-chair of the Task Force as full voting members of the Policy Group. In 2005 and 2006, the Policy Group reviewed its existing governance structures. This resulted in the Policy Group and the Task Force adopting new charters. The Policy Group first adopted a charter that clarifies both the role of the executive director and the expectations for the Task Force (see Appendix B). The Task Force then adopted a charter in conformance with the Policy Group’s charter (see Appendix C). In addition, the executive director has been made a part of the BCA director team, which consists of senior-level management, and the executive director also reports to the superintendent to ensure day-to-day accountability and oversight.
Conclusion
In conclusion, the process to create the Framework represented a key turning point for CriMNet – the Policy Group, Task Force and CriMNet Program Office have all identified the same key goals and voiced their commitment to assuring the success of these initiatives in the coming years. This kind of cooperation and collaboration is an important milestone for Minnesota, as is the effort to view criminal justice information integration from an enterprise perspective.
This strategic direction is very positive for the state of Minnesota and the Policy Group feels strongly it is moving Minnesota toward the vision identified in the CriMNet Strategic Plan and Scope Statement. Progress already achieved in the priorities identified is detailed on the following pages. As Minnesota moves forward with its integration efforts, new priorities identified in the Framework – reflective of the needs communicated by local agencies and users – will be the key focus of the CriMNet Program Office. Expectations for success are high, and the Policy Group is confident they can be met, given appropriate resources and support.
II. Legislative Recommendations
Pursuant to Minnesota Statutes 299C.65, Subdivision 2, the Criminal and Juvenile Justice Information Policy Group (Policy Group) must provide a report to the Legislature on January 15 each year detailing the statutory changes and/or appropriations necessary to ensure the efficient and effective operation of criminal justice information systems. This same statute requires the Criminal and Juvenile Justice Information Task Force (Task Force) to assist the Policy Group in developing recommendations.
A Legislative Delivery Team of the Task Force reviewed possible legislative changes and made recommendations to the Task Force in September and December 2006. The Task Force adopted many of the recommendations and forwarded them to the Policy Group. The Policy Group adopted the following legislative policy recommendations in December 2006 for consideration during the 2007 legislative session:
1. Changes to Task Force Membership and Local Grant Match Requirement (Minnesota Statutes 299C.65).
This language would allow most associations or organizations that have representation on the Task Force more flexibility in appointing members. For example, currently two sheriffs are recommended by the Minnesota Sheriffs Association to the Policy Group to serve on the Task Force. The new language would allow the Sheriffs Association to appoint representatives to the Task Force directly and would require that only one of the appointees must be a sheriff. The four public members would be appointed by the governor for a term of six years. This language would also clarify the language relating to the match requirement for local grants (see CriMNet Grant Program section).
2. Driver’s License Photograph Accessibility (Minnesota Statutes 171.07, Subd. 1a).
This language would expand the accessibility of driver’s license photographs to criminal justice agencies as defined by 299C.46, Subd. 2 for the purpose of investigation and prosecution of crimes, service of process, location of missing persons, investigation and preparation of cases for criminal, juvenile, and traffic court, and supervision of offenders. This language would also expand the accessibility of driver’s license photographs to public defenders as defined by 611.272 for the purpose of preparation of cases for criminal, juvenile and traffic court.
3. Subscription Service (Minnesota Statutes 299C.40).
This language would allow the Department of Public Safety to establish a secure subscription service. A subscription service is a process by which criminal justice agency personnel could obtain ongoing, automatic electronic notice of any contacts an authorized data subject has with any criminal justice agency. The data subject must be an individual who is the subject of an active criminal investigation, criminal charging process, or an open case in criminal court, probation or corrections.
4. Data Subject Access (Minnesota Statutes 13.873).
This language would allow individuals to request law enforcement agencies with access to the Integrated Search Service (a service operated by the Bureau of Criminal Apprehension which allows authorized criminal justice users to search and view data that are stored on one or more databases maintained by criminal justice agencies) to provide a list of government entities that have provided public or private data about that individual through the Integrated Search Service and to describe the type of data that was provided. The BCA would also be required to provide a list of all law enforcement agencies with access to the Integrated Search Service on a public internet site, as well as information on how data subjects may challenge the accuracy or completeness of data.
III. Activities of the Criminal and Juvenile Justice Information Policy Group and Task Force in 2006
Criminal and Juvenile Justice Information Policy Group:
The Criminal and Juvenile Justice Information Policy Group (Policy Group) is authorized under Minnesota Statutes 299C.65 and consists of the following ten members: commissioner of public safety, commissioner of corrections, commissioner of finance, state chief information officer, four members of the Judicial Branch appointed by the chief justice of the Minnesota Supreme Court, and the Criminal and Juvenile Justice Information Task Force (Task Force) chair and first vice-chair. This body has the authority to appoint additional non-voting members. The Policy Group is chaired by the commissioner of public safety and meets quarterly and other times as needed.
The Policy Group exists to provide leadership for the overall strategic and policy direction of the CriMNet Enterprise. The Policy Group is responsible for establishing priorities and high-level performance measures for the Enterprise, approving and monitoring the CriMNet Program budget (and other state agencies/courts as they relate to CriMNet), addressing high-level policy issues, determining Enterprise-wide strategies (including the distribution of grant funds), and advocating for CriMNet Enterprise initiatives.
The Policy Group is also charged with studying and making recommendations to the governor, Supreme Court and the legislature on issues related to criminal justice information integration.
2006 Policy Group in Review
New Executive Director
In January 2006, the Policy Group unanimously approved the recommendation of a hiring panel to appoint Dale Good as the new executive director of CriMNet. A number of Task Force and stakeholder representatives participated on a screening committee of potential candidates. That committee forwarded three candidates to a three-member panel of Policy Group members, who conducted interviews of those candidates. It was the unanimous decision of the three-member panel to recommend Dale Good to the full Policy Group to serve as the next CriMNet executive director.
Revised Policy Group Charter and Governance Structure
As a follow-up to work completed by the Policy Group in 2005 related to governance, the Policy Group met in February 2006 for a day-long retreat to continue discussing the governance model – specifically the roles and relationships between the Policy Group, the Task Force, and the CriMNet executive director. The Policy Group agreed that the executive director acts as its direct agent and is responsible for developing and
implementing strategies to support the CriMNet Enterprise vision for integration. Policy Group members also agreed that the Task Force is critical in advising the executive director and Policy Group on business priorities and Enterprise activities. As a result, the Policy Group revised its governance charter to include more specific direction to both the Task Force and CriMNet executive director (see Appendix B). Following input from the Task Force and CriMNet executive director, the Policy Group Charter was finalized and adopted by the Policy Group in March 2006.
**Minnesota Criminal Justice Integration Framework and Blueprint**
At the February 2006 retreat, the CriMNet executive director presented the Minnesota Criminal Justice Integration Framework and Blueprint (Framework) – a framework developed by the CriMNet Program Office (from the user requirements gathered from multiple sources) to capture all the major initiatives of the CriMNet Enterprise separated into three main categories: policy, enabling and delivery. For each initiative, the Framework outlines the major projects, outcomes, possible performance measures and identifies whether it is a new initiative that requires funding in the FY08/09 biennium.
The Framework is the first step in developing the integration blueprint. As each prioritized initiative is defined, project documentation will expand upon the policies, definitions, standards, and strategies for use by state and local agencies in their efforts to participate in each initiative. The Policy Group considers the detailed blueprint for each initiative critical to that initiative's success.
The Policy Group has agreed that this is a good model for capturing what initiatives are within scope of the CriMNet Program (what is known at this point in time) and what the priorities are. The Policy Group asked that the Task Force review the Framework and make recommendations on priorities to the Policy Group.
The Policy Group met again in July 2006 to continue discussion of the Framework, examining each initiative in greater detail. The Task Force chairs presented the comprehensive process the Task Force used to identify the priorities of the associations/constituent groups represented on the Task Force, along with the Task Force's top six priorities. It was noted that the priorities in the Framework align very closely with the priorities of the Task Force, and that the Framework represents the "end-state vision" as currently envisioned, apart from other identified "gaps" not included in the Framework. The Policy Group discussed these potential gaps and whether some of them are actually the responsibility of the Policy Group/CriMNet Enterprise.
The Framework was adopted by the Policy Group in September 2006, without specific funding tied to new initiatives, as the priorities and vision to move CriMNet forward.
**Fiscal Year 08/09 Investment Options**
As part of the Framework, the Policy Group reviewed potential new investment options for FY08/09, as well as the following two biennia. The Policy Group discussed the
complex dependencies of projects and what progress could be realized if only partial funding was allocated in the 2007 legislative session. The Policy Group also considered which initiatives were priorities as indicated by the Task Force. There was also a policy discussion regarding the state’s responsibility for funding certain projects versus the local responsibility and what is considered local benefit versus a statewide benefit.
**CriMNet Local Grant Strategy**
In early 2006, the Policy Group began to consider different grant strategies to utilize the 2005 and 2006 Congressional Earmarks (totaling just under $1 million). The CriMNet Program Office presented a proposal to offer grants to local agencies for a specific, targeted purpose (such as to connect to a specific statewide service like the Name-Event Index Service), which would arguably provide a greater statewide benefit than providing grants to local entities for individual integration initiatives (the approach for which local grant funds have been allocated in the past, primarily in the five largest counties in Minnesota). The Policy Group discussed the policy options and solicited input from the Task Force.
The Task Force and the Program Office agreed on a targeted approach for the grant funds to connect local agencies to the Comprehensive Incident-Based Reporting System (CIBRS); however, the recommendation to the Policy Group was to reduce the local match requirement. After much discussion related to reducing the local match requirement and whether the statute allowed for the match requirement to be reduced, an alternative approach was proposed to contract directly with local agency vendors to achieve the same purpose. The Policy Group agreed that this was the best approach at this time, but indicated that they would reconsider local grants in the future. The Policy Group approved that the 2005 Congressional Earmark be used to contract with vendors of local agencies to connect to CIBRS (the distribution of the 2006 Congressional Earmark will be determined by the Policy Group early in 2007).
Criminal and Juvenile Justice Information Task Force:
The Criminal and Juvenile Justice Information Task Force (Task Force) is authorized under Minnesota Statutes 299C.65 and consists of the following 35 members:
- two sheriffs recommended by the Minnesota Sheriffs Association;
- two police chiefs recommended by the Minnesota Chiefs of Police Association;
- two county attorneys recommended by the Minnesota County Attorney Association;
- two city attorneys recommended by the Minnesota League of Cities;
- two public defenders appointed by the Board of Public Defense;
- two district judges appointed by the Judicial Council, one of whom is currently assigned to the juvenile court;
- two community corrections administrators recommended by the Minnesota Association of Counties, one of whom represents a Community Corrections Act County;
- two probation officers;
- four public members, one of whom has been a victim of crime, and two who are representatives of the private business community who have expertise in integrated information systems;
- two court administrators;
- one member of the House of Representatives appointed by the speaker of the house;
- one member of the Senate appointed by the majority leader;
- the attorney general or a designee;
- two individuals recommended by the Minnesota League of Cities, one of whom works or resides in greater Minnesota and one of whom works or resides in the seven-county metropolitan area;
- two individuals recommended by the Minnesota Association of Counties, one of whom works or resides in greater Minnesota and one of whom works or resides in the seven-county metropolitan area;
- the director of the Sentencing Guidelines Commission;
- one member appointed by the state chief information officer;
- one member appointed by the commissioner of public safety;
- one member appointed by the commissioner of corrections;
- one member appointed by the commissioner of administration; and
- one member appointed by the chief justice of the Minnesota Supreme Court.
Per Minnesota Statutes 299C.65, the Task Force is appointed by the Policy Group to assist the Policy Group in its duties. The statute also directs the Task Force to monitor, review, and report to the Policy Group on CriMNet-related projects, in addition to providing oversight of ongoing operations, as directed by the Policy Group. The Task Force is also charged with assisting the Policy Group in writing an annual report to the
governor, Supreme Court, and legislature by January 15 each year. The Task Force also has a role in reviewing funding requests for criminal justice information system grants and making recommendations to the Policy Group.
2006 Task Force in Review
Revised Task Force Charter and Creation of By-Laws
Based on the Policy Group’s specific direction to the Task Force as part of the Policy Group’s revised charter, the Task Force agreed to revise its own charter and created a delivery team in February 2006 to clarify the roles and responsibilities of the Task Force. In response to the statutory requirements of the Task Force, the Policy Group directed the Task Force to do the following:
1. Advise the Executive Director and Policy Group on Enterprise activities;
2. Advise the Executive Director on business priorities;
3. Advocate constituent interests and communicate Enterprise decisions back to constituents;
4. Review grant strategies developed by the Executive Director and suggest alternatives.
The Charter Delivery Team worked over six months and recommended a revised Task Force charter which focused on the composition of the Task Force as well as the primary responsibilities of the Task Force as defined in statute and in the directives of the Policy Group (see Appendix C). The delivery team also created a separate by-laws document for regulating and managing the internal affairs of the Task Force (see Appendix D). New items considered were an attendance and proxy policy and the clarification of the role of delivery teams. The Task Force adopted the new charter and by-laws in August 2006.
Priorities Defined
A key responsibility of the Task Force is to advise the CriMNet executive director and Policy Group on Enterprise activities and business priorities. These priorities were a major contributing factor in the Framework discussion by both the Task Force and Policy Group and had a significant impact on what should be included in the Framework as well as what the investment options should be for the 2008-2009 fiscal biennium.
In developing a process for determining these business priorities, the Task Force chairs solicited the CriMNet Program Office’s assistance in creating an online survey that each Task Force member was tasked with sending out to the agency, organization, interest group or association (association) he or she represents (priority/issue categories for the survey were determined by grouping certain projects that have common high-level outcomes, but respondents were encouraged to list additional priorities not noted). In total, 139 people completed the survey. The responses were distributed to Task Force
members in the form of an executive summary as well as the individual responses by association at the June 2006 Task Force meeting.
At that time, Task Force members were asked to confirm with their associations what their business priorities were (as the Policy Group looks to the 2008-2009 fiscal biennium and potential new initiatives at the criminal justice enterprise level) and to be prepared to discuss their top priorities at the July 2006 Task Force meeting.
At the July Task Force meeting, each association represented had the opportunity to indicate its top three priorities as well as indicate further priorities if it chose to do so. The ability to view records from a single event in a consolidated record (part of the Name-Event Index Service (NEIS) project – a component of the Identification Roadmap) received the highest priority ranking and the overall, high-level priorities identified were very consistent with what has been identified by the CriMNet Program Office.
There was much debate about the priorities as well as the process for defining the priorities by the Task Force. A number of issues were raised relating to how priorities should be “weighted” and how they should be presented to the Policy Group. In the end, the Task Force chairs analyzed the votes on priorities and determined the top six Task Force priorities below, which they presented to the Policy Group in July 2006.
1. Ability to view records from a single event in a consolidated record – being able to see when one individual has several interactions with different criminal justice agencies without having to click through a list of records to determine that information.
2. Ability to customize information received when querying state systems to view only information you need (for a background check or criminal investigation) and to view that information from your own records management system rather than a special application.
3. Creating technical standards for electronic exchanges of information so that agencies building new systems or replacing systems know how to configure the technology and work with vendors to meet long-term needs.
4. Ability to access all BCA systems with one username and password.
5. Greater availability of local grants to connect to statewide systems or update local systems.
6. Working with local agencies to change business practices to increase data accuracy in all statewide systems.
**Delivery Teams**
As the Policy Group and Task Force discussed the role of the Task Force in 2006, both groups agreed that the work of delivery teams was a major strength and asset of the Task Force. The role of delivery teams was clarified in the new Task Force By-laws. The by-laws state that the Task Force Executive Board solicits participation in and appoints members of delivery teams to ensure appropriate representation, the Task Force approves the creation of delivery teams, the Task Force maintains ultimate authority over delivery
teams, and participation by non-members is encouraged but that a Task Force member must serve as the chair. The Task Force utilized the efforts of many Task Force members and other stakeholders for delivery teams and committees in 2006 including the following:
- **Grant Delivery Team** (see grant program section)
- **Charter Delivery Team** (see update above)
- **Background Check/Expungement Delivery Team**. This team presented recommendations in a report to the Task Force in December 2006. The Task Force forwarded the report to the Policy Group without recommendation. The Policy Group continues to consider the recommendations.
- **Legislative Delivery Team** (see legislative recommendations section)
- **Fingerprint/Suspense Delivery Team**. This delivery team is working toward a recommendation to improve the business processes related to fingerprinting and when records are not linked to a fingerprint. The team is continuing to analyze successful business practices before finalizing recommendations.
- **Legal Advisory Board**. This group promotes and educates practitioners about the Minnesota Criminal Justice Statute Service. This group also recommends best practices and standards regarding criminal statutes and consistent formatting for charging documents, including citations and complaints.
- **Minnesota Offense Codes (MOC) Committee**. This committee provides input and requirements to the CriMNet Program Office as it evaluates the future use of MOC codes or a replacement solution.
- **Court Disposition Summary Delivery Team**. This team is working on a solution to provide a more efficient and consolidated way to retrieve disposition information for bail and sentencing documents. The team hopes to have a proposed recommendation by early 2007.
**2006 Stakeholder Issues Submitted**
The Task Force also reviews new issues submitted by criminal justice stakeholders and recommends a course of action whether that be to create a delivery team, recommend to the Policy Group that the issue should be placed within the current priorities, or recommend that the issue is not a priority at this time. The following four new issues were presented to the Task Force in 2006:
- **Citation Process**. It was suggested that the current citation process is inefficient and labor intensive. The issue as submitted requested that a single, statewide solution for entering citations be developed. This issue is being considered as part of a broader electronic charging (eCharging) project.
- **Fingerprinting by Probation Agencies**. This issue was brought forward because of the legislative implication that probation agencies are the backstop for collecting fingerprints and how this relates to records going into “suspense” because they are not able to be linked to a fingerprint. The Task Force agreed to create a delivery team to address this issue as noted in the above section.
- **MNCIS to MCAPS Exchange**. The Minnesota County Attorney Prosecution System (MCAPS) is one of the main county attorney prosecution systems in Minnesota. It was proposed that CriMNet develop a link between the counties using MCAPS and the Minnesota Court Information System (MNCIS) so that a statewide solution could be utilized rather than each county having to develop its own link. The Program Office agreed to consider developing this link as part of the broader Name Event Index Service (NEIS) and eCharging projects.
- **Predatory Offender Registration Follow-Up**. The issue noted that, even after the CriMNet Program Office analyzed missing registrations and offered recommendations in 2004, gaps remain in predatory offender registrations and some offenders are slipping through the cracks. This issue was passed to the BCA for consideration of the recommendations. The BCA has integrated court predatory offender information with the Predatory Offender Registry (POR) as of December 2006.
- **Court Disposition Summaries**. As stated above, this issue came before the Task Force because of the conversion from the former Trial Court Information System (TCIS) to MNCIS and the resulting inefficiencies in capturing disposition summary data for bail and sentencing documents. The Task Force created a delivery team to address this issue.
Program Activities Updates
The CriMNet Program Office has the responsibility, per the Policy Group, to provide regular reporting on the activities of the Program Office to the Task Force. The Task Force typically reviews monthly written project status and financial reports and has the opportunity to ask questions and offer comments; however, due to changes in the project reporting format, financial reports were not available for August – December 2006. The Task Force also regularly hears presentations/updates on Program Office projects and provides input on those projects. A number of projects were discussed by the Task Force in 2006, such as: the Identification Roadmap, the Integrated Search Service (ISS), Data Quality (the Privacy Impact Assessment), the Comprehensive Incident-Based Reporting System (CIBRS), Suspense Prevention, Integration Planning (the "Cookbook"), the Integration Repository (standards website), and the Integrated Criminal History System (ICHS).
Executive Board Elections
At the Task Force biennial business meeting in September 2006, the Task Force held elections for the Executive Board, which consists of the chair, first vice chair, and second vice chair. After six years of service, the current chair, Chris Volkers (Washington County Court Administrator), chose not to run for another term. Deb Kerschner (Department of Corrections) was elected to serve as the new chair of the Task Force. Steve Holmgren (Chief Public Defender, First Judicial District) was re-elected to serve as the first vice chair. The current second vice chair, J Hancuch (Isanti County Probation), also chose not to run for another term, and the Task Force elected Ray Schmitz (Olmsted County Attorney) to serve as the second vice chair. The terms of the newly elected officers began at the close of the business meeting on September 8, 2006, and they will serve two-year terms. The chair and first vice chair also serve as voting members of the Policy Group.
IV. CriMNet Grant Program
Since 2002, CriMNet has awarded approximately $7 million in grant funds to local jurisdictions for integration planning and implementation projects. The majority of those funds were awarded to the five largest counties (Hennepin, Ramsey, Dakota, Anoka, and St. Louis) for their local integration efforts. While this has furthered local integration and has produced some very good work, the CriMNet Program Office proposed an alternative strategy for distributing the next round of local grant funds to the Policy Group. In February 2006, the CriMNet Program Office proposed using available federal funds (approximately $1 million in Congressional Earmark funds for 2005 and 2006) for grants to locals that targeted a specific statewide purpose.
The Policy Group asked the Task Force to review the possible high-level strategies for use of the local grant funds, including the proposal brought forward by the CriMNet Program Office. The Task Force appointed the Grant Delivery Team to consider the different strategies and to make a recommendation. The Grant Delivery Team discussed three separate purposes for the grant funds:
1. Award additional funds to the counties/entities who had received grants in the past to continue their implementation projects.
2. Award funds to the next tier of counties/entities (the medium to smaller jurisdictions) who are just beginning their integration planning.
3. Award funds to agencies for a specific purpose with statewide benefit, such as connecting to the Name Event Index Service (NEIS) or the Comprehensive Incident-Based Reporting System (CIBRS).
The delivery team made a recommendation to the full Task Force that the 2005 Congressional Earmark ($493,000) should be dedicated to connecting local agencies to CIBRS and implementing a single standard for the exchange of information (the delivery team agreed to wait on making a determination on the purpose of the 2006 Congressional Earmark until after the 2005 Congressional Earmark had been awarded). The CriMNet Program Office concurred with the delivery team's recommendation. The delivery team also recommended that the local match requirement be 20 percent instead of 50 percent, as it had been in previous grant offerings. The Task Force approved that the delivery team's recommendation be forwarded to the Policy Group.
In May 2006, the Policy Group discussed the high-level strategic direction recommended by the Task Force and agreed that this would provide the most statewide benefit at this time. The Policy Group also discussed reducing the local match requirement but had concerns that Minnesota Statutes 299C.65 may not allow for the match requirement to be reduced in separate grant offerings. At that time, an alternative proposal was presented
which would allow the funding to be used for the same purpose (to connect locals to CIBRS), but would allow the CriMNet Program Office to contract directly with vendors of local agencies, thus changing the vehicle from grants to contracts and eliminating the need for a local match requirement. There was some concern from Task Force members that this decision not permanently eliminate grants to locals, but overall, local representatives on the Task Force were pleased that the alternative contract approach would allow for more participation by smaller jurisdictions.
In July 2006, the Policy Group approved the $493,000 Congressional Earmark to be dedicated to contract with agency vendors for the purpose of connecting local agencies to CIBRS and implementing a single standard for the exchange of incident information, which would also collect the incident-based information needed to support the Name-Event Index Service (NEIS) and Electronic Charging (eCharging) exchanges in the future.
It is anticipated that the Request for Proposals (RFP) will be published by the CriMNet Program Office by the end of 2006. As part of the solicitation for contracts with vendors, joint powers agreements with local agencies will be required to ensure that, once the vendors create the connections, local agencies will begin transmitting their records to CIBRS. The solicitation will also ensure that development costs are paid only once, so that all Minnesota users of a vendor's application benefit from the one-time development cost. Preference will be based on a vendor's/agency's ability to develop the extended exchange for NEIS and eCharging, the number of records that will be transmitted to the state, and the cost. These contracts should be awarded in early 2007.
In addition, the Task Force approved a motion at the October 2006 Task Force meeting that the Task Force chairs advocate that state funding be allocated for local grants in the next biennium.
V. 2006 CriMNet Program Office Projects
With the adoption of the Framework by the Policy Group, project reporting has been revised to align with the Framework versus the high-level categories from the CriMNet Scope Statement as in previous years. This allows for more specific project reporting with progress that is easier to track rather than reporting on broad integration principles such as gathering user requirements. The following section of the report covers current projects being managed by the CriMNet Program Office.
A number of projects or initiatives reported on in the 2005 annual report are now included as part of other projects or are considered ongoing support activities of the CriMNet Program Office such as: user requirements, business standards, statewide implementation plan, identification protocol, Integrated Search Services (ISS), middleware services, workflow/business processes, and service agreements.
**Integration Planning**
Two major components of the Integration Planning Project (formerly the Statewide Implementation Plan project) are the Integration Cookbook and Direct Planning Assistance.
1. **Integration Cookbook**
The CriMNet Program is finalizing development of a how-to guide to assist agencies, particularly small and medium-sized agencies, with their integration activities. Many of these small and medium-sized agencies do not have the resources or the know-how to even begin integration planning. The guide, called the *Integration Cookbook* (*Cookbook*), includes easy-to-understand information about how to plan for integration, including best practices and experiences from counties who have participated in integration planning through the CriMNet grant program. There will be examples of other agency integration work, guidelines to follow, and contact information for agencies that have already gone through the process.
The *Cookbook* will be available beginning in 2007 in a number of formats, including via the Web, for agencies to use free of charge. Tutorials and trainings for the *Cookbook* will also be available. The *Cookbook* will also be used throughout 2007 in any direct planning assistance the CriMNet Program Office provides to agencies.
**Progress and milestones:**
- Complete case study interviews - August 2006
- Finalize format and content plan - September 2006
- Complete initial draft - December 2006
- Review *Cookbook* internally and externally; revise content - December 2006 - February 2007
- Begin distribution - February 2007
- Continue distribution and training - February - December 2007
2. **Direct Planning Assistance**
Direct Planning Assistance provides Program Office staff to local jurisdictions and agencies to assist in their strategic integration planning efforts. Again, many of these local jurisdictions do not have the resources necessary to dedicate to integration planning. In two cases, Washington County and currently Nobles County, CriMNet Program staff facilitate sessions with a wide spectrum of criminal justice stakeholders in order to document existing business processes and current technologies, as well as future directions. The resulting integration plan is an important tool in identifying potential areas of future integration, in prioritizing among competing needs across a diversity of criminal justice stakeholders, and in creating a venue for agencies to leverage their collective expertise, integration experience, and decision-making for the benefit of collaboration. Integration planning efforts in Washington County concluded in February 2006 with comprehensive documentation of existing operations and technical systems (Washington County will continue with the visioning and implementation of the plan). Planning efforts in Nobles County are anticipated to continue into 2007.
CriMNet staff provide key facilitation and analysis functions, as directed by the lead agency within a jurisdiction, and draft project documents for review, including recommendations for realizing stated integration objectives and goals. Direct Planning Assistance serves as a resource for agencies and jurisdictions in creating a specialized roadmap that identifies current processes and systems related to criminal justice operations, potential areas of improvement, and the steps necessary to achieve future integration.
**Progress and milestones:**
- Complete a draft final integration plan for Washington County - February 2006
- Document and analyze “as is” processes and systems in Nobles County - First Quarter 2007
- Document and analyze “to be” processes and systems in Nobles County - Second Quarter 2007
- Draft final integration plan for Nobles County - Second Quarter 2007
**Warrants Business Process Improvement**
In early January 2005, the CriMNet Local User Group identified criminal warrant processes as a priority candidate for business process review and improvement, given the lack of a statewide standard for gathering and storing warrant information. Juvenile and adult warrants today largely rely upon the management of hardcopies, and their workflows are supported by redundant manual and electronic processes. The ability of agencies to pass critical warrant information impacts local and state law enforcement, prosecution, corrections (supervision), and court administration agencies. The purpose of the Warrants Business Process Improvement Project is to evaluate existing warrants processes in order to provide recommendations for streamlining and otherwise improving
these processes by avoiding re-keying of data, reducing the associated number of data errors, and increasing the accuracy and timeliness of warrant information to users statewide, including the timely removal of warrants in the event of service or cancellation. The warrants project will document the issuance, service or execution, modification, cancellation, and reporting of criminal juvenile and adult warrants in their entirety as “end-to-end” processes.
**Progress and Milestones:**
- Establish a scope of work for the warrants project (building upon previous information gathered) – August 2006
- Compile comparative, time series warrants statistics from courts and BCA systems – December 2006
- Complete documentation of current business processes and practices regarding both juvenile and adult warrants for Dakota, Hennepin, Ramsey, and St. Louis counties – December 2006
- Complete findings, conclusions, and preliminary recommendations for improvement – December 2006
- Review of initial draft report by stakeholders – January - February 2007
- Finalize report and present findings and recommendations – April 2007
**Minnesota Criminal Justice Statute Service (Statute Service)**
The Minnesota Criminal Justice Statute Service (Statute Service) is a Web service (the ability to access functionality of computer programs/applications through the Internet without installing specific programs/applications on a local computer) that provides a central database of Minnesota criminal justice statutes, accessible to criminal justice and non-criminal justice professionals statewide. This service will give prosecutors the most current information on charging and penalty statutes, making the charging process more accurate, which has a direct effect on criminal history data. The service can be used to search criminal justice statutes and to connect directly to and populate users' in-house systems.
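The idea of a central statute lookup populating an in-house charging record can be illustrated with a brief sketch. This is hypothetical: the Statute Service's actual interface, data shape, and statute entries are not documented here, so the function names and fields below are assumptions, with a small in-memory table standing in for the central database.

```python
# Illustrative sketch only: a stand-in for a central statute service that
# returns current statute information, which an agency's in-house system
# uses to populate a charge record. Names and data are hypothetical.

STATUTE_SERVICE = {  # stand-in for the central statute database
    "609.52": {"title": "Theft", "level": "Felony"},
    "169A.20": {"title": "Driving While Impaired", "level": "Gross Misdemeanor"},
}

def lookup_statute(number):
    """Mimic a web-service call returning current statute information."""
    return STATUTE_SERVICE.get(number)

def populate_charge(case_id, statute_number):
    """Build a charging record using the service's authoritative data."""
    info = lookup_statute(statute_number)
    if info is None:
        raise ValueError(f"unknown statute {statute_number}")
    return {"case": case_id, "statute": statute_number, **info}

charge = populate_charge("27-CR-06-001", "609.52")
```

Because the charge record is populated from the central service rather than re-keyed locally, the charging data stays consistent with the most current statute and penalty information.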
An advisory board, made up of a number of criminal justice stakeholders, works with the CriMNet Program Office to promote and educate practitioners about the Statute Service. This group also recommends best practices and standards regarding criminal statutes and consistent formatting for charging documents, including citations and complaints.
**Progress and milestones:**
- Hired legal analyst for Statute Service – January 2006
- Created Statute Service Advisory Board (Advisory Board) – January 2006
- Finalize Advisory Board Charter – September 2006
- Identify future enhancements to the Statute Service (guided by the Advisory Board) – May 2006 - Ongoing
- Develop Web service for connecting to and populating criminal justice agencies' in-house systems - August 2006
- Final release of the Statute Service (as of this time) - December 2006
- Discontinue the Microsoft Access version of the Statute Table - February 2007
**MN Criminal Justice Information Integration Services (MNCJIIS)**
The key components of information integration are infrastructure, information sharing, information exchange and Service Oriented Architecture (SOA). Without these core components, the individual statewide services would not be consistent or cohesive but would continue to be “silos” or disconnected, standalone systems. These components provide the foundation for future statewide integration efforts.
Infrastructure is the hardware, software and network services that enable communications. The services that search disparate repositories of information and consolidate the results are information sharing services. Information exchange services send, transport or receive information that is used to populate repositories throughout the Minnesota Justice Enterprise. Service Oriented Architecture defines how the infrastructure, exchange and sharing services are packaged and interact to form a comprehensive and cohesive set of criminal justice information integration services for the Minnesota Justice Enterprise.
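The division of labor described above can be sketched in miniature. This is not the MNCJIIS design; the class names and record shapes are assumptions, used only to show exchange services populating repositories while a sharing service searches disparate repositories and consolidates the results.

```python
# Illustrative sketch only: exchange services populate repositories;
# a sharing (search) service queries disparate repositories and
# consolidates the results, as in the component description above.

class Repository:
    """A disparate information store maintained by one agency."""
    def __init__(self, name):
        self.name = name
        self.records = []

class ExchangeService:
    """Sends/receives information used to populate a repository."""
    def __init__(self, repo):
        self.repo = repo
    def receive(self, record):
        self.repo.records.append(record)

class SharingService:
    """Searches across repositories and consolidates matching results."""
    def __init__(self, repos):
        self.repos = repos
    def search(self, name):
        return [
            {"source": repo.name, **rec}
            for repo in self.repos
            for rec in repo.records
            if rec.get("name") == name
        ]

courts = Repository("courts")
bca = Repository("bca")
ExchangeService(courts).receive({"name": "J. Doe", "event": "disposition"})
ExchangeService(bca).receive({"name": "J. Doe", "event": "arrest"})
results = SharingService([courts, bca]).search("J. Doe")
```

The point of the service-oriented packaging is that the sharing service needs no knowledge of how each repository was populated; each component can evolve independently behind its interface.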
This project includes multiple smaller projects, including a pilot in Dakota County, and also the technology refresh of the Integrated Search Service – formerly known as the “backbone”; however, this project is much broader than the Integrated Search application in that it provides the collection and distribution services for all major Enterprise initiatives.
**Progress and milestones:**
- Evaluate existing infrastructure and services - December 2005 - May 2006
- Select technology refresh product - June 2006
- Implement technology - Ongoing
- Complete Iteration 3 of Dakota County pilot (Iterations 1 and 2 complete) - December 2006
- Reengineer search services - December 2006 - December 2007
- Develop base infrastructure for MNCJIIS - March 2007
- Develop integration MNCJIIS services - March 2007 - December 2007
**Electronic Charging (eCharging)**
The Electronic Charging Service (eCharging) will allow for routing, temporary retention, filing, and printing on demand of all charging documents (including electronic signatures) for all felony, gross misdemeanor and statutory misdemeanor cases. Currently, there is no centralized process that allows law enforcement and prosecution offices (at both the county and city level) to electronically prepare and transmit charging documents to the courts. The eCharging service will produce significant process efficiencies, such as streamlined management of the DWI administrative process and elimination of the manual, paper-based charging process, which will allow for more officer time on the streets. It will also increase data accuracy and reduce delays within the criminal justice system. This effort builds on the work already begun by the courts on an electronic charging process.
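The routing idea can be sketched as a document moving through an ordered set of states. The states and fields below are illustrative assumptions, not the eCharging specification:

```python
# Sketch of charging-document routing states; the states and fields
# are illustrative, not the eCharging specification.

ROUTE = ["prepared", "signed", "transmitted", "filed", "printed on demand"]

def advance(document):
    """Move a charging document to the next step in the route."""
    i = ROUTE.index(document["status"])
    if i + 1 < len(ROUTE):
        document["status"] = ROUTE[i + 1]
    return document

complaint = {"case": "hypothetical-001", "type": "complaint",
             "status": "prepared", "signature": "electronic"}
advance(complaint)   # signed electronically
advance(complaint)   # transmitted to the court
```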
**Progress and Milestones**
- Develop preliminary business requirements - December 2005
- Secure support from critical stakeholders - February 2006
- Develop criteria for pilot participation - March 2006
- Solicit pilot participation agreements from at least three counties (tentatively Carver, Kandiyohi and Olmsted) - May 2006
- Publish Request For Proposals (RFP) for eCharging design - June 2006
- Enter into a contract with vendor - December 2006
- Complete Phase I of project, detailed business requirements and design - June 2007
- Complete Phase II of project, pilot testing - estimated December 2007 *(dependent on available funding)*
- Complete Phase III of project, statewide rollout - estimated December 2008 *(dependent on available funding)*
**Background Checks/Expungement Study**
The CriMNet Program noted the increased public policy interest in the background check and expungement processes in Minnesota in recent years. This is due to the complexity of these processes as well as some perceived disparities for individuals. Because background checks and expungements are interrelated and within the scope of duties of the Criminal and Juvenile Justice Information Policy Group (Policy Group) per Minnesota Statutes 299C.65, the Program Office led the effort to research and analyze these processes. The Program Office requested the services of the Management Analysis and Development Division within the Department of Administration to conduct the initial research and analysis through review of current statutes, court cases, national studies, and personal interviews with a number of people representing multiple interests in these issues.
With the initial research as a starting point, an augmented Criminal and Juvenile Justice Information Task Force (Task Force) delivery team (with broad-based stakeholder involvement) met over the past eight months to discuss and debate potential solutions for clarifying and reforming the background check and expungement processes. The outcome is a comprehensive report, including policy options with an analysis of possible consequences associated with each option, for policymakers to consider in making changes to these processes.
**Progress and milestones:**
- Conduct initial research, survey and interviews - February - May 2006
- Create Background Check and Expungement Delivery Team - March 2006
- Discuss and develop possible options (by the delivery team) - March - October 2006
- Present findings to Task Force and Policy Group - December 2006
- Policy Group to consider recommendations and next steps - January 2007
**Minnesota Offense Codes (MOC) Analysis**
Minnesota Offense Codes (MOC) are a listing of codes used to classify and systematically describe the details of a specific offense. The codes are used primarily for compiling statistical information, such as information about the offenders and/or victims of certain types of crimes or about the frequency of certain crimes. The MOC system is exceedingly complicated, is not applied consistently among criminal justice professionals, does not meet many of the business needs of data consumers, and places unnecessary burdens on those who apply the codes to criminal offenses. The purpose of this project is to analyze current practices, identify the business needs the MOC system is intended to meet, and recommend any necessary changes.
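The classify-then-count use of offense codes can be illustrated with a toy table. The codes and categories below are invented for illustration, not actual Minnesota Offense Codes:

```python
# Purely illustrative; these codes and categories are invented, not
# actual Minnesota Offense Codes.
from collections import Counter

MOC_TABLE = {
    ("theft", "motor vehicle"): "T1234",
    ("theft", "shoplifting"):   "T5678",
    ("assault", "domestic"):    "A1111",
}

def classify(offense, detail):
    """Look up the statistical code for an offense/detail pair."""
    return MOC_TABLE.get((offense, detail), "UNKNOWN")

# Statistics are then compiled by counting code frequencies:
reported = [("theft", "shoplifting"), ("theft", "shoplifting"),
            ("assault", "domestic")]
frequency = Counter(classify(o, d) for o, d in reported)
```

Inconsistent application of the codes (different users choosing different detail categories for the same offense) directly distorts the frequency counts, which is the core problem the analysis addresses.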
**Progress and Milestones**
- Form MOC workgroup - November 2005
- Identify broad business needs for which MOCs are used - November 2005
- Identify specific business needs for individual sectors of the criminal justice system for statistical information about crimes - May 2006
- Develop recommendations - December 2006
**Name-Event Index Service (NEIS)**
Accurate identification is a cornerstone principle for criminal justice information sharing. Minnesota has no statewide process to link names and events in the criminal justice system. The Name-Event Index Service (NEIS) – a component of the larger Identification Roadmap initiative – is a service which will establish a definitive one-to-one relationship between an individual and the records stored and shared on that individual. NEIS will answer three fundamental questions:
1. Who are they?
2. What have they done?
3. Where are they in the criminal justice system?
NEIS will relate to the records it links much like a library card catalog relates to books. Eventually, all critical records identified will be linked by a biometric identifier (such as a fingerprint). Biometrically supported identification enables positive linking of individuals to names and events in multiple jurisdictions. NEIS will provide criminal justice professionals an accurate and comprehensive view of a person’s criminal activity that is currently not available without significant, time-consuming research. While NEIS will allow criminal justice professionals to hold offenders accountable, it will also prevent innocent individuals from being wrongfully accused and assist in the fight against criminal identity theft.
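The card-catalog idea can be sketched as a simple index keyed on a biometric identifier. The class and field names below are illustrative assumptions, not the NEIS design:

```python
# Hypothetical sketch of a name-event index keyed on a biometric
# identifier; class and field names are illustrative, not the NEIS schema.

class NameEventIndex:
    def __init__(self):
        # biometric id -> {"names": set of linked names, "events": list}
        self._index = {}

    def enroll(self, bio_id, name):
        entry = self._index.setdefault(bio_id, {"names": set(), "events": []})
        entry["names"].add(name)

    def record_event(self, bio_id, event):
        self._index[bio_id]["events"].append(event)

    # The three fundamental questions:
    def who(self, bio_id):          # Who are they?
        return self._index[bio_id]["names"]

    def what(self, bio_id):         # What have they done?
        return self._index[bio_id]["events"]

    def where(self, bio_id):        # Where are they in the system?
        events = self._index[bio_id]["events"]
        return events[-1] if events else None

index = NameEventIndex()
index.enroll("FP-001", "John Doe")
index.enroll("FP-001", "J. Doe")     # alias linked to the same print
index.record_event("FP-001", "arrest (Dakota County)")
index.record_event("FP-001", "court disposition")
```

Because the index key is the biometric identifier rather than a name, aliases in different jurisdictions resolve to one individual, and an innocent person whose name was misused resolves to a different key entirely.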
**Progress and milestones:**
- Develop statement of work based on the ID Roadmap - July 2006
- Execute contract with vendor for the discovery phase - December 2006
- Complete discovery phase - May 2007
- Implement pilot project - October 2007 *(dependent on available funding)*
- Deploy full functionality to pilot stakeholders - December 2007 *(dependent on available funding)*
- Begin Statewide Rollout - July 2008 *(dependent on available funding)*
**Suspense Prevention**
When a valid court disposition cannot be matched to an arrest record with a fingerprint, the record goes into “suspense”. This can happen for many reasons, such as processing problems, data linking errors, or fingerprints never being taken. Suspense creates gaps in criminal history records and consumes resources to fix related problems. The suspense problem is two-fold – keeping records from going into suspense (the “flow”) and clearing up those records already in suspense (the “tub”).
The purpose of the Suspense Prevention project is to: 1) identify the root causes of the suspense problem; and 2) recommend technical, legal, or business practice changes that will address the root causes of suspense. The BCA Criminal Justice Information Systems (CJIS) section continues to work on the records that are currently in suspense.
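The matching step that produces suspense can be illustrated with a toy routine. The record shapes and linking keys here are assumptions for illustration only; dispositions that cannot be linked to a fingerprint-supported arrest record fall into suspense:

```python
# Simplified matching sketch; record shapes and linking keys are
# hypothetical, not the BCA's actual data model.

def match_dispositions(dispositions, arrest_records):
    """Try to link each court disposition to a fingerprint-supported
    arrest record; unmatched dispositions go into 'suspense'."""
    arrests_by_key = {a["case_id"]: a
                      for a in arrest_records if a.get("fingerprint")}
    matched, suspense = [], []
    for d in dispositions:
        arrest = arrests_by_key.get(d["case_id"])
        if arrest:
            matched.append((d, arrest))
        else:
            suspense.append(d)   # e.g. prints never taken, or key errors
    return matched, suspense

arrests = [
    {"case_id": "A1", "fingerprint": "FP-001"},
    {"case_id": "A2", "fingerprint": None},    # prints not taken
]
dispositions = [{"case_id": "A1"}, {"case_id": "A2"}, {"case_id": "A3"}]

matched, suspense = match_dispositions(dispositions, arrests)
```

In this sketch, fixing the "flow" means ensuring prints are taken and keys link correctly before matching; fixing the "tub" means manually resolving the records already in the suspense list.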
**Progress and Milestones**
- Develop scope statement - March 2006
- Understand and quantify suspense definitions and causes as determined from BCA computer systems - May 2006
- Study causes of suspense rooted in local business practices - November 2006
- Develop recommendations - December 2006
- Implement comparison suspense statistical report (will allow individual county suspense numbers to be compared with other counties) - February 2007
VI. Ongoing CriMNet Program Office Support Activities
The following projects are those ongoing activities that the CriMNet Program Office is responsible for as part of the foundational work for criminal justice information integration – these activities are also part of the Framework. There are also other internal support services such as management, grants/contracts, legislative, and office support that the CriMNet Program Office provides.
**Technical/Business Standards**
In order to improve the efficiency and effectiveness of information sharing, the CriMNet Program has been charged with coordinating, championing, and maintaining technical standards. These standards define what format data is exchanged from system-to-system based on business standards, including data practice statutory requirements. The CriMNet Program develops security and connectivity standards and defines system architecture for the integration and sharing of information. The CriMNet Program also develops standard statewide data dictionary definitions and standard message formats that define event content, data standards, and definitions based on the recognized business needs of criminal justice stakeholders. These standards comply with federal standards where applicable.
**Progress and milestones:**
- Create recommended standards from the security blueprint architecture report - August 2005
- Create technical standards development process - June 2006
- Create a pilot process for vetting standards by stakeholders and vendors - June 2006
- Create a Web site for the publication and vetting of business, architectural, and technical standards (https://cjir.crimnet.state.mn.us/cjir/default.aspx) - June 2006
- Create a policy for approving standards (based on the outcomes of the pilot) – June 2007
- Create technical data reference model – Ongoing
- Define standards for system message formatting - Ongoing
- Create architecture and infrastructure standards – Ongoing
- Create, publish and maintain the Minnesota Criminal Justice Data Dictionary - January 2007 – Ongoing
- Continue vetting and approving standards - Ongoing
**Liaison Program/Assistance to Criminal Justice Agencies**
The Liaison Program is a concentrated effort by the CriMNet Program to provide strong communication and connections between and among the CriMNet Program Office, different stakeholder groups, and criminal justice agencies within Minnesota.
The CriMNet Program Office arranges meetings across the state that local and county law enforcement, county/city attorneys, public defenders, court personnel, and corrections/probation agencies are all invited to attend. The purpose of these meetings is twofold: first, CriMNet representatives present information about the CriMNet Program and provide updates on criminal justice projects being developed at the state level through CriMNet/BCA; second, the representatives solicit feedback from agency participants to capture their specific requirements and ensure that CriMNet considers their differing needs.
CriMNet Program liaisons also participate in focused stakeholder conferences and give presentations on projects of interest whenever possible. Types of conferences include: League of Minnesota Cities, Association of Minnesota Counties, Minnesota Sheriffs Association, Minnesota Chiefs of Police Association, Minnesota District Public Defenders, Minnesota Jailors Conference, Minnesota Association of County Probation Officers, Minnesota Professional Law Enforcement Assistant’s Association, Minnesota Association of Court Management, among others.
As a complement to its liaison efforts, the CriMNet Program Office also provides general assistance to criminal justice agencies and stakeholders on an as-needed basis. As a program committed to facilitating collaboration and integration across agencies within the criminal justice community, providing business and technical support as requested is a critical component of CriMNet’s work. Assistance may take many forms, such as answering specific questions regarding business processes, use of technical systems, or strategic directions; forwarding questions to other criminal justice contacts who may serve as better references; troubleshooting access to specific systems; or presenting the range of information options available for daily decision-making. Overall, the philosophy underlying this assistance is a firm commitment to addressing every question, concern, comment, and critique – regardless of its direct relationship to CriMNet projects or initiatives – in order to be responsive to stakeholders statewide.
**Progress and Milestones**
- Crow Wing County, Integrated Search Services training/general update - January 2006
- Rochester, Chiefs, Region 10 meeting - January 2006
- Fillmore County, general update - March 2006
- Morrison County, general update - April 2006
- Minnesota Chiefs of Police Association, annual conference - April 2006
- Winona County, general update - April 2006
- Kanabec County, general update - April 2006
- Minnesota Association County Probation Officers Conference - May 2006
- Traverse County, general update - May 2006
- Martin County, general update - May 2006
- Chippewa County, general update - May 2006
- Minnesota Counties Computer Cooperative, annual conference - June 2006
- Minnesota Sheriffs Association, summer conference - June 2006
- Kittson County, general update - June 2006
- Lake of the Woods, general update - June 2006
- Koochiching County, general update - June 2006
- Itasca County, general update - June 2006
- Minnesota Association of Court Management, annual conference - June 2006
- League of Minnesota Cities, annual conference - June 2006
- Le Sueur County, general update - August 2006
- Washington County Court, Integrated Search Services training - September 2006
- Minneapolis Public Defenders, Integrated Search Services training - September 2006
- Redwood County, general update - September 2006
- Minnesota Sheriffs Association jailor’s conference - September 2006
- Minneapolis Drug Enforcement Agency, Integrated Search Services training - September 2006
- Chisago County, general update - October 2006
- Toward Zero Deaths Conference - November 2006
- Association of Minnesota Counties, annual conference - December 2006
- Minnesota Sheriffs Association, winter conference - December 2006
- Minnesota County Attorneys Association, annual conference - December 2006
**Agency Assessments**
The CriMNet Program continues to assess the technical capabilities and status of criminal justice agencies. The initial agency assessment was completed in early 2006; however, ongoing assessment of current systems, vendors and integration efforts is an essential tool to assist in determining priorities and strategies for future projects. This information is being used in the CIBRS, NEIS and eCharging projects.
**Progress and milestones:**
- Complete initial assessment of sheriff offices (100%) - January 2006
- Complete initial assessment of county attorney offices (100%) - January 2006
- Complete initial assessment of county jails (100%) - March 2006
- Complete initial assessment of police departments (97%) - March 2006
- Support vendor outreach - Ongoing
**Communications**
The CriMNet Program aims to enhance communication regarding the integration of criminal justice information. In addition to the communication-related activities begun in late 2005 and continued as ongoing program activities in 2006 (such as the “Cookbook” and Liaison Program, both detailed previously, as well as communication activities for the Policy Group and Task Force), the Program Office holds regular vendor conferences. These meetings engage two principal entities: vendors who provide services to state and local criminal justice agencies, and professionals in those agencies responsible for information management and integration. The conferences help inform vendors of the standards Minnesota is adopting and the future vision of the state. They have been well received by the vendor community and have proven to be a key strategy for the future.
Vendor conferences are held quarterly at the BCA in St. Paul and delivered to remote participants via Web conference. On average, 60 people attend each meeting from outside the BCA, as well as a number of staff from both the CJIS and the CriMNet Program Office.
**Progress and milestones:**
- Facilitate quarterly vendor conferences - Ongoing
**Data Practices/Data Quality**
Data Practices and Data Quality at the state and local levels are two foundational policy areas which the CriMNet Program focuses on to ensure that data shared between agencies is accurate and that fair information practices and privacy principles are adhered to. The Data Practices/Data Quality Program presently consists of three major components: agreements; policies, procedures and practices; and data privacy and practices information and tools.
Updating the agreements that delineate the roles and responsibilities of the BCA, the courts, and other state and local systems in accessing and sharing information has been a key project over the past two years. This effort has resulted in the Agency Data Access Limitation Agreement (Agency Agreement) and the Court Data Sharing Agreement. The Agency Agreement, which was formally adopted in August 2006, includes requirements for following the Minnesota Government Data Practices Act, the security policies established by the BCA, and the federal guidelines for access. The Court Data Sharing Agreement is still being reviewed by court legal staff before formal adoption.
The CriMNet Program Office works closely with the Information and Policy Analysis Division (IPAD) of the Department of Administration and others to develop data practices standards for information sharing based on the federal Fair Information Principles. This effort includes establishing policies and procedures for individuals to review their non-confidential BCA data and to process a challenge to the data accuracy. Compliance standards are included in the Agency Agreement as well as posted on the website.
The CriMNet Program Office has created a Privacy Impact Assessment (PIA) template for agencies to use in the development, implementation and management of statewide systems. This template ensures that all information practices and privacy principles are considered as state and local agencies develop new systems. The Program Office is also currently developing a data practices booklet that will detail the basics of data quality and how to implement and enforce it. This booklet will include the definition and components of data quality and assist agencies with data privacy and data practices compliance. Many local agencies want to comply with data privacy requirements but have not had adequate training or lack an adequate understanding of state law. These new resources will equip state and local agencies with the knowledge needed to comply with federal and state law.
**Progress and milestones:**
- Adopt Data Quality Business Plan (by the Task Force) - May 2006
- Adopt Agency Agreement (by Department of Public Safety and the Attorney General’s Office) - August 2006
- Create Privacy Impact Assessment template – March 2006
- Adopt Court Data Sharing Agreement (by BCA and State Court Administrator) - 2007
- Create data quality information booklet - December 2007
- Develop and maintain data practices policies and procedures - Ongoing
VII. 2006 Other CriMNet Enterprise Projects
The following projects are CriMNet Justice Enterprise initiatives managed by other state agencies under the oversight of the Policy Group. The CriMNet Program Office may provide input or consultation but does not provide direct oversight or funding over any of the following projects.
**Minnesota Court Information System (MNCIS)**
The Minnesota Court Information System (MNCIS) was designed to replace the old legacy court management system (TCIS). TCIS is a case- and county-based system, whereas MNCIS is a person-based, statewide system. To date, 76 sites (69 entire counties and portions of Hennepin, Ramsey and Sherburne counties) have been converted from TCIS to MNCIS. Part of the MNCIS rollout is to provide integration services so information can be consumed and supplied between the courts and other criminal justice business partners.
**Progress and Milestones:**
- Complete implementation in 33 sites - January - December 2006
- Complete implementation in the 5th, 8th, 3rd Judicial Districts - April - June 2006
- Complete implementation of the remainder of the judicial districts - December 2007
- Complete gap analysis and prepare change requests in five of the largest counties (Anoka, Washington, Dakota, Sherburne, Ramsey) - 2006 - Ongoing
- Provide training for one new release to current MNCIS counties – July 2006
- Complete customization with three additional releases for Minnesota - February - August 2007
**Statewide Supervision System (S³)**
The Statewide Supervision System (S³) is a centralized repository containing information on anyone under probation/supervised release, as well as anyone booked into jails, prisons or detention facilities. Information in S³ is delivered to users via a secure Web application. In addition, the Department of Corrections and the Minnesota Sentencing Guidelines Commission have collaborated to eliminate the manual sentencing guidelines worksheet process by including automated sentencing guidelines worksheets in S³. The Statewide Supervision System is accessible to criminal justice agencies only as per Minnesota Statutes 241.065 and public defenders as per Minnesota Statutes 611.272.
**Progress and milestones:**
- Integrate with Minnesota federal probation and pre-trial supervision agencies - December 2005
- Redesign and implement Detention Information System - June 2006
- Implement hardware/infrastructure enhancements - September 2006
- Implement software/infrastructure enhancements - November 2006
- Redesign assessment modules - February - June 2007
- Redesign sentencing worksheet module - April - August 2007
**Comprehensive Incident-Based Reporting System (CIBRS)**
The Comprehensive Incident-Based Reporting System (CIBRS) is a database containing Minnesota law enforcement incident data for investigative purposes (data maintained by a law enforcement agency in a records management system (RMS) regarding calls for service and/or officer-initiated events). The database was completed in December 2005; however, with limited exception, local agencies have not submitted data to CIBRS for various reasons, including limited resources, lack of vendor cooperation, and lack of technical capability. The CriMNet Program Office currently has funds dedicated to connecting local agencies to CIBRS (see grant program).
**Progress and milestones:**
- Train and certify individuals who will be accessing CIBRS - Ongoing
- Upgrade Criminal Justice Reporting System (CJRS) and establish relationships between CIBRS submissions and CJRS reporting requirements (separate project known as MN NIBRS) - 2010 *(dependent on available funding)*
**Automated Fingerprint Identification Service (AFIS)/Livescan**
The Automated Fingerprint Identification Service (AFIS) matches fingerprints submitted electronically (through Livescan devices) against those in the system to assist in the accurate identification of individuals. This project has been pivotal in reducing the time required for accurate identification from months to two hours or less.
This project is designed to upgrade and replace the present AFIS to address expanded technology capabilities and anticipated additional legislative and functional work requirements. The mission of AFIS is a critical part of the criminal justice system and additional needs will be identified as biometrics evolve and as Minnesota requires quick and accurate identification of individuals. In addition to the new AFIS, a second major component of this project is Biometric Identification (BioID) workflow which is a business process management service to coordinate how information flows between services requesting biometric identification (such as Livescan devices) and the service receiving the results (such as criminal history). These two components will need to be completed in conjunction with each other.
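The coordination role of the BioID workflow can be sketched as a small routine that carries a request from the capturing service to the matcher and routes the result to the consumer. All names here are hypothetical stand-ins, not the actual BioID design:

```python
# Hypothetical workflow sketch -- not the actual BioID design.

def bioid_workflow(fingerprint, afis_match, deliver_result):
    """Coordinate a biometric identification request: send the print
    to the matcher, then route the result to the requesting service."""
    identity = afis_match(fingerprint)
    deliver_result(identity)
    return identity

# Stand-ins for the AFIS matcher and a criminal-history consumer:
known_prints = {"FP-001": "subject-42"}
results = []

identity = bioid_workflow(
    "FP-001",
    afis_match=lambda fp: known_prints.get(fp, "no match"),
    deliver_result=results.append,
)
```

Because the workflow layer owns the routing, the matcher (AFIS) and the consumers (such as criminal history) can be replaced or upgraded independently, which is why the two components must be completed in conjunction with each other.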
**Progress and milestones:**
- Award contract for AFIS proposal - Second Quarter 2006
- Design BioID workflow management - First Quarter 2006 - First Quarter 2007
- Test AFIS/BioID combined functionality - Second Quarter 2007 - Fourth Quarter 2007
- Complete implementation - December 2007
**Integrated Criminal History System (ICHS)/New Computerized Criminal History System (nCCH)**
The BCA Integrated Criminal History System (ICHS) initiative is an effort to re-envision the way criminal history information is managed in Minnesota and to improve service to BCA customers. Through this initiative, the BCA seeks to focus on users' needs for content, access, and dissemination; evaluate and re-engineer criminal justice business processes related to criminal history; and replace the existing computerized criminal history system with a new computerized criminal history system (nCCH).
The current criminal history system is 20 years old and becoming obsolete. This system no longer meets the requirements for accurate and complete criminal history information. Currently, there is not a complete criminal history record hosted in one statewide repository because certain data residing in local or other state repositories is not accessible by the current criminal history system at the BCA. The new system (nCCH) will have enhanced capabilities such as integration with other systems including the Automated Fingerprint Identification System (AFIS), Identity Access Management Service (IAM), Name Event Index Service (NEIS), and upgraded Livescan fingerprint capture devices. The new system will also interface to existing state, county, city and federal justice systems. These enhanced capabilities will increase the accuracy and completeness of criminal history.
**Progress and milestones:**
- Determine high-level requirements - June 2006
- Determine detailed requirements - First Quarter 2007
- Implement nCCH - 2010 (*dependent on available funding*)
- Implement ICHS - 2010 (*dependent on available funding*)
Security Architecture/Identity Access Management (IAM)
As state electronic information repositories were developed in Minnesota, each developed its own security protocols and user administration systems. As a result, users accumulate dozens of usernames and passwords for the different systems available, even though each system bases its access on the job duties assigned by the agency. This creates vulnerabilities when data is shared between agencies and reduces efficiency, because accessing information is time-consuming and cumbersome. In mid-2005, the CriMNet Program Office contracted with an independent consulting firm, Deloitte and Touche, to evaluate current practices and develop a Security Architecture Plan.
One of the recommendations in the Security Architecture Plan was to implement a coordinated identity and access management (IAM) system within key criminal justice organizations within the state, including BCA systems. Through the implementation of this system, the users of the BCA information systems will see a number of benefits
including creation of a “single sign-on” (reducing the number of IDs and passwords that each user must maintain), a security service which will determine user identity and privileges, and implementation of user-to-system and system-to-system security protocols. This project will greatly increase the security of information shared by the BCA and will ensure data practices are being adhered to.
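A minimal sketch of the single sign-on and privilege-check idea might look like the following. The class, user names, credentials, and system names are entirely hypothetical illustrations, not the planned IAM design:

```python
# Hedged sketch: one identity record with per-system privileges,
# replacing separate usernames/passwords per system. All names are
# illustrative, not the actual IAM design.

class IdentityAccessManager:
    def __init__(self):
        # user id -> {"credential": ..., "privileges": {system: set of actions}}
        self._users = {}

    def register(self, user_id, credential, privileges):
        self._users[user_id] = {"credential": credential,
                                "privileges": privileges}

    def sign_on(self, user_id, credential):
        """Single sign-on: one credential check for all systems."""
        user = self._users.get(user_id)
        return user is not None and user["credential"] == credential

    def authorize(self, user_id, system, action):
        """Determine user privileges for a given system and action."""
        privileges = self._users[user_id]["privileges"]
        return action in privileges.get(system, set())

iam = IdentityAccessManager()
iam.register("officer1", "s3cret",
             {"criminal_history": {"read"}, "mrap": {"read", "search"}})
```

With one identity record per user, revoking access or auditing who can see what becomes a single lookup rather than a sweep across dozens of separate user databases.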
**Progress and milestones:**
- Solicit the Request for Proposals (RFP) - November 2006
- Select IAM system contractor - January 2007
- Complete plan and design for IAM – February 2007 - February 2008
- Develop and implement IAM - February 2009 (*dependent on available funding*)
**Livescan Message Enhancement (LME)**
Livescan Message Enhancement (LME) was developed to help agencies manage the booking process via the Livescan device (Livescan devices capture electronic fingerprints). LME provides a Web-based view into all of the Livescan messages for a specific agency’s Livescans. The LME records the booking and the results and allows authorized users to view the original booking, responses from the BCA, and all updates to the booking in an easy-to-read format. Phase II will look at expanding the integration capabilities built into LME.
**Progress and Milestones:**
- Complete user pilots of 11 agencies - January 2006
- Complete statewide implementation (in use in 26 agencies, with 71 users) - December 2006
**Computerized Criminal History (CCH) Agency Interface**
Agency Interface is a Web-based application that provides criminal justice agencies with the means to view criminal history records and suspended court dispositions. In addition, this application provides the means for law enforcement to edit criminal history data and notify the courts that court dispositions possibly require changes. The current “Automatic Notification” message has also been included in the application functionality. This feature will allow agencies to view their most recent suspense records via this application.
**Progress and milestones:**
- Complete internal production testing - December 2005
- Complete external pilot testing - February 2006
- Complete statewide training and rollout (in use in 138 agencies with 202 users, including all 87 counties) - March 2007
**Minnesota Repository of Arrest Photographs (MRAP)**
The Minnesota Repository of Arrest Photographs (MRAP) is a database of arrest and booking photos submitted from law enforcement agencies. The MRAP provides criminal justice agencies with an opportunity to search arrest and booking photos from a variety of law enforcement agencies, to create lineups and witness viewing sessions from those photos, and to enroll unidentified persons into the facial recognition component in an attempt to obtain accurate identification. There are currently 57 counties that submit arrest photos to the statewide repository.
**Progress and Milestones:**
- Implement new single photo lineup - June 2006
- Complete new release of MRAP (with faster response time, easier installation and improved facial recognition module) - June 2006
**Audit Trail Services**
The overall goal of the Audit Trail Services project is to provide a unified audit trail repository for all Criminal Justice Information System (CJIS) applications to be used by the BCA for audit and investigative purposes. This is being implemented to ensure that the right person has access to the right information.
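A unified audit repository that answers "who accessed what, across all applications" can be sketched as follows. The record fields are assumptions for illustration, not the actual service design:

```python
# Minimal sketch of a unified audit trail across applications;
# record fields are assumptions, not the actual service design.
import datetime

class AuditTrail:
    def __init__(self):
        self._records = []

    def log(self, user, application, record_accessed):
        self._records.append({
            "user": user,
            "application": application,
            "record": record_accessed,
            "timestamp": datetime.datetime.now(datetime.timezone.utc),
        })

    def by_user(self, user):
        """Audit question: what did this person access, across all
        CJIS applications, answered from one repository?"""
        return [r for r in self._records if r["user"] == user]

trail = AuditTrail()
trail.log("officer1", "criminal_history", "CCH#123")
trail.log("officer1", "MRAP", "photo#9")
trail.log("clerk2", "criminal_history", "CCH#456")
```

The point of unification is that `by_user` works without querying each application's private log, so investigators can verify that the right person accessed the right information.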
**Progress and Milestones:**
- Gather requirements - August 2005
- Complete proof of concept - February 2007
- Design final service architecture - 2007
- Document and publish participation requirements - 2007
- Incorporate initial applications - 2007
- Transition to steady state - 2007
**Predatory Offender Registry (POR) Refinements**
The BCA is currently working on refinements to the Predatory Offender Registration (POR) system that were mandated by the legislature in the 2005 session. The BCA has also scheduled work on functionality which will allow the Supreme Court to pass predatory offender registration requirements electronically from MNCIS to POR. The BCA has plans to integrate the informed consent Computerized Criminal History (CCH) background checks into a POR query so that the return to the requestor will contain both CCH information and POR information.
Progress and milestones:
- Verify Level 3 offenders (whose supervision has expired) semi-annually - Ongoing
- Require photographs semi-annually for Level 3 offenders - Ongoing
- Verify contact visits for Level 2 and Level 3 offenders (whose supervision has expired) - Ongoing
- Integrate courts predatory offender information to POR - December 2006
- Test POR/CCH integration - December 2006
DNR Hunting License/CCH Matching Project
The Department of Natural Resources (DNR) issues firearms hunting licenses to individuals without performing a criminal background check. While it is not illegal for a convicted felon to purchase a hunting license, it may be illegal for that person to possess a firearm. This project will match individuals who purchase a hunting license (one that involves the use of a firearm) against the Computerized Criminal History (CCH), warrant, Orders for Protection (OFP), and probation databases. The BCA will generate reports of individuals who are potentially ineligible to possess a firearm and distribute them to various law enforcement agencies.
Progress and Milestones:
- Develop requirement specifications and scope statement - May 2006
- Develop licensing database form for law enforcement - tentative December 2006 - January 2007
- Develop DNR – CCH – warrant match - November 2006
- Design report delivery method - 1st Quarter 2007
- Develop OFP and probation match - To Be Determined
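The matching step described above can be sketched in miniature as follows. The field names and the name-plus-date-of-birth matching rule are illustrative assumptions only; the BCA's actual matching criteria and data sources are not specified here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Person:
    """Hypothetical identity key; frozen so it can be used in a dict/set."""
    last_name: str
    first_name: str
    dob: str  # YYYY-MM-DD

def match_license_holders(firearm_licenses, disqualifying_records):
    """Return license holders who also appear in CCH/warrant/OFP/probation data.

    `disqualifying_records` is a list of (Person, source_description) pairs.
    Matching here is exact name + date of birth; a production system would use
    more robust identity resolution (e.g., fingerprint-linked identifiers).
    """
    record_index = {person: source for person, source in disqualifying_records}
    flagged = []
    for person in firearm_licenses:
        if person in record_index:
            flagged.append((person, record_index[person]))
    return flagged
```

The output list corresponds to the report of potentially ineligible individuals that would be distributed to law enforcement for follow-up; a match here is only a lead, not a determination of ineligibility.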
Custody Suspense Project
As the BCA has worked on reducing the number of adult suspense records, additional types of adult suspense records were identified; these have been grouped together and identified as “custody suspense”. They include custodial information from the Department of Corrections (discharges from probation, sentence reductions, restoration of civil rights, firearms eligibility) and custody dispositions from the courts that are sent electronically.
Progress and Milestones:
Analysis at the BCA has determined that the main reasons these records go into suspense are the same as, or similar to, those for adult suspense records. This project has identified new requirements for the Integrated Criminal History System (ICHS), the new Automated Fingerprint Identification System (AFIS), and the Biometric Identification (BioID) projects. Custodial court dispositions that go into suspense are worked by data analysts at the BCA. Other than the new requirements identified for the new projects mentioned above, the efforts in this area are now part of day-to-day operations at the BCA.
Juvenile Criminal History Suspense Project
With the progress made toward reducing adult suspense records, juvenile criminal records are the second of two areas identified as needing additional work. The Juvenile Criminal History project consists of analyzing the procedures for how juvenile criminal data is captured and reported, to ensure that complete and current juvenile criminal history data is available in the criminal history record. Resolving juvenile suspense records requires research and resolution by BCA staff for each record.
Progress and Milestones:
This project has evolved into a comprehensive review of the juvenile records in the Computerized Criminal History system. Procedures have been developed to determine status of arrest records. Some records are able to be automatically deleted while others need to be manually reviewed. Procedures for reviewing various categories of records have been developed and data analyst staff plan to begin the process of reviewing the individual records. This project has identified new requirements for the Integrated Criminal History System (ICHS), the new Automated Fingerprint Identification System (AFIS), and the Biometric Identification (BioID) projects. Other than the new requirements identified for new projects mentioned above, the efforts in this area are now part of day-to-day operations at the BCA.
VIII. Additional Legislative Reporting Requirements
In addition to the annual report required in Minnesota Statutes 299C.65, Subd. 2, the Criminal and Juvenile Justice Information Policy Group is also charged with studying and making recommendations to the governor, the Supreme Court and the legislature on the following 15 items [Minn. Statutes 299C.65, Subd. 1(d)].
| 299C.65, Subdivision 1d. | Status/Comments |
|--------------------------|----------------|
| **1. A framework for integrated criminal justice information systems, including the development and maintenance of a community data model for state, county, and local criminal justice information** | In 2006, the Policy Group undertook an extensive prioritization process that resulted in a “Framework” document identifying CriMNet’s long-term goals and strategies for integration. The Framework elements are divided into three parts: policy considerations, enabling activities (such as standards), and delivery of systems or applications. This lengthy process engaged the Task Force and the stakeholder groups it represents in identifying their key priorities and goals for CriMNet. This Framework, along with the detailed supporting plans for each initiative, represents, in practice, the concept of the Blueprint for Integration identified by the CriMNet Strategic Plan and Scope Statement. As each prioritized strategic initiative is commenced, project documentation will expand upon policies, definitions, standards, and strategies for use by state and local agencies in their effort to participate in each initiative. Detailed project plans, including business cases, scope statements, milestones, and work breakdown structures, will be added to specify when work will be done and when the goals for each initiative will be met. **Recommendation:** As each prioritized strategic initiative is commenced, policies, definitions, standards, and strategies for use by state and local agencies in their effort to participate in each initiative will be developed. When complete, the Blueprint will include policies (data policies and others), business and technical integration standards, strategies, infrastructure definition, and interfaces. The Blueprint will describe what is required to participate in each justice information sharing initiative. Report annually on progress. *Included in current Scope Statement* |
| **2. The responsibilities of each entity within the criminal and juvenile justice systems concerning the collection, maintenance, dissemination, and sharing of criminal justice information with one another** | See #1 above. **Recommendation:** Report annually on progress. *Included in current Scope Statement* |
| **3. Actions necessary to ensure that information maintained in the criminal justice information systems is accurate and up-to-date** | The CriMNet Program has initiated a Data Quality Project that consists of three major initiatives: development of service agreements with users and data providers, development of data quality standards and measures, and development of security measures. An additional initiative out of the CriMNet Program Office is the Business Process Improvement Project. **Recommendation:** Report annually on progress. *Included in current Scope Statement* |
| **4. The development of an information system containing criminal justice information on gross misdemeanor-level and felony-level juvenile offenders that is part of the integrated criminal justice information system framework** | **Recommendation:** Development of this system was completed in early 1998. The CriMNet Program Office continues to work on prevention efforts for juvenile records still going into suspense. Future reporting as needed. |
| **5. The development of an information system containing criminal justice information on misdemeanor arrests, prosecutions, and convictions that is part of the integrated criminal justice information system framework** | The Minnesota Court Information System (MNCIS) integration to the Computerized Criminal History file (CCH) includes targeted misdemeanors; as counties are converted to MNCIS, the data is now available in CCH. In 2005, the courts passed *all* targeted misdemeanors from April 2002 to present to CCH and initiated a process to pass to CCH the archived TCIS targeted misdemeanor data (1997 - April 2002) on a county-by-county basis as counties are converted to MNCIS. **Recommendation:** Report annually on progress. *Included in current Scope Statement* |
| **6. Comprehensive training programs and requirements for all individuals in criminal justice agencies to ensure the quality and accuracy of information in those systems** | There are a number of training programs available to criminal justice agencies related to the accuracy and quality of data. Currently, the BCA’s Data Integrity Team and the Training/Auditing Division within CJIS are offering specialized training statewide on criminal history, Livescan, the Integrated Search Services application, and other statewide data functions. In addition, the CriMNet Program Office has implemented an outreach/liaison program to assist local agencies in developing plans to improve their data quality and accuracy through business process improvements. The Task Force and Program Office have prioritized adding additional training and auditing capacity to the BCA. **Recommendation:** Report annually on issues identified by CriMNet business analysis and progress made. *Included in current Scope Statement* |
| **7. Continuing education requirements for individuals in criminal justice agencies who are responsible for the collection, maintenance, dissemination, and sharing of criminal justice data** | A number of training/certification programs are available through the BCA in such areas as CCH, Livescan, National Crime Information System (NCIC), and suspense file improvement. In addition, the consolidation of the BCA and CriMNet trainer/auditors has increased the effectiveness and efficiency of overall training efforts. Other CriMNet-related projects also offer specialized training (Statewide Supervision System, Court Web Access, Predator Offender Tracking, Minnesota Repository of Arrest Photos, etc.). Data Practices training programs are planned to be developed and incorporated into existing training as appropriate. **Recommendation**: Future education requirements should be identified and prioritized through CriMNet user prioritization and outreach efforts. |
| **8. A periodic audit process to ensure the quality and accuracy of integrated criminal justice information systems** | As a part of the CriMNet Strategic Plan, the importance of data quality standards was identified as a key objective. As part of the business plan for the quality project, CriMNet will develop standards and processes for auditing, as well as quality assurance standards and methods of evaluating data quality and accuracy. CriMNet will also work with the BCA’s Auditing Unit to add data quality audits to its functions. The Task Force and Program Office have prioritized adding additional training and auditing capacity to the BCA. **Recommendation**: Report annually on progress and as needed on recommendations for process and legislative changes. The CriMNet Program Office has also developed a Privacy Impact Assessment (PIA) template which will be used on all projects that deliver any kind of technology solution. The Program plans to roll out this measure to other solution providers as well. *Included in current Scope Statement* |
| **9.** | See #1 above. In support of this approach, the CriMNet Program Office conducted a technology inventory of criminal justice agencies in the state. The purpose of the assessment was to identify the status of agencies’ hardware/software platforms, as well as their information technology resources. This information will help to establish a baseline measure of readiness for integration. Agencies were also asked to provide information about planned technology initiatives, e.g., future upgrades or replacements of systems. This information will help to determine the degree of effort involved in rolling out particular CriMNet services to specific agencies and the agencies’ ability to participate in information sharing and integration efforts. This database was successfully used to identify priorities for agency participation in the Comprehensive Incident-Based Reporting System (CIBRS), the Name-Event Index Service (NEIS), and the eCharging Service. **Recommendation**: Report annually on the technology resource status of criminal justice agencies and on needs related to specific enterprise information sharing and integration initiatives and projects in accordance with the Framework Plan. *Included in current Scope Statement* |
| **10. The impact of integrated criminal justice information systems** | The Criminal and Juvenile Justice Information Task Force has, through “Delivery Teams,” developed recommendations for the 2004, 2005, and 2006 Legislatures related to the privacy interests of individuals. To date, most recommendations have been enacted. An additional recommendation on access to integrated data has been developed for possible consideration by the 2007 Legislature. In addition, a Task Force Delivery Team including broad public participation has made recommendations on changes to statutory background checks and to the criminal record expungement process. As noted above, the CriMNet Program Office has also developed a Privacy Impact Assessment (PIA) template which will be used on all projects that deliver any kind of technology solution. The program plans to roll out this measure to other agencies involved in providing technology solutions as well. **Recommendation:** The delivery team for background checks and expungements has additional issues for study and recommends continued work in this area. Report annually or as needed. *Included in current Scope Statement* **Recommendation:** The Criminal and Juvenile Justice Information Policy Group and Task Force will monitor proposed legislation and fiscal impacts and report as needed. |
| 11. The impact of proposed legislation on the criminal justice system, including any fiscal impact, need for training, changes in information systems, and changes in processes | **Recommendation:** Report completed. Future reporting as requested. **Included in current Scope Statement** |
| 12. The collection of data on race and ethnicity in criminal justice information systems | As referenced in the 2003 Annual Report, the BCA assisted with the Racial Profiling study coordinated by the Office of Drug Policy and Violence Prevention. The Council on Crime and Justice completed a final report based on data collected through the BCA for report to the Minnesota Legislature. **Recommendation:** Report completed. Future reporting as requested. **Included in current Scope Statement** |
| **13. The development of a tracking system for domestic abuse orders for protection** | Though the original system is complete, an issue has been identified: when temporary restraining orders are extended, the Brady indicator (weapons prohibition) is not set. A study was conducted and the results reported to the Judicial Branch. The report recommended additional training of court personnel on the impact of the extended temporary orders, as well as changes to the Orders for Protection (OFP) system. These activities have been added to judicial branch work plans. In addition, they have made changes to the standard petition to help alert the petitioner to the impact of the extended temporary orders. **Recommendation:** Report on progress of the recommended changes. |
| **14. Processes for expungement, correction of inaccurate records, destruction of records, and other matters relating to the privacy interests of individuals** | A Task Force Delivery Team including broad public participation has made recommendations on changes to statutory background checks and to the criminal record expungement process. At a high level, consideration of automatic expungement for arrests and dismissals; automatic expungement for continuances for dismissal and stays of adjudication; eligibility to petition for expungement for certain convictions under certain circumstances (including juveniles); access to expunged records; effect of expungement; and the expungement process is recommended. For both policy areas, further study of additional issues is suggested. Some of these are broader than just the criminal justice process – for example, the commercial harvesting of public data, which may not be expunged currently even if the data is expunged in the criminal justice process. **Recommendation:** Make recommendations for process standardization and legislative/policy changes as needed. *Included in current Scope Statement* |
| **15. The development of a database for extended jurisdiction juvenile records and whether the records should be public or private and how long they should be retained** | There has been a database for Extended Jurisdiction Juvenile (EJJ) records for many years. These records are governed by Minnesota Statutes 299C.65 prior to the imposition of the adult sentence. Once the adult sentence is imposed, the records would be handled in the same manner as adult records. **Recommendation:** Monitor and report as needed. |
IX. Appendices
A. Minnesota Criminal Justice Integration Framework & Blueprint
B. Criminal and Juvenile Justice Information Policy Group Charter
C. Criminal and Juvenile Justice Information Task Force Charter
D. Criminal and Juvenile Justice Information Task Force By-Laws
| INITIATIVE | PROJECT | OUTCOME | PERFORMANCE MEASURE/ROI (timelines dependent on available funding) |
|------------|---------|---------|------------------------------------------------------------------|
| P1. Criminal Justice Information Policy Issues | a. Study of Background Check Law in MN | Recommendations on options for a simplified but comprehensive statutory background check policy. | This is an example only; the Policy Group would develop the outcome measures: “Statutory background checks will be simplified to 5 types or fewer, and will be fingerprint-based.” Criminal justice agencies and stakeholders will agree on policy options and consequences. Information on the hazards of non-fingerprint-based background checks. |
| | b. Study of Expungement Law in MN and effects of State v. Schultz decision | Recommendations on policy to balance public safety needs for history data vs. an individual's need for employment/housing after satisfaction of sanction. | The expungement process will be clear and understandable to a lay person. Better information to data subjects on how/why they were disqualified so they can challenge mistaken identification. |
| **P2. Privacy/Access** | a. Policies for user authentication and system-to-system authentication (verification that a user or system is who they say they are and what their privileges are). | Security of data and implementation of data security policy in an integration environment. | All justice agencies with statewide systems will understand, adhere to, and implement the policy. Agencies will pass audits (see Audit Program below). |
| | b. Policies for acceptable use | Acceptable use. | All justice agencies with statewide systems will understand, sign, adhere to, and implement the Acceptable Use Policy. Agencies will pass audits (see Audit Program below). |
| **P3. Data Practices** (Task Force Rank 6th) | a. Privacy Impact Assessment Policy | Policy for statewide systems development, implementation, and management to ensure fair information practices and privacy principles are considered. | All agencies building or buying new statewide systems will utilize a PIA in the development lifecycle. |
| INITIATIVE | PROJECT | OUTCOME | PERFORMANCE MEASURE/ROI (timelines dependent on available funding) |
|------------|---------|---------|------------------------------------------------------------------|
| **ENABLING** – Task Force Rank 1st – **E1. Identification (Rollout of Identification Roadmap)** | **a. Name Event Index Service (NEIS)** | Criminal justice records will be linked electronically, with most linked to a biometric (fingerprint). | 70% of all designated criminal justice events will be linked by 2010 (31 event types identified in the I.D. Roadmap). |
| | **b. Completion of submission to statewide booking photo database (MRAP)** | Statewide arrest photos. Currently 30 counties do not have the technology to capture arrest photos and provide them to the MN Repository of Arrest Photos (MRAP). | 100% of bookings will have accompanying arrest photos in the state database by 2011. |
| | **c. Enhanced Biometric Identification Capability, including implementing FBI standards (making Minnesota an NFF state)** | Ability to capture different types of fingerprints (two print, slap print, etc.), as well as palm and side-of-palm prints, for faster, more reliable identification and crime solving (latent processing). The National Fingerprint File (NFF) standard eliminates the need to roll prints on every charge – allowing two or slap prints for all subsequent charges on the same person – so that the identity of the individual can be returned to the squad car within seconds, enhancing officer/public safety. | Speed up 10-print biometric processing commonly used in jail bookings, reducing average processing times from approximately 1 hour to 5 minutes or less. Reduce data errors to 3% or less in 10-print biometric transactions. Equip at least 50 percent of squad shifts, courtrooms, and probation check-in locations with rapid ID units by 2010. |
Revision 10 - As Adopted by the Criminal and Juvenile Justice Information Policy Group on December 13, 2006
| INITIATIVE | PROJECT | OUTCOME | PERFORMANCE MEASURE/ROI (timelines dependent on available funding) |
|------------|---------|---------|------------------------------------------------------------------|
| **ENABLING** | | | |
| E2. Security (ability to exchange and search information in a secure manner) – Task Force Rank 4th, Dependent Project | a. Implementing user-to-systems and systems-to-systems security (identity access management or IAM) and complete implementation of single sign-on. | Secure exchange of information between criminal justice entities. Ensuring that data policy rules are enforced across the entities. | All BCA statewide systems will be connected through the security service by FY 2010. Includes funds to position large counties to participate in identity management. |
| E3. Continuous operations of mission critical systems – Non-dependent Project | a. Business Continuity Plans and Infrastructure. | BCA-managed mission critical criminal justice information systems and statewide integration infrastructure will have business continuation plans and infrastructure, as well as 24x7 support. | Business continuation plan completed by FY’09. |
| E4. Information Audit Capability – Security, Data Quality, and Data Practice – Task Force Rank 6th, Non-dependent Project | a. Performing Audits | Ability to audit criminal justice agencies on integration security policy, practice, and technology; compliance with data policy and data accuracy standards and agreements. | By FY’11 the BCA will have the capability to audit all agencies once every three years. |
| E5. Data Quality (Task Force Rank 6th) | a. Service and Data Quality Agreements | All criminal justice users and providers will have signed user and provider agreements governing quality, access, and security by FY’08. | All agreements between data providers and users of state-managed statewide systems will be in place by FY 2008, with the ability to be audited per E4. |
| E6. Technical Standards and Policies (Task Force Rank 3rd) | a. Architectural Standards for new systems or vendor systems | Standards for new systems architecture to facilitate integration. | The Program Office will publish a standards and integration tool website by FY’07, which will be continuously maintained and updated. The standards will adhere to state enterprise standards and will be tied to all program activities and funding. |
| | b. Data Standards | A Minnesota Criminal Justice Data Model (MNJ) and dictionary that is an extension of the US Global Data Model and dictionary. | The Model and Dictionary will be completed in FY’07 and continuously maintained. All future integration projects will adopt the MNJ. |
| | c. Technical Assistance to Local Agencies | Assess new technologies and their applicability relative to architectural and data standards. | Assess the feasibility of providing this service by FY’07 and, if feasible, roll out by FY’09. |
| E7. Business Standards/Business Process Improvement | a. Electronic Charging and Warrant Processes and other processes as needed (e.g., MOC’s) | Electronic charging will define the business standards and process, and electronic signatures, for implementing electronic charging across the state of MN. The warrant process in MN will be re-engineered. | The process standards for eCharging will be endorsed by the Policy Group and promulgated by the Program Office by FY’08. Process standards for the warrant process will be adopted and promulgated by FY’08. |
| E8. Communications and Assistance to Local Agencies/Local Government | a. Communication/Liaison Outreach/Agency Assistance | Local agencies will be informed about program activities to understand the statewide vision for integration and how it can affect their agency. | Activities include six to eight conferences and 12 to 20 liaison meetings per year. |
| | b. Small County/Agency Integration Planning | Small Agency Planning Assistance and Integration Cookbook | Analysis in Washington County and Nobles County will result in a plan for those counties as well as an integration “Cookbook” (a document to assist medium/smaller counties in integration planning/implementation) by FY2007. The Cookbook will be written in a way that supports local effort with minimal assistance from |
| | c. Local Agency Assistance Team. Staff dedicated to providing direct planning assistance to medium/smaller jurisdictions to facilitate county-based and regional integration | Medium/smaller jurisdictions will have the assistance they need to integrate locally or to gain access to the new state services such as eCharging and the Name Event Index Service. | Twelve agencies will be assisted each year commencing in FY’09. |
| | d. Vendor Communication and Assistance | Vendors will be knowledgeable of state integration initiatives and standards as they enhance their products. Major system vendors will be aware of state integration initiatives through vendor conferences facilitated by the Program Office, and will incorporate state standards and connection to state services (NEIS, eCharging, etc.) in future releases. | |
| E9. Financial Assistance for the Benefit of Local Agencies | a. Grants/Contracts to Local Agencies/Vendors – Enterprise-wide Focus *(Note: Funding for local impact has been included in each individual project.)* | Local grant/contract program to focus on supporting statewide initiatives such as making changes to local record and case management systems to supply data to CIBRS, use the eCharging Service, Name Event Index Service or Identity Access Management Service. This includes support of field-based reporting. The State of Minnesota will see a direct, statewide benefit from local agencies on any state funding provided. Any agency receiving a grant will utilize the state service for which grant funding is provided. | See E1.a performance measure above. |
| | b. Continued County-based Integration Implementation in Large Counties | Provide additional integration implementation funds to the large counties that have previously received grant funds. | The largest five counties that have previously received integration planning and implementation grants have projects in queue awaiting funding. |
| | c. County-based Implementation in Medium and Small Counties | Provide integration implementation funds to medium and small counties that receive implementation assistance per E8.b above (cookbook). | Medium and small counties may want to do implementation work based on the planning tools and planning assistance from the Program Office. |
| E10. State-Provided Systems for Local Agencies (Task Force Rank 5th, Non-dependent Project) | a. Analyzing the feasibility of providing systems for local agencies to more efficiently manage information electronically. | This project would establish the feasibility of the state providing such systems and, if deemed feasible, would establish the criteria and requirements for building such systems, without determining who should build the systems. | Feasibility study completed by FY’09. |
| INITIATIVE | PROJECT | OUTCOME | PERFORMANCE MEASURE/ROI (timelines dependent on available funding) |
|------------|---------|---------|------------------------------------------------------------------|
| **DELIVERY** – **D1. Increased Data and Kinds of Data Available** | a. Reduction of Suspense Records (traditional criminal history records not linked to a fingerprint) | Business process reengineering, the new Name Event Index Service and Roadmap, as well as the eCharging Service, will all help to facilitate suspense reduction. | 98% of all records will be linked to a fingerprint by FY’10. |
| Task Force Rank 1st, Non-dependent Project | b. New Computerized Criminal History (nCCH) System will utilize Criminal Justice Information Capture and Distribution Services (see below), the Identity Access Management Service and the Name Event Index Service, the New AFIS (nAFIS) System, and new LiveScan features | Criminal history will be accurate and complete with the addition of new linked data sources (nCCH to replace the 20-year-old CCH). | New system to be implemented by FY’10 and to meet user requirements. |
| Non-dependent Project | c. MN NIBRS (Component of CIBRS) | More detailed data to meet federal reporting standards; More ease of local collection and reporting | Implementation of the new system by FY'10 and including local adaptations by FY'12. Replaces 30 year old system. |
| Non-dependent Project | d. MN Criminal Justice Statute Service (Statute Service) | More accurate charging by prosecutors resulting in more accurate criminal history records. | Identify future enhancements and maintain the Statute Service - Ongoing |
**DELIVERY**
| INITIATIVE | PROJECT | OUTCOME | PERFORMANCE MEASURE/ROI (timelines dependent on available funding) |
|---------------------------------------------------------------------------|------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------|
| **D2. Criminal Justice Information Capture & Distribution Services** | **a. MN Criminal Justice Information Integration Services (MNCJIIS)** | Will increase speed, usability, easy of use. Includes single point of delivery/data entry to BCA systems. Individual justice practitioners will have data tailored to their specific business event and location e.g., traffic stop, booking, arraignment, etc.). Delivery will conform to state data policy. | Role based individually configured access to information utilizing Integrated Search Services and single user interface by FY’11 (including single sign on, user controlled filters, delivery to the point of need including mobile unit delivery) |
**DELIVERY**
| INITIATIVE | PROJECT | OUTCOME | PERFORMANCE MEASURE/ROI (timelines dependent on available funding) |
|------------|---------|---------|------------------------------------------------------------------|
| Dependent Project | b. eCharging Service Rollout. | Law Enforcement agencies and county and city attorneys will be able to electronically prepare and file with courts, including electronic signatures all felony, gross misdemeanor and statutory misdemeanor cases. Printing will be on demand. Will result in reduction in paper process and staff inefficiencies. It will contribute to traffic safety by including DWI charges. | eCharging services rollout will start in FY’08 and completed in FY’10. |
| Non-dependent Project | c. Warrant Processing. | Implementation of electronic distribution of warrant information between justice agencies. The final plans will be known at the completion of the business analysis in E7.a above. | Design phase to commence in FY’09. |
| INITIATIVE | PROJECT | OUTCOME | PERFORMANCE MEASURE/ROI (timelines dependent on available funding) |
|------------|---------|---------|------------------------------------------------------------------|
| **DELIVERY** | **D3. Other Agency Enterprise Initiatives** | a. MNCIS – convert from multiple trial court case management systems to a single system and convert from multiple data repository designs to a single data repository design. Implement statewide to all 10 judicial districts in all counties. | MNCIS implemented in 66 counties as of Sept. 22, 2006. The third, fifth, sixth and eighth districts are completely converted to the new case management system. The fourth, seventh, ninth and 10th districts will be completed by July 2007. The first and second districts will be completed by the end of December 2007. | Completion of the rollout of a single case management system will improve the capability of consistent service delivery in the trial court system regardless of court location. Improved performance and measurement capacity will also be available, as well as the capacity to handle increased workloads through productivity improvements. |
| | b. Statewide Supervision System (S3) | Complete security and functionality enhancements for Minnesota Sentencing Guidelines Commission (MSGC), Assessments, and load processes. | Enhancements for MSGC and Assessments will be complete by 7/1/08. Load process complete and recommendations for future load enhancements by 7/1/08. |
Purpose of the Criminal and Juvenile Justice Information Policy Group
The Criminal and Juvenile Justice Information Policy Group (Policy Group) provides leadership, high-level oversight, and accountability to the citizens of Minnesota for the successful completion of statewide criminal justice integration and information sharing.
- Whereas: The Minnesota initiative to integrate justice information commenced in the early 1990’s, including the enactment of 299C.65, and made considerable progress filling gaps in statewide information;
- Whereas: The creation of a CriMNet Program Office (Program) in 2001 has provided an additional advantage for integration in Minnesota;
- Whereas: The Criminal and Juvenile Justice Information Task Force (Task Force) has played a more active role in support of the work of the Policy Group since 2003;
- Whereas: The composition of the Policy Group has been strengthened with the addition of the Task Force Chairs and State Enterprise Chief Information Officer in 2005, and the appointment of a new Executive Director in 2006;
- Whereas: The expanded Policy Group has been evaluating its governance model in light of these events and intends to clarify its role and that of the Task Force and Executive Director;
Now therefore, the Policy Group establishes the following charter and directives to the Task Force and Executive Director:
Make-up of the Policy Group
The Policy Group is authorized under Minnesota Statutes 299C.65 and consists of the following members: Commissioner of Public Safety, Commissioner of Corrections, Commissioner of Finance, State Chief Information Officer; four members of the judicial branch appointed by the Chief Justice of the Supreme Court; and the Criminal and Juvenile Justice Information Task Force (Task Force) Chair and First Vice-Chair. This body has the authority to appoint additional non-voting members. The Policy Group is chaired by the Commissioner of Public Safety and meets quarterly and other times as needed.
Primary Responsibilities
The Policy Group exists to provide leadership for the overall strategic and policy direction of the statewide Criminal Justice Integration Enterprise (Enterprise). At the Enterprise level, the Policy Group has the responsibility to:
1. Define, affirm and periodically review the mission statement and priority directions of the Enterprise;
2. Establish high-level performance measures and outcomes for the CriMNet Enterprise and ensure compliance with business and technical standards;
3. Provide high-level approval and monitoring of the CriMNet Program budget and Enterprise initiatives as part of the state biennial budget process;
4. Establish and approve any high-level policy decisions that need to be forwarded to the governor/legislature;
5. Monitor the budgets of the Courts, Department of Corrections and Department of Public Safety as they relate to CriMNet;
6. Resolve significant differences between the Task Force, CriMNet Program and stakeholders;
7. Annually report to the governor, Supreme Court and legislature on legislative changes or appropriations needed to ensure that criminal justice information systems operate accurately and efficiently;
8. Determine Enterprise-wide strategies, including distribution of grant monies;
9. Advocate and testify for CriMNet and related Enterprise initiatives;
10. Link to other statewide and national entities engaged in justice initiatives;
11. Continue to be educated about topics related to the Enterprise.
In relation to the Task Force, the Policy Group has the responsibility to:
1. Assign Enterprise issues to the Task Force for research and possible options; the Task Force may initiate delivery teams to complete the work;
2. Review business priorities and grant strategies recommended by the Task Force;
3. Clarify the relationship of the Task Force to the Policy Group, the Executive Director, and the CriMNet Program;
4. Continue to rely on the Task Force as a major link to stakeholders and operational users of justice information.
In relation to the Executive Director, the Policy Group has the responsibility to:
1. Select, set the direction for and support the Executive Director;
2. Review business priorities and grant strategies recommended by the Executive Director;
3. Create an evaluation process and develop performance measures for the Executive Director as well as provide oversight and monitor performance of the Executive Director;
4. Frame policies and develop reporting mechanisms and measures related to finance and operations to guide the Executive Director and ensure the appropriate level of accountability to the Policy Group;
5. Clarify the relationship of the Executive Director to the Policy Group and Task Force.
Minnesota Statutes 299C.65 requires the Policy Group to appoint a Task Force to assist the Policy Group in their duties; and to monitor, review and report to the Policy Group on CriMNet-related projects and provide oversight to ongoing operations as directed by the Policy Group. In order to fulfill this statutory requirement, the Policy Group directs the Task Force to do the following:
1. Advise the Executive Director and Policy Group on enterprise activities;
2. Advise the Executive Director on business priorities;
3. Advocate constituent interests and communicate enterprise decisions back to constituents;
4. Review grant strategies developed by the Executive Director and suggest alternatives.
Minnesota Statutes 299C.65 allows the Policy Group to hire an Executive Director to manage the CriMNet projects and to be responsible for the day-to-day operations of CriMNet. The Executive Director serves at the pleasure of the Policy Group in unclassified service. The Policy Group directs the Executive Director to do the following:
1. Develop strategies to support the Enterprise vision as well as implement and maintain those strategies;
2. Direct the overall activities of the CriMNet Program Office including the short and long range strategic and financial plan to support the Enterprise vision;
3. Act as a liaison to the Policy Group, Task Force, legislature, criminal justice agencies (local, state, federal), and the general public on Enterprise issues and foster collaboration among those entities;
4. Provide regular reporting on CriMNet Program activities to the Task Force.
A. Purpose of the Criminal and Juvenile Justice Task Force
The Criminal and Juvenile Justice Information Task Force (Task Force) assists the Criminal and Juvenile Justice Information Policy Group (Policy Group) in their duties; monitors, reviews, and reports to the Policy Group on CriMNet-related projects; and provides oversight to ongoing operations as directed by the Policy Group.
- Whereas: The Minnesota initiative to integrate justice information commenced in the early 1990’s, including the enactment of 299C.65, and made considerable progress filling gaps in statewide information;
- Whereas: The creation of the CriMNet Program Office (Program) in 2001 has provided an additional advantage for integration in Minnesota;
- Whereas: The Task Force has played a more active role in support of the work of the Policy Group since 2003;
- Whereas: The composition of the Policy Group has been strengthened with the addition of the Task Force Chairs and State Enterprise Chief Information Officer in 2005, and the appointment of a new Executive Director in 2006;
- Whereas: The Policy Group adopted a new charter in March 2006 and provided specific directives to the Task Force;
- Whereas: The Task Force has been evaluating its role in light of these events and intends to clarify its role;
Now therefore, the Task Force establishes the following Charter and directives:
B. Composition of the Task Force
The Task Force is authorized under Minnesota Statutes 299C.65 and consists of the following members:
- two sheriffs recommended by the Minnesota Sheriffs Association;
- two police chiefs recommended by the Minnesota Chiefs of Police Association;
- two county attorneys recommended by the Minnesota County Attorneys Association;
- two city attorneys recommended by the Minnesota League of Cities;
- two public defenders appointed by the Board of Public Defense;
- two district judges appointed by the Judicial Council, one of whom is currently assigned to the juvenile court;
- two community corrections administrators recommended by the Minnesota Association of Counties, one of whom represents a community corrections act county;
- two probation officers;
- four public members, one of whom has been a victim of crime, and two who are representatives of the private business community who have expertise in integrated information systems;
- two court administrators;
- one member of the house of representatives appointed by the speaker of the house;
- one member of the senate appointed by the majority leader;
- the attorney general or a designee;
- two individuals recommended by the Minnesota League of Cities, one of whom works or resides in greater Minnesota and one of whom works or resides in the seven-county metropolitan area;
- two individuals recommended by the Minnesota Association of Counties, one of whom works or resides in greater Minnesota and one of whom works or resides in the seven-county metropolitan area;
- the director of the Sentencing Guidelines Commission;
- one member appointed by the state chief information officer;
- one member appointed by the commissioner of public safety;
- one member appointed by the commissioner of corrections;
- one member appointed by the commissioner of administration; and
- one member appointed by the chief justice of the Supreme Court.
Members shall be selected for their expertise in integrated data systems or knowledge of best practices.
C. Primary Responsibilities of the Task Force
In order to fulfill this statutory requirement, the Task Force has the following primary responsibilities:
1. Advise the Executive Director and Policy Group on enterprise activities; *(PG Charter)*
2. Advise the Executive Director on business priorities; *(PG Charter)*
3. Advocate constituent interests and communicate enterprise decisions back to constituents; *(PG Charter)*
4. Research and provide possible options regarding enterprise issues assigned by the Policy Group; *(PG Charter)*
5. Review grant strategies developed by the Executive Director and suggest alternatives; *(PG Charter)*
6. Recommend business priorities and grant strategies to the Policy Group. *(PG Charter)*
7. Assist the Policy Group in the filing of an annual report with the governor, Supreme Court, and chairs and ranking minority members of the senate and house committees and divisions with jurisdiction over criminal justice funding and policy; *(299C.65)*
8. Consult with the CriMNet program office as the office creates the requirements for grant requests and determines the integration priorities for the grant period; *(299C.65)*
9. Review funding requests for criminal justice information systems grants and make recommendations to the Policy Group; *(299C.65)*
10. Facilitate communications between the Policy Group and stakeholders and operational users of justice information. *(PG Charter)*
State of Minnesota
Criminal and Juvenile Justice Information Task Force By-Laws
This instrument constitutes the Bylaws of the Criminal and Juvenile Justice Information Task Force (Task Force), adopted for the purpose of regulating and managing the internal affairs of the Task Force.
A. Role of Task Force Member
It is intended that the Criminal and Juvenile Justice Information Task Force leverage the experiences, expertise, and insight of key individuals at organizations committed to building professionalism in project management. Task Force members are not directly responsible for managing program activities, but provide support and guidance for those who do. Thus, individually, Task Force members should:
- Understand the strategic implications and outcomes of initiatives being pursued through program outputs;
- Appreciate the significance of the program for some or all major stakeholders and represent their interests;
- Be genuinely interested in the initiative and encourage and contribute to critical thinking and evaluation of its component projects;
- Have a broad understanding of project management issues and approach being adopted.
In practice, this means Task Force members:
- Review the status of the program;
- Regularly attend Task Force meetings:
- If a member is absent for three consecutive meetings, the Chair will contact the person. If that person is absent two more consecutive times, that person is considered no longer a member of the Task Force and the person and the organization will be notified. The organization may appoint another member or the position will be declared vacant.
- If a member is absent 50% of the time in a rolling calendar year (six times in twelve months), the person is considered no longer a member of the Task Force and the person and the organization will be notified. The organization may appoint another member or the position will be declared vacant.
- Ensure the program’s outputs meet the requirements of the business owners and key stakeholders;
- Help balance conflicting priorities and resources;
- Provide guidance to the program team and users of the program’s outputs;
- Consider ideas and issues raised;
- Check adherence of program activities to standards of best practice both within the organization and in a wider context;
- Foster productive communication outside of the Task Force regarding the program’s progress and outcomes;
- Report on program progress when requested;
- Develop policy and recommend legislative changes as necessary to ensure the success of the program;
- Advocate constituent interests and communicate enterprise decisions back to constituents.
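The attendance rules in the list above (three consecutive absences trigger contact by the Chair, two further consecutive absences or absence from half the meetings in a rolling year end membership) are mechanical enough to express in code. The sketch below is illustrative only; the function name and the list-of-booleans meeting log are assumptions, not part of the by-laws:

```python
def membership_status(attendance, window=12):
    """Evaluate a member's standing under the by-laws' attendance rules.

    attendance: list of booleans, one entry per monthly meeting, ordered
    oldest to newest; True means present.  Returns one of:
      'ok'      - no rule triggered
      'contact' - 3 consecutive absences: the Chair contacts the member
      'removed' - 5 consecutive absences (3 + 2 more), or absent 6 or
                  more times in the last 12 meetings (the 50% rule)
    """
    # Rolling-year rule: absent six or more times in twelve months.
    recent = attendance[-window:]
    if recent.count(False) >= 6:
        return "removed"

    # Consecutive-absence rule: length of the current run of absences.
    run = 0
    for present in reversed(attendance):
        if present:
            break
        run += 1
    if run >= 5:
        return "removed"
    if run >= 3:
        return "contact"
    return "ok"
```

For example, a member present for nine meetings and then absent for the last three would be flagged for contact by the Chair.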
**B. Task Force Officers**
The Task Force shall elect from their membership one Chair, one First Vice-Chair and one Second Vice-Chair. The Officers constitute the Executive Board of the Task Force. The purpose of the Executive Board is to provide leadership to the Task Force in running the meetings, setting the agendas and creating and soliciting membership for delivery teams.
**RESPONSIBILITIES OF OFFICERS:**
- **Chair** – The Chair shall serve on the Policy Group as a voting member, representing the Task Force; the Chair shall report to the Policy Group the business transacted and recommendations forwarded by the Task Force and shall report back to the Task Force any actions of the Policy Group. The Chair shall serve on the Executive Board of the Task Force, facilitate the Task Force meetings and approve the agenda.
- **First Vice-Chair** – The First Vice-Chair shall serve with the Chair on the Policy Group as a voting member, representing the Task Force; the Vice-Chair shall assist the Chair in reporting to the Policy Group the business transacted and recommendations forwarded by the Task Force and in reporting back to the Task Force any actions of the Policy Group. The First Vice-Chair shall serve on the Executive Board of the Task Force and preside at meetings of the Task Force when the Chair is unavailable.
- **Second Vice-Chair** – The Second Vice-Chair shall serve on the Executive Board of the Task Force and preside at meetings of the Task Force when the Chair and First Vice-Chair are unavailable.
**C. Task Force Meetings**
- **QUORUM:**
- A quorum is a majority of voting members (more than one-half).
- Changes in the number of vacant positions will result in a change in the definition of “majority”. “Majority” is defined as a simple majority of non-vacant positions.
- **PROXIES:**
- Temporary proxies are allowed to serve on the Task Force as needed but are not allowed to vote.
- Until 8/1/07, permanent proxies are allowed to serve on the Task Force and are allowed to vote.
- **SCHEDULE:**
- The Task Force will meet monthly or as required to keep track of issues and the progress of the program’s implementation and on-going statewide support to its stakeholders.
- The biennial business meeting and conference may serve as the monthly meeting in the month in which those meetings are held.
- **PROCESS:**
- The Task Force will follow modified Robert’s Rules of Order in the conduct of meetings, motions, discussion, and voting.
- **MEETING AGENDA:**
1. Monthly meetings will be conducted in the following order:
A. Introductory Items such as:
1. Make Introductions
2. Review and approve Agenda
3. Review and approve Minutes from last meeting
4. Review of actions arising from previous Task Force meetings.
B. Update and Review of Program Status by CriMNet Executive Director and Program Staff
1. Overall Status
2. New issues arising since the last meeting
3. Review of Program or Program change orders
4. Budget
5. Formal acceptance of deliverables
6. Outstanding issues and accomplishments, open points, program conflicts
7. Review and prioritize new issue submittal forms
C. Issue Presentation(s)
D. Other:
1. Grant Updates
2. Delivery Team Updates
3. Legislative Updates
E. Agenda items for next meeting
**D. Task Force Election Process**
- **TERMS OF OFFICE:**
- Members of the Executive Board shall serve two-year terms, beginning at the close of the biennial business meeting, and may serve consecutive terms.
- In the event a member of the Executive Board cannot fulfill the full two-year term, a special election shall be held either at a special business meeting, called at any time to fill a vacancy on the Board, or in conjunction with a regular Task Force meeting.
- **NOMINATING COMMITTEE:**
The Chair shall appoint a Nominating Committee prior to elections and it shall consist of three Task Force members – one from local government, one from State government, and one member-at-large – and shall designate one member to serve as Nominating Committee Chair. The Nominating Committee shall run the election of officers at the biennial business meeting. The members of the nominating committee cannot be seeking election to a seat on the Executive Board of the Task Force.
The Nominating Committee shall solicit nominations for positions on the Executive Board and shall recommend candidates for those positions to the full Task Force at its biennial business meeting.
The Chair may either appoint a Nominating Committee for a special election or declare that nominations from the floor will be accepted.
- **PROXY VOTING:**
- The authority to vote by proxy may be given to another member of the Task Force.
- Any task force member wishing to vote by proxy must notify the Nominating Committee Chair, in writing or by e-mail, of the name of their proxy at least one day prior to the election.
- For the purposes of an election only, proxies count toward the fulfillment of a quorum requirement.
- **NOMINATIONS FROM THE FLOOR:**
- On the day of the actual election, nominations to fill Executive Board positions will be accepted from the floor.
- **VOTING FOR OPEN POSITIONS ON THE EXECUTIVE BOARD:**
- Voting will be completed through a non-secret process.
- **ANNUAL MEETINGS:**
- The Task Force shall hold a business meeting in September of every even-numbered year and shall sponsor a criminal justice information conference, highlighting innovations and progress in line with its mission, at a time designated by the Chair during every odd-numbered year.
**E. Task Force Delivery Teams**
**Delivery Teams**
- The Task Force may approve, by a majority vote of the members present at a meeting, the formation of Delivery Teams.
- The Executive Board may create a Delivery Team, which must be ratified at the next regularly scheduled Task Force meeting.
- While the Task Force maintains ultimate authority, it may delegate certain decision-making power to the Delivery Teams.
- The Executive Board shall solicit participation and appoint members of delivery teams to ensure appropriate representation.
- Delivery team members need not be members of the Task Force, and the participation of non-Task Force members is strongly encouraged.
- Delivery Teams shall report their activity, progress, and timeline to the Task Force quarterly, or as requested by the Task Force.
- A Task Force member shall serve as Delivery Team chair.
Multiyear Surveillance for Avian Influenza Virus in Waterfowl from Wintering Grounds, Texas Coast, USA
Pamela J. Ferro, Christine M. Budke, Markus J. Peterson, Dayna Cox, Emily Roltsch, Todd Merendino, Matt Nelson, and Blanca Lupiani
We studied the prevalence of influenza A virus in wintering waterfowl from the Central Flyway on the Gulf Coast of Texas. Of 5,363 hunter-harvested migratory and resident waterfowl and wetland-associated game birds sampled during 3 consecutive hunting seasons (September–January 2006–07, 2007–08, and 2008–09), real-time reverse transcription–PCR detected influenza A matrix sequences in 8.5% of samples, H5 in 0.7%, and H7 in 0.6%. Virus isolation yielded 134 influenza A viruses, including N1–N9, H1–H7, H10, and H11 subtypes. Low-pathogenicity H7 subtype was isolated during January, September, and November 2007 and January 2008; low-pathogenicity H5 subtype was isolated during November and December 2007.
Wild waterfowl, primarily species in the orders Charadriiformes and Anseriformes (1), are natural reservoirs for type A influenza viruses. These viruses, which are occasionally transmitted to other species, including humans, poultry, and swine, result in subclinical to highly pathogenic diseases. Two subtypes (H5 and H7) have been most frequently associated with high pathogenicity in poultry and are of considerable interest to the poultry industry and to researchers who study avian influenza viruses (AIVs) (2–4). The migratory nature of many waterfowl species and the persistence of AIV in them present a potential vehicle for global dissemination of influenza viruses, as well as a constant source of viruses and genetic material for new pandemic strains. Preventing the introduction and adaptation of wild bird–origin AIVs to other susceptible species is an efficient strategy for minimizing the effects of AIV on global health and the global economy (5,6). Thus, surveillance in reservoir species is crucial for identifying viruses and gene pools with interspecies and intraspecies transmission potential.
In North America, migratory birds use 4 major flyways: Pacific, Central, Mississippi, and Atlantic (www.flyways.us). Three flyways (Pacific, Mississippi, and Atlantic) are well represented in the literature that addresses AIV surveillance (summarized in [7]); however, data are limited for the Central Flyway (8–10). Approximately 90% of waterfowl that use the Central Flyway winter in Texas. Of these, ≈10 million ducks and geese winter in wetlands throughout the state, whereas 1–3 million ducks and >1 million geese winter along the Texas Gulf Coast (11). Before the implementation of surveillance programs to detect subtype H5N1 highly pathogenic AIV, few surveillance studies included migratory waterfowl on their wintering grounds or nonmigratory waterfowl during winter, particularly for the Texas–Louisiana Gulf Coast, where most studies were limited to a few waterfowl species and constrained by time of year and number of years studied (8,9,12,13). Although the US Interagency Strategic Plan for the Early Detection of Highly Pathogenic Avian Influenza H5N1 has extensively sampled waterfowl across all flyways, the program focuses on detection of subtype H5N1 virus; thus, only information pertaining to this subtype is publicly available (14). To understand the ecology, natural history, and evolution of influenza viruses, long-term surveillance studies are needed, particularly those that investigate waterfowl in understudied areas, such as wintering grounds. Long-term surveillance is even more important in areas where commercial poultry operations and migratory waterfowl stopover or wintering areas overlap (15).

Author affiliations: Texas A&M University, College Station, Texas, USA (P.J. Ferro, C.M. Budke, M.J. Peterson, D. Cox, E. Roltsch, B. Lupiani); and Texas Parks and Wildlife Department, Bay City, Texas, USA (T. Merendino, M. Nelson)

DOI: 10.3201/eid1608.091864

1Current affiliation: Ducks Unlimited, Texas Gulf Coast, Richmond, Texas, USA.
We recently reported AIV prevalence, as determined by real-time reverse transcription–PCR (rRT-PCR) and virus isolation, from a multiyear surveillance project (September 2005–January 2009) of hunter-harvested waterfowl in the Texas mid–Gulf Coast region (16). We found little variation in overall AIV prevalence within or between seasons, except for 1 season (2007–08) when the overall prevalence was higher (16). The objectives of the current study were to 1) determine subtype diversity of AIV in both migratory and resident waterfowl populations (mostly ducks and geese) to which humans may be exposed and 2) compare prevalence and subtype diversity of AIV among species, according to age and sex, focusing on the Texas mid–Gulf Coast region during early fall and winter, which coincides with the regional hunting season.
Methods
Sample Collection and Analysis
During 2006–09, cloacal swab samples were collected from hunter-harvested waterfowl (17) and other wetland-associated game birds (18) during 3 consecutive hunting seasons: September 2006–January 2007 (season 1), September 2007–January 2008 (season 2), and September 2008–January 2009 (season 3) at 4 state wildlife management areas (WMAs) along the Gulf Coast of Texas: Justin Hurst WMA in Brazoria County, Mad Island WMA in Matagorda County, Guadalupe Delta WMA in Calhoun County, and Matagorda Island WMA in Calhoun County (Figure). Trained field personnel identified the species, sex, and age (when possible) of the bird on the basis of plumage (19). The bird’s age was recorded as adult, if it was not the bird’s hatch-year, and juvenile, if it was the bird’s hatch-year. Waterfowl species and areas sampled reflected hunters’ choices and personnel available to collect swabs on sampling days. Data from all 4 WMAs were combined for analysis.
All samples were collected, processed, and tested as previously described (8,16). Briefly, all samples (N = 5,363) were screened for AIV by AIV-matrix rRT-PCR, and virus isolation was performed on all 455 rRT-PCR–positive samples and 3,664 rRT-PCR–negative samples. All rRT-PCR–positive samples were screened for H5 and H7 subtypes by rRT-PCR by using the AgPath-ID One-Step RT-PCR Kit (Ambion, Inc., Austin, TX, USA) and an ABI 7500Fast Real-time PCR System (Applied Biosystems, Inc., Foster City, CA, USA) in a 25-μL final reaction volume. Primers and probes for the M and H5 (20,21) and H7 subtypes (21,22) were those previously described. All AIV isolates were submitted to the National Veterinary Services Laboratory (NVSL; Ames, IA, USA) for subtyping by hemagglutination (HA) and neuraminidase (NA) inhibition tests and screening for the presence of the N1 gene by rRT-PCR. Additionally, all H5 and H7 isolates were pathotyped at NVSL by analysis of the amino acid sequence at the HA protein cleavage site.
Statistical Analysis
We previously documented that prevalence estimates calculated on virus isolation following a positive AIV-matrix rRT-PCR provided results nearly identical to those obtained by performing both tests in parallel (16); for this reason, we calculated apparent prevalence by dividing the number of virus isolation–positive samples (after a positive rRT-PCR result) by the total number of samples collected and tested by rRT-PCR (16).
Pearson χ² analyses were used to evaluate differences in AIV-infected proportion by sex (drake vs. hen), age (adult vs. juvenile), species of waterfowl, and hunting season of collection (seasons 1, 2, 3). Fisher exact test was used instead of χ² when ≥1 cells were expected to have a frequency of ≤5. A p value <0.05 was considered significant. Wald 95% confidence intervals were calculated for all proportions of AIV infections (i.e., sex, age, species).
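The two calculations described above (apparent prevalence as virus isolation–positives over all rRT-PCR–tested samples, and a Wald 95% confidence interval for a proportion) can be made concrete with a short sketch. The function names are illustrative; the example uses the study's overall counts of 134 isolates from 5,363 samples:

```python
import math

def apparent_prevalence(vi_positive: int, total_tested: int) -> float:
    """Apparent prevalence: virus isolation-positive samples (after a
    positive rRT-PCR result) divided by all samples tested by rRT-PCR."""
    return vi_positive / total_tested

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple:
    """Wald 95% CI for a proportion: p +/- z * sqrt(p * (1 - p) / n),
    clipped to the [0, 1] interval."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half_width), min(1.0, p + half_width))

# Study-wide counts: 134 AIV isolates from 5,363 samples (~2.5%).
p = apparent_prevalence(134, 5363)
low, high = wald_ci(p, 5363)
```

The Wald interval is the simplest large-sample interval for a proportion; for prevalences this far from 0.5 with n in the thousands, it behaves adequately, though score-based intervals are preferred near the boundaries.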
Figure. Locations of state wildlife management areas where samples were collected from waterfowl for avian influenza virus surveillance, Texas mid–Gulf Coast, USA, September–January 2006–07, 2007–08, and 2008–09. Inset shows location of Texas (shaded).

A multivariate main effects logistic regression model was also constructed to assess differences in AIV detection by rRT-PCR by age, sex, and bird species. Species were categorized as blue-winged teal, green-winged teal, gadwall, northern shoveler, or other species. We chose the 4 species-specific categories because they represented the largest numbers of tested birds. Sample records with missing rRT-PCR results or age, sex, or species data were removed from this analysis. We analyzed all data using Intercooled Stata version 9 (Stata Corp., College Station, TX, USA).
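The main-effects logistic regression described above can be illustrated numerically. The authors fit the model in Intercooled Stata 9; the numpy-only re-implementation below, on simulated data with invented effect sizes, is purely a sketch of the technique (dummy coding against reference categories, odds ratios as exponentiated coefficients).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
hen = rng.integers(0, 2, n)            # sex: hen vs. drake (reference)
juvenile = rng.integers(0, 2, n)       # age: juvenile vs. adult (reference)
species = rng.integers(0, 5, n)        # 0 = "other species" (reference)
species_dummies = (species[:, None] == np.arange(1, 5)).astype(float)

# Design matrix: intercept + sex + age + 4 species indicators.
X = np.column_stack([np.ones(n), hen, juvenile, species_dummies])

# Simulated outcome with an invented, modest positive juvenile effect.
true_beta = np.array([-2.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_beta)))).astype(float)

def fit_logit(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(X @ beta)))
        grad = X.T @ (y - p)                          # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])     # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

odds_ratios = np.exp(fit_logit(X, y))  # odds_ratios[2] is the juvenile OR
```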
**Results**
**Sampling Overview**
A total of 5,363 cloacal swab samples were collected from 33 different potential host species, including a variety of waterfowl and other wetland-associated game birds (online Appendix Table 1, www.cdc.gov/EID/content/16/8/1224-appT1.htm; online Appendix Table 2, www.cdc.gov/EID/content/16/8/1224-appT2.htm; and online Appendix Table 3, www.cdc.gov/EID/content/16/8/1224-appT3.htm) during 3 consecutive hunting seasons (season 1: 2,171 birds; season 2: 2,424 birds; and season 3: 768 birds). Most samples (3,138 [58.5%]) were from teal (blue-winged [*Anas discors*] and green-winged [*A. crecca*]), followed by northern shovelers (*A. clypeata;* 703 [13.1%]), gadwall (*A. strepera;* 437 [8.2%]), and American wigeon (*A. americana;* 238 [4.4%]); the remaining samples (847 [15.8%]) were from a variety of other species (online Appendix Tables 1–3). Adults accounted for 2,759 (51.5%) samples; 1,504 (28.0%) were collected from juveniles, and 1,100 (20.5%) from birds of undetermined age. Additionally, 2,445 (45.6%) samples were from drakes, 2,262 (42.2%) from hens, and 656 (12.2%) from birds of undetermined sex.
**Subtype Prevalences**
Of 4,119 samples processed for virus isolation, influenza A viruses were isolated from 134. All 9 NA subtypes (N1–9) were isolated, whereas only 9 of the 16 different HA subtypes (H1–7, 10, and 11) were isolated. Thirty-two different HA and NA subtype combinations were identified (online Appendix Table 4, www.cdc.gov/EID/content/16/8/1224-appT4.htm), and for 8 isolates, either the HA (*n = 7*) or NA (*n = 1*) was not identified.
The most frequently identified HA subtypes during season 1 were H3 and H6 (8 [25.0%] and 9 [28.1%], respectively), whereas for season 2, H4 and H10 were predominant (26 [26.8%] and 17 [17.5%], respectively); the H4 subtype (4 [80.0%]) remained predominant in season 3. With respect to NA subtypes, N1 and N8 were most common in season 1 (8 [18.8%] and 10 [31.3%], respectively), whereas N6, N7, and N8 (19 [19.6%], 16 [16.5%], and 19 [19.6%], respectively) were predominant in season 2, with N6 and N8 (2 [40.0%] each) remaining predominant in season 3. The most frequent HA and NA subtype combinations identified during season 1 were subtype H3N8 (*n = 7*) and H6N1 (*n = 4*) viruses, whereas H4N6 (*n = 17*), H3N8 (*n = 9*), and H10N7 (*n = 9*) viruses were the most common subtype combinations identified in season 2, and H4N6 (*n = 2*) and H4N8 (*n = 2*) were most common in season 3 (online Appendix Table 4).
H7 subtype was identified by rRT-PCR during all 3 hunting seasons (*n = 2*, 28, and 2, respectively). Additionally, H5 subtype was detected by rRT-PCR for all 3 seasons (*n = 14*, 21, and 2, respectively). Yet, H5 viruses were isolated only during season 2, whereas H7 viruses were isolated during all 3 hunting seasons (Tables 1, 2). All H5 and H7 viruses were determined to be low-pathogenicity AIVs by analysis of the amino acid sequence at the HA protein cleavage site.
**Prevalence by Sex, Age, and Species**
Apparent AIV prevalence did not differ significantly between hens and drakes by rRT-PCR or virus isolation during any of the 3 hunting seasons or all seasons combined (online Appendix Table 5, www.cdc.gov/EID/content/16/8/1224-appT5.htm). Prevalence as determined by rRT-PCR and virus isolation differed significantly between juvenile and adult birds during the 3 hunting seasons and for all seasons combined (Table 3). However, when data were analyzed on the basis of samples for which both sex and age of the birds were known, results differed significantly between adult drakes and hens according to rRT-PCR results during season 1 and between juvenile hens and drakes by virus isolation during season 3 and for all 3 seasons combined (online Appendix Table 6, www.cdc.gov/EID/content/16/8/1224-appT6.htm).
To determine whether a species effect existed for age differences, we assessed apparent AIV prevalence by age for species for which >100 samples from adult birds and >100 samples from juvenile birds were tested (i.e., blue-winged teal, green-winged teal, gadwall, and northern shoveler; Table 4; online Appendix Table 7, www.cdc.gov/EID/content/16/8/1224-appT7.htm; and online Appendix Table 8, www.cdc.gov/EID/content/16/8/1224-appT8.htm). When data from all 3 hunting seasons were combined, significantly more juvenile than adult birds were positive for AIV by virus isolation for 3 of the predominant host species analyzed (blue-winged teal, green-winged teal, and northern shoveler); no significant difference was observed for gadwall (online Appendix Tables 7, 8). However, apparent AIV prevalence by rRT-PCR was significantly higher only for juvenile blue-winged teal and northern shovelers (online Appendix Tables 7, 8). According to multivariate logistic regression, rRT-PCR results were associated with age and species but not with sex (Table 4).
Blue-winged teal and northern shovelers had the greatest diversity in subtypes, followed by green-winged teal (Tables 1, 2; online Appendix Tables 1–3). Nine HA (H1–7, 10, and 11) and all 9 NA (N1–9) subtypes were identified in blue-winged teal; 8 HA (H2–7, 10, and 11) and 6 NA (N2, N3, and N6–9) subtypes were identified in northern shovelers; and 6 HA (H1, 5–7, 10, and 11) and 6 NA (N1–4, N7, and N9) subtypes were identified in green-winged teal.

Table 1. Subtypes of avian influenza viruses isolated in the fall (September and November) from selected species during 3 consecutive hunting seasons, Texas mid–Gulf Coast, USA, 2006–07, 2007–08, and 2008–09

| Species* | September† | | | November | | |
|-----------------------------------------------|----------|----------|----------|----------|----------|----------|
| | 2006 | 2007 | 2008 | 2006 | 2007 | 2008 |
| Fulvous whistling duck (*Dendrocygna bicolor*) | – | – | – | H6N1 | – | – |
| Mottled duck × mallard (*Anas fulvigula × A. platyrhynchos*) | – | – | – | – | H6N8 | – |
| Mottled duck (*A. fulvigula*) | – | – | – | H6N5 | – | – |
| Northern pintail (*A. acuta*) | – | – | – | – | H4N8 | – |
| Northern shoveler (*A. clypeata*) | – | – | – | H2N9, H3N8, H4N2, H4N6, H4N8 | H4N2, H5N2, H5N3, H7N2, H6N2, H10N2, H11N9 (2) | H4N8 |
| Teal, blue-winged (*A. discors*) | H1N1, H3N6, H3N8 (6) | H1N1 (2), H2N8, H3N4, H3N6, H3N8 (9), H4N1, H4N6 (17), H4N8 (6), H6N1, H7N1, H7N1/4, H7N7 (2), H10N7 (5) | H4N6, H4N8 | H2N9, H4N2, H4N6, H4N8 | H3N6, H5N2 (2), H5N3 (2), H7N4, H7N7 (3), H10N7, H11N9 (3) | H4N8 |
| Teal, green-winged (*A. crecca*) | H6N2 | H10N7 | – | H1N1 | H5N2, H7N1/4, H11N9 | – |
*Species selected by significance as determined by prevalence, uniqueness to the area, or native, nonmigratory species.
†Teal are the only species hunted during September on the Texas mid–Gulf coast.
**Discussion**
The Texas Gulf Coast provides winter habitat for ≈2–3 million ducks and ≥1 million geese (11). In this region, migratory waterfowl intermingle with resident wild species such as the mottled duck and are in close contact with poultry operations and humans, primarily hunters (15,17). Recently, we reported prevalence for the first multiyear study of AIV covering waterfowl wintering grounds along the Texas Gulf Coast (16), a previously understudied area. Unlike previous studies, we found little to no variation in apparent AIV prevalence by month within wintering seasons (September–January), with the exception of rRT-PCR during December 2007–January 2008 and virus isolation during 2005–06 and 2006–07 (16). Additionally, AIV prevalence, as determined by rRT-PCR or virus isolation, varied little among the 4 consecutive hunting seasons studied (September–January, 2005–06 through 2008–09), except for the 2007–08 season, during which overall AIV prevalence was higher than in the other 3 seasons by both rRT-PCR and virus isolation (16). Detection of AIV at low levels throughout the wintering season supports the contention that AIV can persist in wild-bird populations through continuous circulation in a proportion of the population (1). The low rate of virus isolation observed in the current study (29.9% of rRT-PCR–positive samples) is consistent with findings of other studies and is not surprising (2,23); real-time RT-PCR is considered more sensitive than virus isolation because it can detect genome fragments as well as viruses that do not grow in embryonated chicken eggs. Also consistent with other surveillance studies, no differences in AIV prevalence were noted by sex, and AIV was more prevalent in juvenile birds than in adults (1,7,23). The latter finding supports the assumption that immunologically immature (juvenile) birds are more susceptible to AIV infection than are mature (adult) birds (24,25).
Table 2. Subtypes of avian influenza viruses isolated in the winter (December–January) from selected species during 3 consecutive hunting seasons, Texas mid–Gulf Coast, USA, 2006–07, 2007–08, and 2008–09
| Species* | December | | | January | | |
|-----------------------------------------------|------------|----------|----------|----------|----------|----------|
| | 2006 | 2007 | 2008 | 2007 | 2008 | 2009 |
| Northern pintail (*Anas acuta*) | – | H10N3/7 | H4N6 | – | H10N3 | – |
| Northern shoveler (*A. clypeata*) | – | H5N2, H6N2, H10N7 | – | – | – | – |
| Teal, blue-winged (*A. discors*) | – | – | – | – | H10N3 (3)| – |
| Teal, green-winged (*A. crecca*) | H10N7, H11N3 | – | – | H7N3 | H7N3, H10N3 (2) | – |
*Species selected by significance as determined by prevalence, uniqueness to the area, or native, nonmigratory species.
The most commonly identified HA and NA subtype combinations during season 1 were H3N8 and H6N1; during season 2, H3N8 remained, but it was not detected during season 3. During season 2, H4N6 and H10N7, which have been reported on the Gulf Coast (8,13), were the predominant subtype combinations; H4N6 also was detected during season 3. The annual variations in AIV subtype prevalence observed in this study show the need for continued annual surveillance in domestic and migratory avian species, particularly in areas of high poultry and waterfowl density, such as the Texas Gulf Coast (15).
Outbreaks of H5 AIV have been documented previously in Texas. In 1993, an outbreak of H5N2 occurred in emus, in 2002 H5N3 was detected in chickens, and in 2004 highly pathogenic avian influenza virus (H5N2) was reported in a commercial poultry operation (26–28). We isolated subtype H5N2 and H5N3 viruses from apparently healthy free-roaming waterfowl only during season 2. Although no data are available on subtypes circulating in waterfowl on the Texas coast before the 3 outbreaks noted above, our data document the presence of these subtypes in migratory waterfowl near commercial poultry operations (15). Molecular characterization of the subtype H5N2 and H5N3 viruses we isolated should help clarify the relation between these viruses and those isolated from commercial species.
Our isolation of AIVs from resident (nonmigratory) mottled ducks and mottled duck/mallard hybrids suggests AIV transmission on the wintering ground and is consistent with previous reports (13). Mallards interbreed with mottled ducks, and the two are phylogenetically sister species (29). Before the isolation of H6 AIVs from a mottled duck/mallard hybrid in November 2006 and a mottled duck in November 2007, we isolated H6 subtypes from migratory teal and northern shovelers (September and November 2006 and 2007). Additional support for AIV transmission on wintering grounds included isolation of an H6 virus from a fulvous whistling duck, a species that breeds on the Texas–Louisiana coast and leaves during late summer to winter farther south in Mexico; nearly all whistling ducks are gone by late January (17). Although circulation of AIVs within fulvous whistling ducks, mottled ducks, and mottled duck hybrids throughout the year cannot be ruled out, such circulation seems unlikely. Hanson et al. were unable to isolate AIVs from mottled ducks collected on the Texas Gulf Coast during August (9); additionally, we did not detect AIV by rRT-PCR in samples collected during June–August 2007 (n = 155; S. Rollo et al., unpub. data), which suggests that these viruses are not readily circulating in these resident populations during summer. Genetic characterization of these H6 isolates will help determine whether they are related and will help clarify the role of waterfowl wintering grounds in the transmission and perpetuation of AIVs in nature. Further studies focused on AIV prevalence and immune responses to AIV in these resident populations also are needed to clarify the maintenance and transmission of AIVs on the wintering grounds.
Before singling out a particular species on which to focus surveillance efforts, one must consider the technique used for subject selection (hunter-harvest vs. live-capture) as well as the area under study (e.g., breeding grounds vs. wintering grounds; fresh water vs. salt water) and which populations are prevalent within the study areas. Mallards have become a primary species of interest not only because of their susceptibility to H5 and H7 subtypes but also because of their abundance and relative ease of capture (2,17,23,30–32). During our study, few mallard samples were collected because most Texas mallards winter in the playa lakes and sorghum fields of the Texas Panhandle, with few (<4%) wintering along the Gulf Coast (17). Our data indicate that mallards, although appropriate focal species for AIV monitoring in some portions of North America, are not as suitable as blue-winged teal or northern shovelers in other regions, such as the Texas mid–Gulf Coast (8,9,13). In many studies that identified mallards as a high-prevalence species for AIV infection, mallards were captured live for testing and dominated the samples (2,23). The few studies in which other species were more frequently sampled and tested positive for AIV were conducted on hunter-harvested waterfowl (8,13,33).

---

**Table 3. Comparison of apparent prevalence of avian influenza virus in hunter-harvested waterfowl, Texas mid–Gulf Coast, USA, September–January 2006–07, 2007–08, and 2008–09**

| Hunting season | Juvenile waterfowl† | | | Adult waterfowl† | | | p value | |
|----------------|------------|---------------------|-------------------|------------|---------------------|-------------------|---------|--------|
| | No. tested | rRT-PCR | VI | No. tested | rRT-PCR | VI | rRT-PCR | VI |
| 2006–07 | 518 | 8.30 (5.92–10.68) | 3.28 (1.75–4.82) | 1,081 | 5.46 (4.10–6.81) | 0.74 (0.23–1.25) | 0.029 | <0.001 |
| 2007–08 | 763 | 13.80 (11.30–16.20) | 5.50 (3.89–7.12) | 1,189 | 10.51 (8.77–12.20) | 3.28 (2.27–4.29) | 0.030 | 0.022 |
| 2008–09 | 222 | 8.56 (4.88–12.24) | 1.80 (0.49–4.55) | 489 | 4.70 (2.82–6.58) | 0.20 (0.01–1.13) | 0.043 | 0.035 |
| Total‡ | 1,503 | 11.10 (9.52–12.69) | 4.06 (3.06–5.06) | 2,759 | 1.74 (1.25–2.23) | 1.74 (1.25–2.23) | <0.001 | <0.001 |
* rRT-PCR, real-time reverse transcription–PCR; VI, virus isolation.
† Values for rRT-PCR and VI are apparent prevalence, % (95% confidence interval).
‡ Total = the 3 hunting seasons combined (September–January, 2006–07, 2007–08, and 2008–09).

---

**Table 4. Multivariate logistic regression model to identify variables associated with a positive real-time RT-PCR result, Texas mid–Gulf Coast, USA, 2006–07, 2007–08, and 2008–09**

| Variable | Odds ratio (95% CI) | p value |
|----------------|---------------------|---------|
| Sex | | |
| Drake | 1.0† | |
| Hen | 1.07 (0.859–1.320) | 0.558 |
| Age | | |
| Adult | 1.0† | |
| Juvenile | 1.45 (1.17–1.81) | **0.001** |
| Species | | |
| Other species | 1.0† | |
| Gadwall | 0.407 (0.120–0.825) | **0.013** |
| Northern shoveler | 1.51 (0.987–2.320) | 0.057 |
| Blue-winged teal | 2.18 (1.52–3.13) | **<0.001** |
| Green-winged teal | 1.12 (0.742–1.680) | 0.592 |
* Results for a total of 4,187 samples, collected during September–January for each season. RT-PCR, reverse transcription–PCR; CI, confidence interval. Boldface indicates significant result.
† Reference category.
Our study supports the consensus that dabbling ducks are more likely than diving ducks to be positive for AIV; however, as others have documented, not all dabbling ducks are equally likely to be AIV positive (2,23,33). We found blue-winged teal to be the species with the highest prevalence, followed by northern shoveler and green-winged teal. Gadwalls, also a dabbling duck from which we collected substantial numbers of samples, were the least likely to test positive for AIV. Blue-winged teal are generally the first ducks to fly south in the fall, first arriving on wintering grounds in September, and the last to pass through Texas in late February–March on their return north (17). They also make exceedingly long flights compared with other dabbling ducks between feeding and resting areas during migrations (17). On the other hand, gadwalls are short-distance migrants and migrate later, generally beginning their southward migration in early September and their return north starting in February (17). The physiologic demands of long-distance migration can suppress the immune system (34); thus, blue-winged teal might be more susceptible to infection than some other dabbling ducks because of their long-distance migration. More extensive studies are needed incorporating more ecologic factors such as food resources, body mass, and immune status to more fully understand how AIV persists in nature and why the prevalence of AIV is higher in particular species.
Although our samples were not collected probabilistically (i.e., the samples reflect hunters’ choices, as well as the relative abundance of each species), use of hunter-harvested waterfowl was convenient for obtaining large numbers of samples with which to estimate the prevalence of AIV subtypes carried by waterfowl on the Gulf Coast of Texas. In addition, because hunters have been identified as the human population most at risk for exposure to AIV (35) and antibodies to the H11 subtype have been identified in hunters and wildlife professionals (36), continued monitoring of AIV in waterfowl and in the humans exposed to them should provide useful information about the prevalence and significance of wild animal-to-human transmission.
AIV surveillance studies over time in the same region are critical, particularly in understudied areas. Although studies in areas of low AIV prevalence are inconvenient because of the large sample sizes required to isolate substantial numbers of AIVs, such surveys are critical to gain more knowledge of the ecology of influenza viruses. Our data contribute temporal information about AIV prevalence and subtype diversity for a historically understudied area of North America, the waterfowl wintering grounds of the Texas Gulf Coast.
**Acknowledgments**
We greatly appreciate the cooperation and patience of the waterfowl hunters of the Texas Gulf Coast who graciously allowed us to sample their harvested waterfowl. We thank everyone who assisted in sample collecting over the years. We also appreciate the help of biologists and technicians from the Texas Parks and Wildlife Department. For assistance with molecular testing, we thank the Animal Health Solutions Group at Ambion, Inc., for subtyping and pathotyping of the influenza isolates and the Avian Section in the Diagnostic Virology Laboratory at the NVSL. The work was completed in the laboratory of B. Lupiani at Texas A&M University.
This research was supported by the National Research Initiative of the US Department of Agriculture Cooperative State Research, Education, and Extension Service AICAP grant 2005-3560515388 (Z507201), awarded to B.L.
Ms Ferro is a PhD candidate in veterinary microbiology in the College of Veterinary Medicine and Biomedical Sciences at Texas A&M University. Her primary research interests include wildlife disease ecology, particularly pathogens at the exotic/wild–domestic animal interface, such as avian influenza virus, and diagnostics associated with viruses of veterinary importance.
**References**
1. Webster RG, Bean WJ, Gorman OT, Chambers TM, Kawaoka Y. Evolution and ecology of influenza A viruses. *Microbiol Rev*. 1992;56:152–79.
2. Dusek RJ, Bortner JB, DeLiberto TJ, Hoskins J, Franson JC, Bales BD, et al. Surveillance for high pathogenicity avian influenza virus in wild birds in the Pacific Flyway of the United States, 2006–2007. *Avian Dis*. 2009;53:222–30. DOI: 10.1637/8462-082908-Reg.1
3. Munster VJ, Wallensten A, Baas C, Rimmelzwaan GF, Schutten M, Olsen B, et al. Mallards and highly pathogenic avian influenza ancestral viruses, northern Europe. *Emerg Infect Dis*. 2005;11:1545–51.
4. Reperant LA, Rimmelzwaan GF, Kuiken T. Avian influenza viruses in mammals. *Rev Sci Tech*. 2009;28:137–59.
5. US Department of Agriculture. Wild Bird Plan: an early detection system for highly pathogenic H5N1 avian influenza in wild migratory birds. US Interagency Strategic Plan 2006 March 14 [cited 2009 Dec 1]. http://www.usda.gov/wps/portal/usdahome?contentid=true&contentid=2006/03/0094.xml
6. Capua I, Alexander DJ. The challenge of avian influenza to the veterinary community. Avian Pathol. 2006;35:189–205. DOI: 10.1080/03079450600717174
7. Krauss S, Walker D, Pryor SP, Niles L, Chenghong L, Hinshaw VS, et al. Influenza A viruses of migrating wild aquatic birds in North America. Vector Borne Zoonotic Dis. 2004;4:177–89. DOI: 10.1089/vbz.2004.4.177
8. Ferro PJ, El-Attrache J, Fang X, Rollo SN, Jester A, Merendino T, et al. Avian influenza surveillance in hunter-harvested waterfowl from the Gulf Coast of Texas (November 2005–January 2006). J Wildl Dis. 2008;44:434–9.
9. Hanson BA, Swayne DE, Senne DA, Lobpries DS, Hurst J, Stallknecht DE. Avian influenza viruses and paramyxoviruses in wintering and resident ducks in Texas. J Wildl Dis. 2005;41:624–8.
10. Kocan AA, Hinshaw VS, Daubney GA. Influenza A viruses isolated from migrating ducks in Oklahoma. J Wildl Dis. 1980;16:281–6.
11. Ducks Unlimited Texas. CARE. Conserving Agricultural Resources and the Environment. 2008 [cited 2009 Dec 7]. http://www.ducks.org/Page379.aspx
12. Stallknecht DE, Senne DA, Zwank PJ, Shane SM, Kearney MT. Avian paramyxoviruses from migrating and resident ducks in coastal Louisiana. J Wildl Dis. 1991;27:123–8.
13. Stallknecht DE, Shane SM, Zwank PJ, Senne DA, Kearney MT. Avian influenza viruses from migratory and resident ducks of coastal Louisiana. Avian Dis. 1990;34:398–405. DOI: 10.2307/1591427
14. U.S. Geological Survey National Wildlife Health Center. Highly pathogenic avian influenza early detection data system [cited 2009 Dec 1]. http://wildlifedisease.nwhc.usgs.gov/ai/index.jsp
15. Miller R. Analysis identifies areas, populations for Asian H5N1 HPAI surveillance. NAHSS Outlook Quarter Two 2007 [cited 2009 Dec 4]. http://www.usda.gov/wps/portal/utp/s_7_0_A/7_0_IOP?navid=SEARCH&mode=simple&q=ryan+millier&x=0&y=0&site=usda
16. Ferro PJ, Peterson MJ, Merendino T, Nelson M, Lupiani B. Comparison of real-time RT-PCR and virus isolation for estimating prevalence of avian influenza in hunter-harvested wild birds at waterfowl wintering grounds along the Texas mid–Gulf Coast (2005–2006 through 2008–2009). Avian Dis. 2010;54(Suppl):655–9. DOI: 10.1637/8810-040109-ResNote.1
17. Bellrose FC. Ducks, geese, and swans of North America, 2nd ed. Harrisburg (PA): Stackpole Books; 1978.
18. Tacha TC, Braun CE, eds. Migratory shore and upland game bird management in North America. Washington: International Association of Fish and Wildlife Agencies; 1994.
19. Braun CE, ed. Techniques for wildlife investigations and management. 6th ed. Bethesda (MD): The Wildlife Society by Port City Press; 2005.
20. Spackman E, Senne DA, Bulaga LL, Myers TJ, Perdue ML, Garber LP, et al. Development of real-time RT-PCR for the detection of avian influenza virus. Avian Dis. 2003;47(Suppl):1079–82. DOI: 10.1637/0005-2086-47.s3.1079
21. Spackman E, Senne DA, Myers TJ, Bulaga LL, Garber LP, Perdue ML, et al. Development of a real-time reverse transcriptase PCR assay for type A influenza virus and the avian H5 and H7 hemagglutinin subtypes. J Clin Microbiol. 2002;40:3256–60. DOI: 10.1128/JCM.40.9.3256-3260.2002
22. Spackman E, Ip HS, Suarez DL, Slemons RD, Stallknecht DE. Analytical validation of a real-time reverse transcription polymerase chain reaction test for Pan-American lineage H7 subtype avian influenza viruses. J Vet Diagn Invest. 2008;20:612–6.
23. Munster VJ, Baas C, Lexmond P, Waldenstrom J, Wallensten A, Fransson T, et al. Spatial, temporal, and species variation in prevalence of influenza A viruses in wild migratory birds. PLoS Pathog. 2007;3:e61. DOI: 10.1371/journal.ppat.0030061
24. Stallknecht DE, Brown JD. Wild birds and the epidemiology of avian influenza. J Wildl Dis. 2007;43:S15–20.
25. Stallknecht DE, Shane SM. Host range of avian influenza virus in free-living birds. Vet Res Commun. 1988;12:125–41. DOI: 10.1007/BF00362792
26. Lee CW, Senne DA, Linares JA, Woolcock PR, Stallknecht DE, Spackman E, et al. Characterization of recent H5 subtype avian influenza viruses from US poultry. Avian Pathol. 2004;33:288–97. DOI: 10.1080/0307945042000203407
27. Lee CW, Swayne DE, Linares JA, Senne DA, Suarez DL. H5N2 avian influenza outbreak in Texas in 2004: the first highly pathogenic strain in the United States in 20 years? J Virol. 2005;79:11412–21. DOI: 10.1128/JVI.79.17.11412-11421.2005
28. Pelzel AM, McCluskey BJ, Scott AE. Review of the highly pathogenic avian influenza outbreak in Texas, 2004. J Am Vet Med Assoc. 2006;228:1869–75. DOI: 10.2460/javma.228.12.1869
29. Omland KE. Character congruence between a molecular and a morphological phylogeny for dabbling ducks (Anas). Syst Biol. 1994;43:369–86.
30. Jourdain E, Gunnarsson G, Wahlgren J, Latorre-Margalef N, Brojer C, Sahlin S, et al. Influenza virus in a natural host, the mallard: experimental infection data. PLoS One. 2010;5:e8935. DOI: 10.1371/journal.pone.0008935
31. Olsen B, Munster VJ, Wallensten A, Waldenstrom J, Osterhaus AD, Fouchier RA. Global patterns of influenza A virus in wild birds. Science. 2006;312:384–8. DOI: 10.1126/science.1122438
32. Wallensten A, Munster VJ, Latorre-Margalef N, Brytting M, Elberg J, Fouchier RA, et al. Surveillance of influenza A virus in migratory waterfowl in northern Europe. Emerg Infect Dis. 2007;13:404–11. DOI: 10.3201/eid1303.061130
33. Siembieda J, Johnson C, Cardona C, Anchell NL, Dao N, Reisen W, et al. Influenza A viruses in wild birds of the Pacific Flyway, 2005–2008. Vector Borne Zoonotic Dis. 2010 Jan 8; [Epub ahead of print]. DOI: 10.1089/vbz.2009.0095
34. Weber TP, Stilianakis NI. Ecologic immunology of avian influenza (H5N1) in migratory birds. Emerg Infect Dis. 2007;13:1139–43.
35. Siembieda J, Johnson CK, Boyce W, Sandrock C, Cardona C. Risk for avian influenza virus exposure at human-wildlife interface. Emerg Infect Dis. 2008;14:1151–3. DOI: 10.3201/eid1407.080066
36. Gill JS, Webby R, Gilchrist MJ, Gray GC. Avian influenza among waterfowl hunters and wildlife professionals. Emerg Infect Dis. 2006;12:1284–6.
Address for correspondence: Blanca Lupiani, Department of Veterinary Pathobiology, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University, 4467 TAMU, College Station, TX 77843-4467, USA; email: email@example.com
All material published in Emerging Infectious Diseases is in the public domain and may be used and reprinted without special permission; proper citation, however, is required.
Introduction
Excessive use of alcohol ranks globally among the most significant risk factors for premature loss of health and mortality. According to official data, 3.3 million deaths worldwide were attributed to alcohol in 2012, representing 5.9% of all deaths. The problem is most pronounced in Europe, where consumption reaches the highest levels in the world (10.9 l of pure alcohol per capita vs 6.2 l globally) and deaths attributable to alcohol account for 13.3% of all deaths.\textsuperscript{1}
The largest decline in alcohol use was seen in Southern Europe from 1990 to 2010, followed by Central-Western and Western Europe. In Nordic countries, consumption remained fairly stable. However, consumption increased by almost 8% in Central and Eastern Europe (CEE) within the same period.\textsuperscript{2}
Adolescence is an important period for initiation and development of substance use, including alcohol. Social motives, such as identification with adult-like behaviour, getting one’s own way, resisting social norms, etc., prevail among the reasons for drinking alcohol.\textsuperscript{3,4} Adolescents may also perceive alcohol as a mediator to intensify contacts with peers and initiate new relationships.\textsuperscript{5} On the other hand, young people usually underestimate the health effects of alcohol (particularly those associated with long-term use). For this reason, monitoring the use of psychoactive substances among adolescents is of great importance for evaluating population health in this age group.
Besides personal characteristics, the overall social environment substantially determines alcohol use by adolescents,\textsuperscript{6} regardless of their family background (i.e., the drinking behaviour of their parents). A strictly implemented policy may significantly limit drinking among youngsters.\textsuperscript{7} Aside from restrictive measures limiting access to alcoholic beverages, pricing\textsuperscript{8} and marketing regulations\textsuperscript{9} play an important role in prevention at the population level.
Since the late 1990s, alcohol consumption among adolescents in most European countries has shown a declining trend, primarily among boys but also among girls.\textsuperscript{10} However, this trend was not apparent in CEE, and consumption increased in some countries of this region, particularly among girls, between 1998 and 2006.\textsuperscript{10–12} From 2002 to 2010, however, a decline became apparent in this part of the continent as well,\textsuperscript{13} indicating that the unfavourable trend in CEE has been broken and that development is approaching the situation seen earlier in Western Europe.
The aim of this study was to analyze changes in selected indicators of alcohol use (lifetime use, initiation of drinking at $\leq 13$ years, weekly use, beverage preferences, initiation of drunkenness at $\leq 13$ years and lifetime drunkenness) among adolescents in Slovakia from 2005 to 2014 using Health Behaviour in School Aged Children (HBSC) data. This analysis will contribute to understanding the development of alcohol consumption in the light of social changes taking place in Slovakia.
**Methods**
The HBSC study is an international, school-based cross-sectional study. Its standardized design makes it possible to create harmonized datasets appropriate for cross-country comparisons and for identifying changes over time. Data are collected through uniform anonymous questionnaires completed at schools. The questionnaires include mandatory modules of questions used in every participating country, and optional modules containing sets of questions based on the specific needs of individual countries.
The sample is created in accordance with the structure of the educational system in the given country and is stratified by region and type of school in order to obtain representative data on 11-, 13- and 15-year-old adolescents.
HBSC surveys were undertaken in Slovakia in the school years 2005/2006, 2009/2010 and 2013/2014 (i.e., May–June 2006, 2010 and 2014). Two-step sampling was used in keeping with the standardized research protocol.\textsuperscript{14} In the first step, participating schools were selected at random with probability proportional to size using an official list of all schools obtained from the Slovak Institute of Information and Prognosis for Education. The sample of schools was stratified by region (eight administrative self-governing regions) and type of school (elementary schools comprising 1st–9th grades and grammar schools comprising 6th–13th grades). In the second step, classes within the participating schools were selected at random for data collection. Parents were informed in advance about the study via the school administration and could opt out if they did not wish their child to participate. Participation in the study was fully voluntary and anonymous, with no explicit incentives provided. This approach provided samples that were proportionally representative of all areas and population subgroups at the nationwide level, thus eliminating possible bias caused by heterogeneity of the target population. Pupils from the 5th–9th grades were considered eligible for this study (i.e., adolescents aged 11–15 years), and only 11-, 13- and 15-year-old respondents were included in the analysis. Table 1 shows the basic characteristics of the samples obtained in the three waves of the survey. Dropouts were mainly due to children's absence because of illness or other personal reasons, and to refusal by parents or adolescents to take part in the study. No notable differences in response rate were observed between the selected schools.
This study analyzed HBSC data related to adolescents’ reports on lifetime experience of drinking alcohol, early initiation of drinking, weekly alcohol drinking, weekly drinking of certain types of beverages (beer, wine and spirits), early initiation of drunkenness and lifetime experience of drunkenness.
Lifetime experience of drinking alcohol was measured by the question, ‘On how many days (if any) have you drunk alcohol in your lifetime?’ Possible responses were ‘never’, ‘1–2 days’, ‘3–5 days’, ‘6–9 days’, ‘10–19 days’, ‘20–29 days’ and ‘30 days or more’. All answers except ‘never’ were considered as positive. This variable was only analyzed in 15-year-old respondents.
Early initiation of alcohol drinking was measured by the question, ‘At what age did you first drink alcohol?’ Possible responses were ‘never’, ‘11 years or less’, ‘12 years’, ‘13 years’, ‘14 years’, ‘15 years’ and ‘16 years or older’. The answers ‘11 years or less’, ‘12 years’ or ‘13 years’ were considered as positive. This variable was only analyzed in 15-year-old respondents.
Weekly alcohol drinking and weekly drinking of beer, wine and spirits were measured by the question, ‘At present, how often do you drink anything alcoholic, such as beer, wine or spirits?’ The following beverage types were stated: beer, wine, spirits, alcopops and other drinks. For each beverage type, possible responses were ‘every day’, ‘every week’, ‘every month’, ‘sometimes’ and ‘never’. An answer of at least ‘every day’ or ‘every week’ for at least one of the beverage types was considered as weekly drinking. An answer of ‘every day’ or
Table 1 – Basic characteristics of samples obtained in three waves of the Health Behaviour in School Aged Children survey in Slovakia.
| Year | Overall response rate | 11 years old, n | 13 years old, n | 15 years old, n |
|------|-----------------------|-----------------|-----------------|-----------------|
| 2006 | 85.6% | 1298 (608 boys) | 1327 (591 boys) | 1252 (591 boys) |
| 2010 | 79.5% | 1140 (528 boys) | 1600 (774 boys) | 1568 (771 boys) |
| 2014 | 78.8% | 1534 (776 boys) | 2162 (1035 boys) | 1549 (813 boys) |
‘every week’ for the respective type of beverage was considered as weekly drinking of beer, wine or spirits. Weekly drinking of particular beverages was only analyzed in 15-year-old respondents.
Early initiation of drunkenness was measured by the question, ‘At what age did you first get drunk?’ Possible responses were ‘never’, ‘11 years or less’, ‘12 years’, ‘13 years’, ‘14 years’, ‘15 years’ and ‘16 years or older’. The answers ‘11 years or less’, ‘12 years’ or ‘13 years’ were considered as positive. This variable was only analyzed in 15-year-old respondents.
Lifetime experience of drunkenness was measured by the question, ‘On how many days (if any) have you got drunk in your lifetime?’ Possible responses were ‘never’, ‘1–2 days’, ‘3–5 days’, ‘6–9 days’, ‘10–19 days’, ‘20–29 days’ and ‘30 days or more’. All answers except ‘never’ were considered as positive.
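The coding rules above can be sketched as a short recode. This is an illustrative Python sketch, not part of the HBSC protocol: the response labels follow the questionnaire wording quoted above, while the data layout and function names are assumptions.

```python
# Illustrative recoding of the alcohol-use indicators described above.
WEEKLY = {"every day", "every week"}

def weekly_drinking(responses):
    """Weekly drinking: at least 'every week' for at least one beverage type."""
    return any(r in WEEKLY for r in responses.values())

def early_initiation(age_answer):
    """Positive if the first drink (or first drunkenness) was at 13 or younger."""
    return age_answer in {"11 years or less", "12 years", "13 years"}

# A hypothetical pupil's answers, one per beverage type:
pupil = {"beer": "sometimes", "wine": "never", "spirits": "every week",
         "alcopops": "never", "other": "never"}
print(weekly_drinking(pupil))        # True: weekly spirits counts as weekly drinking
print(early_initiation("14 years"))  # False: first drink at 14 is not early initiation
```

The same `WEEKLY` test applied to a single beverage key gives the beverage-specific indicators (weekly beer, wine or spirits).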
The results are presented as percentages with the relevant 95% confidence intervals (CI). Differences between rates were considered to be significant if the 95% CI did not overlap.
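As a worked illustration of this decision rule, the Python sketch below computes 95% CIs for two prevalence estimates and applies the non-overlap criterion. The study does not state which CI method was used, so the Wald formula here is an assumption; the percentages and denominators are the boys' lifetime-drinking figures from the Results and Table 1, used purely for illustration.

```python
from math import sqrt

def wald_ci(p, n, z=1.959964):
    """Wald 95% confidence interval for a proportion p observed among n respondents."""
    half = z * sqrt(p * (1 - p) / n)
    return (p - half, p + half)

def nonoverlapping(ci_a, ci_b):
    """True if the two intervals do not overlap (the paper's significance rule)."""
    return ci_a[1] < ci_b[0] or ci_b[1] < ci_a[0]

# Lifetime drinking among 15-year-old boys: 86.9% of 591 (2006) vs 69.3% of 813 (2014).
ci_2006 = wald_ci(0.869, 591)
ci_2014 = wald_ci(0.693, 813)
print(ci_2006, ci_2014)
print(nonoverlapping(ci_2006, ci_2014))  # True: consistent with a significant decline
```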
**Results**
Lifetime alcohol drinking among 15-year-old respondents (Fig. 1) decreased significantly during the study period in both boys (from 86.9% to 69.3%) and girls (from 88.4% to 69.9%). The decrease was gradual in boys, whereas in girls a significant decline occurred between 2005/2006 and 2009/2010, after which the prevalence remained almost unchanged in 2013/2014. No notable sex differences were observed.
Initiation of drinking at ≤13 years of age as reported by 15-year-olds (Fig. 1) decreased substantially during the study period in both boys (from 61.0% to 39.5%) and girls (from 58.0% to 32.2%). A significant sex difference was only seen in 2009/2010 when boys prevailed over girls.
Weekly alcohol drinking (Fig. 2) declined notably during the study period, with the change holding for all age groups as well as for both boys and girls. Boys predominated over girls in each age group over the whole study period.
Regarding reports on drinking the most common types of alcoholic beverages among 15-year-old respondents (Fig. 3), the decline in beer consumption was significant among both boys (from 24.1% to 15.9%) and girls (from 9.4% to 5.6%) over the study period. Wine consumption only decreased in boys (from 8.5% to 4.1%). While the prevalence of positive answers regarding weekly drinking of spirits only differed slightly between 2005/2006 and 2013/2014, the percentage was notably higher in both boys and girls in 2009/2010.
Approximately one-third of the 15-year-old respondents reported positive answers regarding initiation of drunkenness at ≤13 years (Fig. 1), and the rates did not change significantly over the study period. Moreover, no remarkable sex differences were observed.
The prevalence of reports on lifetime drunkenness (Fig. 4) declined significantly after 2009/2010 in 13- and 15-year-old
Fig. 1 – Lifetime experience of alcohol drinking and early initiation of drinking and drunkenness: 15-year-old respondents reporting having drunk alcohol at least twice in their lifetime, and reporting first alcohol drinking and first drunkenness at age 13 or younger. HBSC Slovakia 2005/2006, 2009/2010, 2013/2014 (error bars represent 95% confidence intervals).
Fig. 2 – Weekly alcohol drinking: respondents reporting drinking alcohol at least once a week. HBSC Slovakia 2005/2006, 2009/2010, 2013/2014 (error bars represent 95% confidence intervals).
Fig. 3 – Weekly drinking of beer, wine and spirits: 15-year-old respondents reporting drinking the selected kind of alcohol at least once a week. HBSC Slovakia 2005/2006, 2009/2010, 2013/2014 (error bars represent 95% confidence intervals).
Fig. 4 – Lifetime drunkenness: respondents reporting having been drunk at least twice in their lifetime. HBSC Slovakia 2005/2006, 2009/2010, 2013/2014 (error bars represent 95% confidence intervals).
boys. In girls, the changes were insignificant, with the exception of 13-year-olds in whom prevalence declined over the study period.
**Discussion**
The HBSC results provide a valid and representative insight into the development of the epidemiological situation in Slovakia regarding the drinking behaviour of adolescents. The findings clearly indicate a decline in alcohol consumption among adolescents, both in indicators of experimenting with alcohol (lifetime drinking and initiation of drinking at $\leq 13$ years) and in the indicator of regular use (weekly drinking). For the indicators of binge drinking leading to drunkenness, the decline is less clear-cut. Particularly in girls, the decrease is only slight and insignificant. Moreover, early initiation of drunkenness at $\leq 13$ years remained virtually unchanged in both boys and girls. The findings also highlight changes in beverage preferences among adolescents. The above-mentioned overall decline in weekly drinking was mainly due to a decreased frequency of beer consumption. By contrast, the unchanged frequency of spirits consumption indicates a relative increase in their popularity among adolescents.
These findings are consistent to some extent with the official estimate of the World Health Organization,\textsuperscript{1} showing a downward trend in Slovakia in the age-standardized death rate attributable to selected alcohol causes (a decrease of 34% between 1992 and 2010, from 124.1 to 82.4 deaths per 100,000 population). The trend shown in the present results appears to continue the development across CEE indicated in previous analyses of international HBSC data.\textsuperscript{13} Moreover, a review of the rankings of alcohol use indicators in the international HBSC reports from the 1990s to the present\textsuperscript{15} shows Slovakia moving gradually from among the leading positions closer to the European average. Although this trend is also seen in neighbouring countries, according to the latest HBSC results, the decline is particularly apparent in Slovakia and the Czech Republic.\textsuperscript{16} This development offers some optimism regarding a possible reduction in the traditionally notable difference in alcohol-attributable loss of health between CEE and Western Europe.\textsuperscript{17,18}
The declining trend in alcohol use by adolescents in Slovakia may reflect progressive change in the social environment, particularly a decrease in social tolerance of excessive drinking and drunkenness, as well as an overall decline in the popularity of alcohol use; however, further research is needed to test this hypothesis. Such changes are attributable, at least in part, to legislative changes and improvement in their enforcement. For example, in 2009, Act No. 219/1996 Coll. on Protection against Alcohol Abuse was amended (Act No. 214/2009 Coll.), making it more effective. Moreover, from 1 March 2010, the excise tax on spirits was increased by 15% (Act No. 474/2009 Coll.). According to official data from the Statistical Office of the Slovak Republic, the consumer price index for alcohol rose to 139.3% of its 2000 level by 2014.\textsuperscript{19} The effect of pricing was amplified by the 3.8% decrease in the average real wage between 2010 and 2011 and its stagnation over the following two years.\textsuperscript{19,20} Act No. 313/2011 Coll., which changed and amended Act No. 8/2009 Coll. on Road Traffic, reclassified driving under the influence of alcohol (blood alcohol concentration $>1 \text{ g/kg}$) from an offence to a criminal act and generally specified stricter sentences for traffic offences committed under the influence of addictive substances. Moreover, on 3 July 2013, the Government of the Slovak Republic approved a strategy for state health policies based on official documents of the World Health Organization,\textsuperscript{21} in which alcohol control is considered one of the main priorities of public health. Such positive changes in the social environment can have a positive influence on the behaviour of adolescents, regardless of family background.\textsuperscript{6}
However, despite the above-mentioned positive changes, some aspects of alcohol use by adolescents still raise concerns. For example, the findings indicate a disappearing trend in traditional sex differences, particularly in indicators of drunkenness. This corresponds with the relative increase of popularity of spirits among both boys and girls (i.e., no significant decrease, unlike wine and beer). Such a preference is particularly associated with binge drinking and drunkenness.\textsuperscript{22,23} Binge drinking leading to drunkenness among adolescent girls is currently a topical problem across many European countries. The situation found in Slovakia therefore reflects the overall development in Europe and deserves appropriate attention.
These findings demonstrate the development of alcohol use by adolescents in Slovakia, which is undergoing a social and economic transformation process typical of CEE countries. Therefore, they contribute to overall understanding of alcohol control in adolescents at population level.
This study has a few limitations. The results are based on self-reports of respondents, and the prevalence data may vary, to some extent, from the actual situation.\textsuperscript{24} However, as standardized uniform methods were used in each survey, the sensitivity and specificity of the results have remained the same, and differences found over time should be considered as valid findings reflecting actual development in the country. Moreover, the sampling method used (stratification by region and type of school, as well as selection with probability proportional to size) provided representative data reflecting the actual epidemiological situation on a nation-wide level.
Finally, the study findings suggest the following implications for practice.
- The development of current alcohol policy in Slovakia seems to be on a good path. However, there is still room for improvement in the pricing of alcoholic beverages, namely spirits, and in more effective enforcement of the legislative norms regulating availability and restricting sales to minors.\textsuperscript{8} As reported by Esser and Jernigan,\textsuperscript{9} alcohol marketing regulation in Slovakia is at an average level compared with other European countries, so there is still space to make policies more effective.
- Binge drinking, the decline in which is not as notable as seen in other aspects of alcohol use, needs attention. Drinking patterns, especially drinking leading to intoxication, play an important role in alcohol-related harms in young people.\textsuperscript{25} Therefore, binge drinking among
adolescents should be considered as a special issue in preventive programmes and campaigns.
- The development of the situation among girls indicates a need for attention. As seen in numerous countries in Western Europe, the gradual disappearance of the traditional predominance of males in alcohol drinking, including binge drinking, should be expected in Slovakia. This should be taken into consideration in preventive programmes in schools and communities, as well as in media campaigns.
**Author statements**
**Acknowledgements**
The authors wish to extend their thanks to the World Health Organization Country Office in Slovakia, namely Dr. Darina Sedláková, for substantial support of the HBSC project in Slovakia.
**Ethical approval**
The study was approved by the Ethics Committee of the Faculty of Medicine, P.J. Safarik University in Kosice.
**Funding**
The study was partially supported by the Research and Development Support Agency under contract no. APVV 0032-11 and by the Scientific Grant Agency of the Ministry of Education, Science, Research and Sport of the Slovak Republic and the Slovak Academy of Science (reg. no. 1/0981/15).
**Competing interests**
None declared.
**References**
1. World Health Organization. *Global status report on alcohol and health*. Geneva: WHO; 2014.
2. World Health Organization. *Status report on alcohol and health in 35 European countries*. Copenhagen: WHO Regional Office for Europe; 2013.
3. Kuntsche E, Knibbe R, Gmel G, Engels R. Who drinks and why? A review of socio-demographic, personality, and contextual issues behind the drinking motives in young people. *Addict Behav* 2006;31:1844–57.
4. Kuntsche E, Gabhainn SN, Roberts C, Windlin B, Vieno A, Bendtsen P, et al. Drinking motives and links to alcohol use in 13 European countries. *J Stud Alcohol Drugs* 2014;75:428–37.
5. Engels RCME, ter Bogt TF. Influences of risk behaviours on the quality of peer relations in adolescence. *J Youth Adolesc* 2001;30:675–95.
6. Bendtsen P, Damsgaard MT, Tolstrup JS, Erbsøll AK, Holstein BE. Adolescent alcohol use reflects community-level alcohol consumption irrespective of parental drinking. *J Adolesc Health* 2013;53:368–73.
7. Simons-Morton B, Pickett W, Boyce W, ter Bogt TF, Vollebergh W. Cross-national comparison of adolescent drinking and cannabis use in the United States, Canada, and The Netherlands. *Int J Drug Policy* 2010;21:64–9.
8. Purshouse RC, Meier PS, Brennan A, Taylor KB, Rafia R. Estimated effect of alcohol pricing policies on health and health economic outcomes in England: an epidemiological model. *Lancet* 2010;375:1355–64.
9. Esser MB, Jernigan DH. Assessing restrictiveness of national alcohol marketing policies. *Alcohol Alcohol* 2014;49:557–62.
10. Simons-Morton BG, Farhat T, ter Bogt TF, Hublet A, Kuntsche E, Nic Gabhainn S, et al. Gender specific trends in alcohol use: cross-cultural comparisons from 1998 to 2006 in 24 countries and regions. *Int J Public Health* 2009;54(Suppl. 2):199–208.
11. Kuntsche E, Kuntsche S, Knibbe R, Simons-Morton B, Farhat T, Hublet A, et al. Cultural and gender convergence in adolescent drunkenness: evidence from 23 European and North American countries. *Arch Pediatr Adolesc Med* 2011;165:152–8.
12. Zaborskis A, Sumskas L, Maser M, Pudule I. Trends in drinking habits among adolescents in the Baltic countries over the period of transition: HBSC survey results, 1993–2002. *BMC Public Health* 2006;6:67.
13. Looze MD, Raaijmakers Q, Bogt TT, Bendtsen P, Farhat T, Ferreira M, et al. Decreases in adolescent weekly alcohol use in Europe and North America: evidence from 28 countries from 2002 to 2010. *Eur J Public Health* 2015;25(Suppl. 2):69–72.
14. HBSC survey methods. Available at: [http://www.hbsc.org/methods/index.html](http://www.hbsc.org/methods/index.html); 2016 (last accessed 9 January 2016).
15. HBSC Publications. *International reports*. HBSC Publications. Available at: [http://www.hbsc.org/publications/international/](http://www.hbsc.org/publications/international/); 2016 (last accessed 9 January 2016).
16. Inchley J, Currie D, Young T, Samdal O, Torsheim T, Augustson L, et al., editors. *Growing up unequal: gender and socioeconomic differences in young people’s health and well-being. Health Behaviour in School-Aged Children (HBSC) Study: international Report from the 2013/2014 survey*. Copenhagen: World Health Organization; 2016.
17. Zatonski W, Manczuk M, Sulkowska U, HEM Project Team. *Closing the health gap in European Union*. Warsaw: Cancer Epidemiology and Prevention Division, the Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology; 2008.
18. Rehm J, Shield KD, Rehm MX, Gmel G, Frick U. *Alcohol consumption, alcohol dependence and attributable burden of disease in Europe: potential gains from effective interventions for alcohol dependence*. Toronto: Centre for Addiction and Mental Health; 2012.
19. Statistical Office of the Slovak Republic. Available at: [www.statistics.sk](http://www.statistics.sk); 2016 (last accessed 23 May 2016).
20. European Trade Union Institute. Wage development infographic. Brussels: European Trade Union Institute. Available at: [https://www.etui.org/Topics/Crisis-austerity-alternatives/Wage-development-infographic/](https://www.etui.org/Topics/Crisis-austerity-alternatives/Wage-development-infographic/); 2016 (last accessed 23 May 2016).
21. World Health Organization. *Global strategy to reduce the harmful use of alcohol*. Geneva: WHO; 2010.
22. Kuntsche E, Knibbe R, Gmel G, Engels R. ‘I drink spirits to get drunk and block out my problems…’ beverage preference, drinking motives and alcohol use in adolescence. *Alcohol Alcohol* 2006;41:566–73.
23. Naimi TS, Siegel M, DeJong W, O'Doherty C, Jernigan D. Beverage- and brand-specific binge alcohol consumption among underage youth in the U.S. *J Subst Use* 2015;20:333–9.
24. Lintonen T, Ahlström S, Metsö L. The reliability of self-reported drinking in adolescence. *Alcohol Alcohol* 2004;39:362–8.
25. Bye EK, Rossow I. The impact of drinking pattern on alcohol-related violence among adolescents: an international comparative analysis. *Drug Alcohol Rev* 2010;29:131–7.
Generalized power calculations for generalized linear models and more
Roger Newson
King's College London, UK
firstname.lastname@example.org
Abstract. The `powercal` package can compute any one of the 5 quantities involved in power calculations from the other 4. These quantities are power, significance level, detectable difference, sample number, and the standard deviation (SD) of the influence function, which is equal to the standard error multiplied by the square root of the sample number. `powercal` can take arbitrary expressions (involving constants and/or scalars and/or variables) as input, and calculates the output as a new variable. The user can therefore plot input variables against output variables, and this often communicates the tradeoffs involved better than a point calculation as output by the `sampsi` command. General formulas are given for calculating the SD of the influence function when the detectable difference is a linear combination of link functions of subpopulation means for an outcome variable distributed according to a generalized linear model (GLM). This general case includes a very broad range of special cases, where the parameters to be estimated are differences between subpopulation proportions, arithmetic means and algebraic means, or ratios between subpopulation proportions, arithmetic means, geometric means and odds. However, `powercal` is not limited to GLMs, and can even be used with rank methods.
Keywords: st0001, power, alpha, significance level, detectable difference, detectable ratio, sample number, standard deviation, influence function, sample design, generalized linear model, proportion, arithmetic mean, algebraic mean, geometric mean, odds
\section{Introduction}
When statisticians are not making their living producing confidence intervals and $p$-values, they are often producing power calculations, or urging their colleagues to involve them at the design stage, so that they can produce power calculations. The traditional tool for doing this in Stata is `sampsi`, which has several limitations. First, `sampsi` can only output power and sample size, and requires the detectable difference and desired significance level to be input. Second, it is only designed to output power and sample size for a limited range of parameters (differences between subpopulation means and proportions) for a limited range of designs (sampling in parallel from two or fewer populations). Third, `sampsi` outputs only point calculations, and does not produce plotted power curves, which often communicate the tradeoffs involved better than point calculations.
Because of these limitations, I wrote the `powercal` package, which is a low-level
programming tool for calculating any of the 5 quantities involved in power calculations. These quantities are power, significance level, detectable difference, number of sample units, and the standard deviation (SD) of the per-unit influence function, defined as the standard error multiplied by the square root of the number of sampling units. Each one of these 5 quantities can be calculated as output from the other 4 as input. The input quantities can be specified by the user as expressions involving constants and/or scalars and/or variables, and the output quantity is calculated as a new variable. The user can therefore list and plot the input and output variables used by \texttt{powercal}. Detectable differences are defined very broadly, and may be logarithms of ratio parameters or other transformed parameters. Sample units are also defined very broadly, and each unit may be a cluster, or a set of primary units sampled in a defined ratio from subpopulations.
\texttt{powercal} is therefore a very comprehensive package. The price of its general usefulness is that the user may need to know some formulas, especially to calculate the SD of the influence function. However, this article also gives a guide to the derivation of such formulas. These usually follow a standard pattern, especially if the differences to be estimated are linear combinations of parameters from generalized linear models (GLMs). However, the usefulness of \texttt{powercal} is not limited to GLMs, and extends to other statistics for which a Central Limit Theorem applies, including many rank statistics.
\section{The \texttt{powercal} package}
\subsection{Syntax}
\begin{verbatim}
powercal newvarname [if exp] [in range] , [ nunit(expression_1)
    power(expression_2) alpha(expression_3) delta(expression_4)
    sdinf(expression_5) tdf(expression_6) noceiling float ]
\end{verbatim}
\subsection{Description}
\texttt{powercal} performs generalized power calculations, storing the result in a new variable with a name specified by \textit{newvarname}. All except one of the expression options \texttt{nunit()}, \texttt{power()}, \texttt{alpha()}, \texttt{delta()} and \texttt{sdinf()} must be specified. The single unspecified option in this list specifies whether the output variable is the number of sampling units, power, alpha (significance level), delta (difference in parameter value to be detected), or the standard deviation (SD) of the influence function. Any of these 5 quantities can be calculated from the other 4. \texttt{powercal} can be used to calculate any of these quantities, assuming that we are testing a hypothesis that a parameter is zero, and that the true value is given by \texttt{delta()}, and that the sample statistic is distributed around the population parameter in such a way that the pivotal quantity
\[ PQ = \sqrt{nunits} \times (\text{delta}/\text{sdinf}) \]
has a standard Normal distribution (if \texttt{tdf()} is not specified) or a $t$-distribution with \texttt{tdf()} degrees of freedom (if \texttt{tdf()} is specified). The formulas used by \texttt{powercal} define power as the probability of detecting a difference in the right direction, using a two-tailed test.
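To make the relationship among the five quantities concrete, here is an illustrative re-implementation of the two most common directions of the calculation (solving for the number of units, and for power) in the Normal case. \texttt{powercal} itself is a Stata command; this Python sketch and its function names are mine, not part of the package, and it omits the small contribution of the wrong-direction tail.

```python
from math import ceil, sqrt
from statistics import NormalDist

N = NormalDist()  # standard Normal distribution

def n_units(power, alpha, delta, sdinf):
    """Number of sampling units needed (Normal case, two-tailed test),
    rounded up to a whole number of units as powercal does by default."""
    z = N.inv_cdf(1 - alpha / 2) + N.inv_cdf(power)
    return ceil((z * sdinf / delta) ** 2)

def power_of(nunits, alpha, delta, sdinf):
    """Power to detect delta in the right direction with nunits units."""
    return N.cdf(sqrt(nunits) * delta / sdinf - N.inv_cdf(1 - alpha / 2))

n = n_units(power=0.8, alpha=0.05, delta=0.5, sdinf=1.0)
print(n, power_of(n, 0.05, 0.5, 1.0))  # 32 units; achieved power just above 0.8
```

Computing `power_of` over a range of `n` values is the vectorized use case that \texttt{powercal} supports by taking variables as input and producing a new variable as output, which can then be plotted as a power curve.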
\subsection{Options}
\texttt{nunit(expression\_1)} gives an expression whose value is the number of independent sampling units. Sampling units are defined very generally. For instance, in an experiment involving equal-sized samples of individuals from Population $A$ and Population $B$, a sampling unit might be a pair of sampled individuals, one from each population. Similarly, in a case-control study with 4 controls per case, a sampling unit might be a case together with 4 controls.
\texttt{power(expression\_2)} gives an expression whose value is the power to detect a difference specified by the \texttt{delta()} option (see below). The power is defined as the probability that the sample difference is in the correct direction, and also large enough to be significant, using a 2-tailed test, at the level specified by the \texttt{alpha()} option (see below).
\texttt{alpha(expression\_3)} gives an expression whose value is the size, or significance level, of the statistical test (in units of probability, not percentage).
\texttt{delta(expression\_4)} gives an expression whose value is the true population difference to be detected. This difference is assumed to be positive. Therefore, if the user wishes to detect a negative difference, then s/he should specify an expression equal to minus that difference. The difference may be the log of a ratio parameter, such as an odds ratio, rate ratio, risk ratio or ratio of geometric means.
\texttt{sdinf(expression\_5)} gives an expression whose value is the SD of the influence function. That is to say, it is an expression equal to the expected standard error of the sample difference multiplied by the square root of the number of sampling units, where sampling units are defined generally, as specified in the option \texttt{nunit()}. In the simple case of a paired $t$-test, \texttt{sdinf()} is the SD of the paired differences. More generally, \texttt{sdinf()} can be defined by calculating a standard error for a particular number of units, from a pilot study, from a simulation or from a formula, and multiplying this standard error by the square root of the number of units in the pilot study, simulation or formula.
\texttt{tdf(expression\_6)} gives an expression whose value is the degrees of freedom of the $t$-distribution to be assumed for the pivotal quantity \texttt{PQ} specified above. The degrees of freedom expression is not necessarily integer-valued. If \texttt{tdf()} is absent, then \texttt{PQ} is assumed to follow a standard Normal distribution.
\texttt{noceiling} specifies that, if the output variable specified by \textit{newvarname} is a number of units, then it will not be rounded up to the lowest integer no less than itself (as calculated by the Stata 8 \texttt{ceil()} function). This option can be useful if the output variable is intended to specify an amount of exposure, such as a number of
person-years, and the input \texttt{sdinf()} expression specifies a standard deviation of the influence function per unit exposure. If \texttt{noceiling} is not specified, and \texttt{power()}, \texttt{alpha()}, \texttt{delta()} and \texttt{sdinf()} are specified, then \texttt{powercal} rounds up the output variable, so that it contains a whole number of units.
\texttt{float} specifies that the output variable will have a storage type no higher than \texttt{float}. If \texttt{float} is not specified, then \texttt{powercal} creates the output variable with storage type \texttt{double}. Whether or not \texttt{float} is specified, \texttt{powercal} compresses the output variable as much as possible without loss of precision. (See help for \texttt{compress}.)
\section{Methods and formulas}
Generalized power and sample size calculation formulas are based on the Central Limit Theorem applied to influence functions. Suppose that a sequence of (scalar or vector) random variables $\{X_i\}$ is sampled independently from a common (univariate or multivariate) population distribution, and suppose that $\theta(F)$ is a (scalar or vector) parameter, defined from the set of candidate (univariate or multivariate) cumulative distribution functions $F$ that might apply to the $X_i$. Denote by $\hat{F}_n$ the sample cumulative distribution function, based on the first $n$ of the $X_i$, and define $\hat{\theta}_n = \theta(\hat{F}_n)$ to be a sample estimator of $\theta(F_0)$, where $F_0$ is the true population cumulative distribution function of the $X_i$. An influence function $\Upsilon(X; \theta, F)$ is a function, defined for each possible $X$-value, parameter value and cumulative distribution function, and having the properties that
$$E[\Upsilon(X_i; \theta(F_0), F_0)] = 0 \quad (1)$$
and
$$\hat{\theta}_n = \theta(F_0) + n^{-1} \sum_{i=1}^{n} \Upsilon(X_i; \theta(F_0), F_0) + o_p(n^{-1/2}) \quad (2)$$
where $E[.]$ denotes expectation, and $o_p(n^{-1/2})$ is a term having the property that $o_p(n^{-1/2})/n^{-1/2}$ converges in probability to zero. Therefore, in words, the sample statistic is equal to the population parameter, plus the sample mean of the population influence function, plus a third term, which is negligible if the sample size is sufficiently large. (In the simplest case, where the $X_i$ are scalar random variables, $\theta$ is their population mean and $\hat{\theta}_n$ is the sample mean for the first $n$ of the $X_i$, the influence function is $\Upsilon(X; \theta, F) = X - \theta$.)
Influence functions with properties (1) and (2) exist for a wide range of parameters, including those estimated by maximum likelihood (whether or not the likelihood function is correctly specified). They are the reason why the Central Limit Theorem can be generalized from sample means to more general sample statistics. More details about the theory, and more rigorous definitions of influence functions, can be found in Hampel (1974), Hampel et al. (1986) and Huber (1981). However, for power calculation purposes, the main consequences of properties (1) and (2) are that, for a wide range of
parameter estimates $\hat{\theta}_n$, the quantity
$$Z_n = \frac{n^{1/2}}{\sigma} \left[ \hat{\theta}_n - \theta(F_0) \right] = \left[ \hat{\theta}_n - \theta(F_0) \right] / \text{SE}(\hat{\theta}_n)$$ \hspace{1cm} (3)
has an asymptotic standard Normal distribution, where
$$\sigma = E \left[ \Upsilon(X_i; \theta(F_0), F_0)^2 \right]^{1/2}$$ \hspace{1cm} (4)
is the population standard deviation (SD) of the population influence function, and
$$\text{SE}(\hat{\theta}_n) = \sigma / \sqrt{n}$$ \hspace{1cm} (5)
is known as the asymptotic standard error. In the simplest case of estimating the population mean of scalar $X_i$ by the sample mean, $\sigma$ is simply the population SD of the $X_i$. However, in the more general case, if we have a formula or estimate for $\text{SE}(\hat{\theta}_n)$ for known $n$, then we can multiply that formula or estimate by $\sqrt{n}$ to derive a formula or estimate for $\sigma$.
In practice, when calculating confidence intervals and $p$-values, we usually estimate $\sigma$ with a consistent estimator $\hat{\sigma}_n$, calculated from the first $n$ of the $X_i$, and calculate an estimated standard error $\hat{\text{SE}}(\hat{\theta}_n) = \hat{\sigma}_n / \sqrt{n}$, and then the quantity
$$\hat{Z}_n = \frac{n^{1/2}}{\hat{\sigma}_n} \left[ \hat{\theta}_n - \theta(F_0) \right] = \left[ \hat{\theta}_n - \theta(F_0) \right] / \hat{\text{SE}}(\hat{\theta}_n)$$ \hspace{1cm} (6)
is a consistent estimator of $Z_n$ and has an asymptotic standard Normal distribution. Sometimes, the distribution of $\hat{Z}_n$ for finite $n$ can be approximated better by a $t$-distribution with finite degrees of freedom, which may or may not be an integer.
Most power and sample size calculations aim to calculate power and sample size to detect a non-zero value for a population difference parameter $\delta$, estimated by a sample difference statistic $\hat{\delta}$, by showing that the confidence limits for the population $\delta$ exclude zero. (Note that a difference may be a log ratio or other difference between parameter values transformed by a Normalizing and/or variance-stabilizing transformation.) In the following formulas, we will assume that a significance threshold $\alpha$ is used to define $100(1 - \alpha)\%$ confidence intervals, or to reject the null hypothesis $\delta = 0$ with $p \leq \alpha$. If the number of sampling units is $n$ and the SD of the influence function is $\sigma$, then the standard error of $\hat{\delta}$ is $\text{SE}(\hat{\delta}) = \sigma / \sqrt{n}$, and the pivotal quantity
$$Z = (\hat{\delta} - \delta) / \text{SE}(\hat{\delta}) = n^{1/2}(\hat{\delta} - \delta) / \sigma$$ \hspace{1cm} (7)
is assumed to be distributed with a cumulative distribution function $G(\cdot)$ such that, for any $z$,
$$G(z) = \Pr(Z \leq z) = \Pr(Z < z) = 1 - G(-z)$$ \hspace{1cm} (8)
The first equality is a definition, the second equality specifies a continuous distribution, and the third equality specifies that the distribution is symmetrical around zero. These
conditions hold whether $G(\cdot)$ specifies a standard Normal distribution or a central $t$-distribution. If $G^{-1}(\cdot)$ is the inverse of $G(\cdot)$, then a 100$(1 - \alpha)\%$ confidence interval for $\delta$ is defined (approximately) by $\hat{\delta} \pm G^{-1}(1 - \alpha/2) \times \text{SE}(\hat{\delta})$, and the null hypothesis $\delta = 0$ is rejected in a positive direction by a two-tailed test at $p \leq \alpha$ if and only if $Z \geq G^{-1}(1 - \alpha/2)$. (We are assuming that, if the standard error is estimated, then it is estimated well, so that the $\hat{Z}_n$ of (6) is a good approximation to the $Z_n$ of (3).) If the power to detect a positive difference $\delta$ is no less than a required level $\gamma$, then it follows that
\[
\begin{align*}
\gamma & \leq \Pr \left[ \frac{\hat{\delta}}{\text{SE}(\hat{\delta})} \geq G^{-1}(1 - \alpha/2) \right] \\
& = \Pr \left[ \frac{\hat{\delta}}{\text{SE}(\hat{\delta})} - \delta/\text{SE}(\hat{\delta}) \geq G^{-1}(1 - \alpha/2) - \delta/\text{SE}(\hat{\delta}) \right] \\
& = 1 - G \left[ G^{-1}(1 - \alpha/2) - \delta/\text{SE}(\hat{\delta}) \right] \\
& = G \left[ \delta/\text{SE}(\hat{\delta}) - G^{-1}(1 - \alpha/2) \right]
\end{align*}
\]
(9)
The first inequality is a requirement, the first equality follows trivially, the second equality follows from the fact that $G(\cdot)$ specifies a continuous distribution for $Z$, and the third equality follows from the symmetry of that distribution around zero. Applying $G^{-1}(\cdot)$ to both sides of the inequality (9), we have
\[
G^{-1}(\gamma) \leq \delta/\text{SE}(\hat{\delta}) - G^{-1}(1 - \alpha/2)
\]
(10)
or, equivalently,
\[
\frac{\delta \sqrt{n}}{\sigma} \geq G^{-1}(\gamma) + G^{-1}(1 - \alpha/2)
\]
(11)
The inequality (11) expresses the power requirements elegantly and briefly, as the left hand side is increasing in $\delta$ and $n$ and decreasing in $\sigma$, and the right hand side is the sum of two terms, the first increasing in $\gamma$ and the second decreasing in $\alpha$. We can therefore rearrange (11) to derive a minimum or maximum value for each of the 5 parameters $\gamma$, $\alpha$, $\delta$, $\sigma$ and $n$, compatible with the power requirements (9) and with given values of the other 4 parameters. These minima or maxima may or may not exist for $\gamma$ and $\alpha$ in the interval $(0, 1)$ and positive $\delta$, $\sigma$ and $n$, because the inequality (11) may be satisfied nowhere or everywhere in the open interval parameter range. If we define the quantities
\[
R = G^{-1}(\gamma) + G^{-1}(1 - \alpha/2) \quad \text{and} \quad S = \delta \sqrt{n}/\sigma - G^{-1}(\gamma)
\]
(12)
then the minima and maxima are defined as follows:
\[
\begin{align*}
\gamma_{\text{max}} & = G \left[ \delta \sqrt{n}/\sigma - G^{-1}(1 - \alpha/2) \right] & \\
\alpha_{\text{min}} & = 2G(-S) & (\text{if } S > 0) \\
\delta_{\text{min}} & = \frac{\sigma}{\sqrt{n}} R & (\text{if } R > 0) \\
\sigma_{\text{max}} & = \delta \sqrt{n}/R & (\text{if } R > 0) \\
n_{\text{min}} & = \left\lceil \left( \frac{\sigma}{\delta} R \right)^2 \right\rceil & (\text{if } R > 0)
\end{align*}
\]
(13)
The operator $\lceil x \rceil$ represents the minimum integer no less than $x$, as calculated by the $\text{ceil}()$ function in Stata 8. This operator is not applied if the user specifies the
noceiling option. The inequality (11) is not satisfied by any $\alpha \in (0, 1)$ if $S \leq 0$, and is satisfied by all positive $\delta$, $\sigma$ and $n$ if $R \leq 0$. Note that $R \leq 0$ can only be true if $\gamma \leq 1/2$, and that $S \leq 0$ can only be true if $\delta$ represents fewer standard errors than $G^{-1}(\gamma)$. In practice, we usually aim for more than 50% power to detect an interesting positive population difference, and we usually choose a sample size large enough to make the standard error small enough to prevent the sample difference from being negative even when the population difference is positive enough to be interesting.
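The closed forms (11)-(13) are easy to implement directly. The sketch below (in Python, with assumed helper names, taking $G$ to be the standard Normal distribution; a $t$-distribution could be substituted) computes $n_{\text{min}}$ and then confirms that the power attained at that sample size meets the requirement, while the power at one fewer unit does not.

```python
# Sketch of formulas (11)-(13) with G = standard Normal (helper names assumed).
from math import ceil, sqrt
from statistics import NormalDist

G = NormalDist().cdf
Ginv = NormalDist().inv_cdf

def gamma_max(delta, sigma, n, alpha):
    # attainable power, first line of (13)
    return G(delta * sqrt(n) / sigma - Ginv(1 - alpha / 2))

def n_min(delta, sigma, gamma, alpha):
    # minimum sample size, last line of (13); requires R > 0
    R = Ginv(gamma) + Ginv(1 - alpha / 2)       # Equation (12)
    return ceil((sigma / delta * R) ** 2)

n = n_min(delta=0.5, sigma=1.0, gamma=0.9, alpha=0.05)
print(n, gamma_max(0.5, 1.0, n, 0.05) >= 0.9)  # 43 True
```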
### 3.1 Formulas for the SD of the influence function
The parameter $\sigma$ is usually an input parameter, provided by the user. It may be estimated by multiplying a standard error from a pilot study, a simulation or a formula by the square root of the number of sampling units involved in calculating that standard error. In the absence of a pilot study or a simulation, a formula is usually known only for the simplest cases. For instance, in the case of a paired $t$-test, or a sign test, the SD of the influence function is simply the SD of the pairwise differences, or of the signs of these differences, respectively.
However, many experimental designs involve sampling in parallel and independently from $K$ subpopulations of primary sampling units (PSUs), estimating a population parameter $\eta_j$ for the $j$th subpopulation by means of a sample estimate $\hat{\eta}_j$, and thereby estimating a contrast of interest
$$\delta = \sum_{j=1}^{K} a_j \eta_j - \omega$$ \hspace{1cm} (14)
where $\omega$ and the $a_j$ are constants. The contrast $\delta$ is assumed to be zero under a null hypothesis to be tested, and $\omega$ is usually (but not always) zero. Usually, but not always, the $\eta_j$ are link functions of subpopulation means in a generalized linear model, as defined by McCullagh and Nelder (1989). Examples include arithmetic subpopulation means, log geometric subpopulation means, log subpopulation incidence rates, or log case and control odds of exposure in an unmatched case-control study.
A sample for such a design may contain a number $n$ of compound sampling units (CSUs), where each CSU consists of $m_j$ PSUs sampled independently from each $j$th subpopulation. (For instance, an unmatched case-control study may have a fixed number of controls per case, and then $K = 2$, $a_1 = 1$, $a_2 = -1$, $m_1 = 1$, and $m_2$ is the number of controls per case.) Sample size calculations for such designs usually output or input numbers of CSUs, rather than numbers of PSUs. The estimate for $\delta$ is
$$\hat{\delta} = \sum_{j=1}^{K} a_j \hat{\eta}_j - \omega$$ \hspace{1cm} (15)
The standard error of $\hat{\eta}_j$ is
$$\text{SE}(\hat{\eta}_j) = \sigma_j / \sqrt{n m_j}$$ \hspace{1cm} (16)
where $\sigma_j$ is the SD of the influence function (per PSU) of $\eta_j$. If $\eta_j$ is a link function in a generalized linear model, and there is one observation per PSU, then the SD of the per-PSU influence function is equal to
$$\sigma_j = \frac{d\eta_j}{d\mu_j} \sqrt{\phi V(\mu_j)}$$ \hspace{1cm} (17)
where, in the notation of McCullagh and Nelder (1989), $\mu_j$ is the subpopulation mean corresponding to $\eta_j$, $V(\mu_j)$ is the variance function, and $\phi$ is the dispersion parameter. The standard error of $\hat{\delta}$ is
$$\text{SE}\left(\hat{\delta}\right) = \sqrt{\sum_{j=1}^{K} a_j^2 [\text{SE}(\hat{\eta}_j)]^2}$$ \hspace{1cm} (18)
It follows that the SD of the per-CSU influence function of $\delta$ is derived from the SDs of the per-PSU influence functions of the $\eta_j$ by the formula
$$\sigma = \sqrt{n} \times \text{SE}\left(\hat{\delta}\right) = \sqrt{\sum_{j=1}^{K} \frac{a_j^2}{m_j} \sigma_j^2}$$ \hspace{1cm} (19)
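Equation (19) can be transcribed directly, using hypothetical values: for two subpopulations with one PSU each per CSU, weights $a_1 = 1$, $a_2 = -1$, and a common per-PSU SD $s$, the per-CSU SD reduces to $s\sqrt{2}$.

```python
# Sketch of Equation (19): SD of the per-CSU influence function of a contrast.
from math import sqrt

def sd_per_csu(a, m, sigma_j):
    # a: contrast weights, m: PSUs per CSU, sigma_j: per-PSU influence SDs
    return sqrt(sum(aj ** 2 / mj * sj ** 2 for aj, mj, sj in zip(a, m, sigma_j)))

s = 0.7  # assumed common per-PSU SD
sigma = sd_per_csu(a=[1, -1], m=[1, 1], sigma_j=[s, s])
print(abs(sigma - s * sqrt(2)) < 1e-12)  # True
```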
Table 1: Some commonly used link functions for generalized linear models.
| Link function | $\eta(\mu)$ | $d\eta/d\mu$ | Interpretation of $\eta$ |
|---------------|-------------|--------------|--------------------------|
| Identity | $\mu$ | 1 | Arithmetic mean of $Y$ |
| Power $r \neq 0$ | $\mu^r$ | $r\mu^{r-1}$ | Power-$1/r$ algebraic mean of $Y^r$ |
| Log | $\ln(\mu)$ | $1/\mu$ | Log arithmetic mean of $Y$ |
| Logit | $\ln[\mu/(1-\mu)]$ | $1/\mu + 1/(1-\mu)$ | Log odds of binary $Y$ |
Table 2: Some commonly used variance functions for generalized linear models.
| Family | $V(\mu)$ | Interpretation of $\phi$ |
|-----------|----------|--------------------------|
| Normal | 1 | Variance |
| Gamma | $\mu^2$ | Squared coefficient of variation |
| Bernoulli | $\mu(1-\mu)$ | Always 1 |
| Poisson | $\mu$ | Variance/mean ratio |
Table 1 gives some commonly used link functions for generalized linear models, with formulas for the link function $\eta$ as a function of a subpopulation mean $\mu$ of a variable $Y$ and for its derivative for use in Equation (17), and interpretations in words for the link $\eta$. Table 2 gives some commonly used variance functions for generalized linear models, which assume that the variance of a subpopulation with mean $\mu$ is equal to $\phi V(\mu)$, together with an interpretation in words of the dispersion parameter $\phi$. Each variance function applies to a distributional family, from which it derives its name. There are many other possible link functions and variance functions, and more comprehensive tables can be found in the Appendices of Hardin and Hilbe (2001).
Table 3: Standard deviations of influence functions for some variance-link combinations.
| Family | Link | $\sigma_j$ | Typical interpretation of $\delta$ |
|----------|---------|-----------------------------|------------------------------------------------------------------------|
| Normal | Identity| $\sqrt{\phi}$ | Difference between arithmetic means |
| Normal | Log | $\sqrt{\phi}/\mu_j$ | Log ratio between arithmetic means |
| Gamma | Identity| $\mu_j \sqrt{\phi}$ | Difference between arithmetic means |
| Gamma | Log | $\sqrt{\phi}$ | Log ratio between arithmetic means |
| Poisson | Identity| $\sqrt{\phi \mu_j}$ | Difference between incidence rates |
| Poisson | Log | $\sqrt{\phi/\mu_j}$ | Log ratio between incidence rates |
| Bernoulli| Identity| $\sqrt{\mu_j (1 - \mu_j)}$ | Difference between proportions |
| Bernoulli| Log | $\sqrt{(1 - \mu_j)/\mu_j}$ | Log ratio between proportions |
| Bernoulli| Logit | $\sqrt{1/\mu_j + 1/(1 - \mu_j)}$ | Log ratio between odds |
Table 3 gives formulas for the SD of the influence function for the $j$th subpopulation derived according to Equation (17) for some common combinations of variance and link functions. The $\sigma_j$ can be entered into Equation (19) to derive a SD of the per-CSU influence function for the contrast $\delta$ of (14), whose typical informal interpretation in words is given in the right-hand column. Again, there are many more possible combinations, some of which are mentioned in Hardin and Hilbe (2001). In particular, we may plan to transform the outcome data prior to analysis. If we use the log transformation and a generalized linear model with the identity link, then the parameter $\delta$ will be a log ratio between geometric means, or (in other words) a difference between arithmetic mean logs. If we use a power-$r$ transformation and a generalized linear model with the power-$1/r$ link, then $\delta$ will be a difference between power-$r$ algebraic means, where the power-$r$ algebraic mean of a variable $Y$ is defined as $[E(Y^r)]^{1/r}$. Note that these links can be combined with any variance function.
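The entries of Table 3 can be cross-checked numerically against Equation (17). The following illustrative Python sketch (assumed values, not from the original text) differentiates the logit link by a central difference and compares the result with the Bernoulli-logit row.

```python
# Numeric cross-check of a Table 3 entry against Equation (17):
# sigma_j = (d eta / d mu) * sqrt(phi * V(mu)).
from math import log, sqrt

def sigma_via_eq17(eta, V, mu, phi, h=1e-6):
    deta_dmu = (eta(mu + h) - eta(mu - h)) / (2 * h)  # central difference
    return abs(deta_dmu) * sqrt(phi * V(mu))

mu, phi = 0.3, 1.0  # assumed subpopulation mean; Bernoulli dispersion is 1
table_row = sqrt(1 / mu + 1 / (1 - mu))               # Bernoulli-logit entry
numeric = sigma_via_eq17(lambda m: log(m / (1 - m)),  # logit link
                         lambda m: m * (1 - m),       # Bernoulli variance
                         mu, phi)
print(abs(numeric - table_row) < 1e-5)  # True
```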
## 4 Examples
The examples in the help file for `powercal` are designed to work both under Stata 7 and under Stata 8, and are described in detail in the Adobe Acrobat manual `powercal.pdf`, distributed with the `powercal` package. In this paper, we give more advanced examples, demonstrating the power of Stata 8 graphics.
### 4.1 Example 1. Geometric mean ratios
The geometric mean (defined as the antilogarithm of the arithmetic mean logarithm) is frequently used as an approximation to the median if a variable is positive-valued and positively skewed. Power calculations for ratios between geometric means usually
assume that the outcome variable has a lognormal distribution, so that the log of the outcome variable has a Normal distribution. Under this assumption, the geometric mean is the median, its log is the mean log, and the SD of the logs is the other parameter of the distribution, measuring dispersion. Alternative measures of dispersion for positive-valued variables, more familiar to non-mathematicians, are the coefficient of variation (defined as the SD/mean ratio) and the $q$th tail ratio (defined as the ratio of the 100$(1-q)$th percentile to the 100$q$th percentile if $0 < q < 1/2$). If the lognormal assumption is true, then the SD of the natural logs can be calculated from the coefficient of variation or the $q$th tail ratio by the formulas
$$\text{SD}_{\log} = \sqrt{\ln(CV^2 + 1)} = -\ln(r_q)/[2\Phi^{-1}(q)]$$ \hspace{1cm} (20)
where $\text{SD}_{\log}$ is the SD of the natural logs, $CV$ is the coefficient of variation of the unlogged variable, $r_q$ is the $q$th tail ratio of the unlogged variable, and $\Phi^{-1}(\cdot)$ is the inverse standard normal cumulative distribution function. (See Aitchison and Brown (1963), or Stanislav Kolenikov’s website at http://www.komkon.org/~tacik/, which contains some formulas from that source for quick reference.)
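The two expressions in (20) can be checked against each other numerically. The illustrative Python sketch below also reproduces the values used in the example that follows (SD of logs approximately 0.47238073 and 20% tail ratio approximately 2.2147318 for a coefficient of variation of 0.5).

```python
# Check of the two expressions for SD_log in Equation (20), under the
# lognormal assumption, with an assumed coefficient of variation of 0.5.
from math import exp, log, sqrt
from statistics import NormalDist

Phi_inv = NormalDist().inv_cdf
q = 0.2

cv = 0.5
sd_log = sqrt(log(cv * cv + 1))                  # from the CV
r_q = exp(-2 * sd_log * Phi_inv(q))              # implied q-th tail ratio
sd_from_ratio = -log(r_q) / (2 * Phi_inv(q))     # back from the tail ratio

print(abs(sd_log - 0.47238073) < 1e-7,           # matches the worked example
      abs(r_q - 2.2147318) < 1e-5,
      abs(sd_from_ratio - sd_log) < 1e-12)       # True True True
```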
When we perform lognormal power calculations, the difference $\delta$ that we aim to detect is usually a linear contrast between logs of geometric means. In the notation of Subsection 3.1, the log outcomes are distributed according to a generalized linear model with an identity link function and a Normal variance function, the $\eta_j = \mu_j$ are subpopulation arithmetic mean logs (or logs of geometric means), and the dispersion parameter $\phi$ is the variance of the log outcomes, usually assumed to be the same in all subpopulations. Therefore, from Table 3, the $\sigma_j$ are the within-subpopulation SDs of the log outcomes. We wish to know the SD $\sigma$ of the influence function of the contrast $\delta$, so that we can apply the formulas (13). In the simplest case, we may plan to measure the ratio between geometric means in 2 treatment groups. In this case, we have $K = 2$, $\eta_1$ and $\eta_2$ are the log geometric means in treatment groups 1 and 2 respectively, $a_1 = 1$, $a_2 = -1$, and the difference to detect is the log geometric mean ratio $\delta = \eta_1 - \eta_2$. The PSUs are treated units. If we decide to apply the treatments to 2 unmatched samples of equal size, then each CSU might be a pair of PSUs, one allocated to each treatment group, and therefore we have $m_1 = m_2 = 1$. If the two treatment groups have a common coefficient of variation (and therefore common tail ratios), then we also have $\sigma_1 = \sigma_2 = \text{SD}_{\log}$, where $\text{SD}_{\log}$ is derived from the assumed coefficient of variation or tail ratio by (20). By (19), the SD of the per-CSU influence function of $\delta$ is then given by
$$\sigma = \text{SD}_{\log} \times \sqrt{2}$$ \hspace{1cm} (21)
The following example assumes a coefficient of variation of 0.5 within each of 2 treatment groups. This implies a 20% tail ratio of 2.2147318, meaning that, within each treatment group, the bottom of the top quintile is 2.2147318 times the top of the bottom quintile. The variable `logratio` is created, containing a range of log geometric mean ratios, and the variable `gmratio` is created, containing the corresponding ratios themselves, which range from 1 to 2. We then use `powercal` to calculate, in a new variable `power`, the power to detect each geometric mean ratio with $p \leq 0.01$, using 50 units in each group (and therefore 50 CSUs) and carrying out a two-sample $t$-test on the logs (with $2 \times 50 - 2 = 98$ degrees of freedom). The power is plotted against the geometric mean ratio in Figure 1, with vertical-axis reference lines for 80% and 90% power. We see that a geometric mean ratio of 1.39 can be detected with 80% power, whereas the geometric mean ratio must be as high as 1.45 to be detected with 90% power.
```
. scal cv=0.5
. scal sdlog=sqrt(log(cv*cv + 1))
. scal r20=exp(-2*sdlog*invnorm(0.2))
. disp _n as text "Coefficient of variation: " as result cv ///
> _n as text "SD of logs: " as result sdlog ///
> _n as text "20% tail ratio: " as result r20
Coefficient of variation: .5
SD of logs: .47238073
20% tail ratio: 2.2147318
. set obs 100
obs was 0, now 100
. gene logratio=log(2)*(_n/_N)
. lab var logratio "Log GM ratio"
. gene gmratio=exp(logratio)
. lab var gmratio "GM ratio"
. powercal power, alpha(0.01) delta(logratio) sdinf(sdlog*sqrt(2)) ///
> nunit(50) tdf(98)
Result to be calculated is power in variable: power
. line power gmratio, ///
> yscale(range(0 1)) ///
> ylab(0(0.05)1, grid gmin gmax angle(0)) yline(0.8 0.9, lpattern(shortdash)) ///
> xscale(log range(1 2)) xlab(1(0.1)2, grid gmin gmax)
```
Figure 1: Power to detect geometric mean ratios.
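The 80% reading taken from Figure 1 can be roughly cross-checked outside Stata. The Python sketch below uses the Normal approximation (the example above uses a $t$-distribution with 98 degrees of freedom, which gives slightly less power, so the two agree only approximately).

```python
# Rough Normal-approximation cross-check of the Figure 1 reading:
# power to detect a GM ratio of 1.39 with n = 50 per group, alpha = 0.01.
from math import log, sqrt
from statistics import NormalDist

G, Ginv = NormalDist().cdf, NormalDist().inv_cdf

sd_log = sqrt(log(0.5 ** 2 + 1))   # CV = 0.5, via Equation (20)
sigma = sd_log * sqrt(2)           # per-CSU influence SD, Equation (21)
n, alpha = 50, 0.01
delta = log(1.39)                  # log GM ratio to detect

power = G(delta * sqrt(n) / sigma - Ginv(1 - alpha / 2))
print(0.79 < power < 0.84)  # True: close to the 80% read off Figure 1
```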
Alternatively, we might wish to calculate the detectable geometric mean ratios closest
to unity as a function of sample number. The following example does this by creating a variable `npergp`, containing possible numbers per group from 1 to 100, and then using `powercal` to calculate the detectable positive log geometric mean ratio as a function of `npergp`, assuming that we require 90% power to detect a difference with $p \leq 0.01$ by $t$-testing the logs, and that the coefficient of variation within each treatment group is 0.5 as before. We then calculate the detectable geometric mean ratios greater than 1 and less than 1 as `hiratio` and `loratio`, respectively, and plot these against the number per treatment group, with a vertical-axis reference line indicating a ratio of 1. This plot is Figure 2. Note that we have suppressed the spectacular ratios detectable with 4 or fewer subjects per group. A plot such as Figure 2 has the advantage that it communicates to colleagues the inverse square law, which states that, to halve the detectable difference, you must approximately *quadruple* (not double) the number of subjects. Non-statisticians frequently do not appreciate this law, although they usually are vaguely aware that larger sample sizes increase power.
```
. scal cv=0.5
. scal sdlog=sqrt(log(cv*cv + 1))
. scal r20=exp(-2*sdlog*invnorm(0.2))
. disp _n as text "Coefficient of variation: " as result cv ///
> _n as text "SD of logs: " as result sdlog ///
> _n as text "20% tail ratio: " as result r20
Coefficient of variation: .5
SD of logs: .47238073
20% tail ratio: 2.2147318
. set obs 100
obs was 0, now 100
. gene npergp=_n
. lab var npergp "Number per group"
. powercal logratio, power(0.9) alpha(0.01) sdinf(sdlog*sqrt(2)) ///
> nunit(npergp) tdf(2*(npergp-1))
Result to be calculated is delta in variable: logratio
. gene hiratio=exp(logratio)
(1 missing value generated)
. gene loratio=exp(-logratio)
(1 missing value generated)
. lab var hiratio "Detectable GM ratio >1"
. lab var loratio "Detectable GM ratio <1"
. line hiratio loratio npergp if _n>=5, ///
> ylabel(, angle(0) grid gmin gmax) yline(1, lpattern(shortdash)) ///
> ytitle("Detectable GM ratio") ///
> xlab(0(10)100, grid gmin gmax)
```
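The inverse square law mentioned above follows directly from the $\delta_{\text{min}}$ line of (13): $\delta_{\text{min}}$ is proportional to $1/\sqrt{n}$, so quadrupling $n$ halves the detectable difference. A minimal Python sketch (Normal quantiles; with $t$ quantiles the degrees of freedom also change with $n$, so the law is then only approximate):

```python
# delta_min of (13) is proportional to 1/sqrt(n): quadrupling n halves it.
from math import sqrt
from statistics import NormalDist

Ginv = NormalDist().inv_cdf

def delta_min(n, sigma=1.0, gamma=0.9, alpha=0.01):
    R = Ginv(gamma) + Ginv(1 - alpha / 2)  # Equation (12)
    return sigma / sqrt(n) * R             # Equation (13)

print(abs(delta_min(4 * 25) - delta_min(25) / 2) < 1e-12)  # True
```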
### 4.2 Example 2. Odds ratios from case-control studies
Case-control studies are commonly recommended as the design of choice in genomic epidemiology for measuring an association between a gene and a disease (see Clayton and McKeigue (2001)). If we are designing an unmatched case-control study, then we typically plan to sample a given number of subjects with each possible disease status (e.g.,
“with the disease” and “without the disease”), and then measure, in each subject, the exposure, which might be the presence of a genetic pattern in a patient. The difference $\delta$ that we wish to detect is the log ratio of the odds of the exposure between cases and controls, or possibly some other linear contrast of the log odds of exposure for different disease status values, if there are more than 2 possible disease status values. If there are $K$ possible values of disease status, and $E_j$ is the prevalence of exposure in subjects with the $j$th disease status, then the odds of exposure in the $j$th disease category is defined as $E_j/(1-E_j)$, and its logarithm is typically used as a normalizing and variance-stabilizing transformation.
In the generalized linear model notation of Subsection 3.1, the PSUs are subjects (cases or controls), the subpopulations correspond to the possible disease status values, and the “outcome” variable is a binary exposure variable, whose distribution in each subpopulation is governed by a generalized linear model with a logit link function and a Bernoulli variance function. The mean “outcome” in the $j$th subpopulation is therefore $\mu_j = E_j$, the link function for the $j$th subpopulation is $\eta_j = \ln [\mu_j/(1-\mu_j)]$, its derivative is $d\eta_j/d\mu_j = 1/\mu_j + 1/(1-\mu_j)$, the variance function is $V(\mu_j) = \mu_j(1-\mu_j)$, and the dispersion parameter is $\phi = 1$. From the bottom row of Table 3, the SD of the per-PSU influence function of the log odds $\eta_j$ is
$$\sigma_j = \sqrt{1/E_j + 1/(1-E_j)} \quad (22)$$
A CSU in this case is composed of $m_j$ subjects sampled independently from the subpopulation with each $j$th disease status. This is because, although the case-control study is unmatched, we may plan to sample subjects of different disease status according to a particular ratio, such as two controls per case. The SD of the per-CSU influence
function is then given by the formula (19). Note that, in the sample size calculations, the generalized linear model is defined with the disease status as the “predictor” and the exposure status as the “outcome”. This is in contrast to the statistical analysis, where the disease status is usually the “outcome” and the exposure status is usually the “predictor”.
In the simplest case-control studies, there are $K = 2$ possible values for disease status, namely “diseased” and “undiseased”, and a CSU is a single case together with $m_2$ unmatched controls, so that $m_1 = 1$. We are interested in measuring a log odds ratio $\delta = \eta_1 - \eta_2$, so, in the notation of Subsection 3.1, we have $a_1 = 1$ and $a_2 = -1$. In this case, the SD of the per-CSU influence function is given, according to (19), by
$$\sigma = \sqrt{\frac{1}{E_1} + \frac{1}{1 - E_1} + \frac{1}{m_2} \left[ \frac{1}{E_2} + \frac{1}{1 - E_2} \right]}$$ \hspace{1cm} (23)
When designing a case-control study, we typically have a good prior estimate of the control exposure prevalence $E_2$, because the control exposure prevalence is intended to be an estimate for the total population exposure prevalence. Therefore, if we hypothesize a particular value OR for the odds ratio, then we can multiply this odds ratio by the “known” control odds of exposure to arrive at the corresponding hypothesized case odds of exposure by the formula
$$E_1/(1 - E_1) = \text{OR} \times E_2/(1 - E_2)$$ \hspace{1cm} (24)
and then calculate the hypothesized case exposure prevalence $E_1$ from the case exposure odds $E_1/(1 - E_1)$. Note that, if we have an estimate for the control exposure $E_2$, then the SD of the per-case influence function of the log odds ratio, given by (23), is dependent on the log odds ratio itself. This is in contrast to the case with lognormal geometric mean ratios, where $\sigma$ is independent of $\delta$ and is given by (21).
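As an illustrative numeric sketch (hypothetical odds ratio of 2, with control exposure prevalence 0.25 and $m_2 = 2$ controls per case, the scenario of the example that follows), formulas (24) and (23) can be coded directly:

```python
# Case exposure prevalence from (24), then per-CSU influence SD from (23).
from math import sqrt

def sd_inf_logor(E2, OR, m2):
    case_odds = OR * E2 / (1 - E2)   # Equation (24)
    E1 = case_odds / (1 + case_odds)
    return sqrt(1 / E1 + 1 / (1 - E1) + (1 / m2) * (1 / E2 + 1 / (1 - E2)))

s = sd_inf_logor(E2=0.25, OR=2.0, m2=2)
print(round(s, 2))  # 2.61
```

As stated above, the result depends on the odds ratio itself, in contrast to the lognormal case of (21).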
The following example assumes that we are planning a case-control study to measure the association of a rare disease with a binary exposure, whose control prevalence is expected to be 0.25, or 25%. We decide to recruit $m_2 = 2$ unmatched controls per case. We create a data set with 1 observation for each of a range of odds ratios from 1.25 to 5, which will correspond to relative risks of the same size, if the rare disease assumption is true. The log odds ratios are stored in the variable `logor`, the odds ratios are stored in `or`, the case exposure odds are stored in `caseodds`, the case exposure prevalences are stored in `caseprev`, and the control exposure prevalence and odds are stored in scalars. We use the formulas (24) and (23) to calculate the SD of the influence function of the log odds ratio in `sdinflor`. We then use `powercal` to calculate the minimum numbers of cases to detect each odds ratio with 90% power at significance levels $p \leq 0.01$ and $p \leq 0.001$, respectively, and plot these odds ratios against those minimum numbers of cases, suppressing odds ratios which require over 2000 cases to be detectable. The resulting graph is Figure 3. Note that uninteresting low unadjusted odds ratios are very expensive to detect, as well as being more credibly attributed to confounding than spectacular high odds ratios.
```
. scal conprev=0.25
. scal conodds=conprev/(1-conprev)
. disp _n as text "Expected control prevalence: " as result conprev ///
> _n as text "Expected control odds: " as result conodds
Expected control prevalence: .25
Expected control odds: .33333333
. set obs 101
obs was 0, now 101
. gene logor=log(1.25)+(log(5)-log(1.25))*(_n-1)/(_N-1)
. gene or=exp(logor)
. gene caseodds=conodds*or
. gene caseprev=caseodds/(1+caseodds)
. gene sdinflor=sqrt( ///
> 1/caseprev + 1/(1-caseprev) + (1/2)*( 1/conprev + 1/(1-conprev) ) ///
> )
. lab var logor "Log odds ratio"
. lab var or "Odds ratio"
. lab var caseodds "Case exposure odds"
. lab var caseprev "Case exposure prevalence"
. lab var sdinflor "SD of influence for log OR"
. desc

Contains data
  obs:           101
 vars:             5
 size:         2,424 (99.8% of memory free)

              storage  display     value
variable name   type   format      label      variable label
-------------------------------------------------------------------------------
logor           float  %9.0g                  Log odds ratio
or              float  %9.0g                  Odds ratio
caseodds        float  %9.0g                  Case exposure odds
caseprev        float  %9.0g                  Case exposure prevalence
sdinflor        float  %9.0g                  SD of influence for log OR
-------------------------------------------------------------------------------
Sorted by:
     Note:  dataset has changed since last saved
. * Detectable OR by number of cases *
. powercal ncases01, power(0.9) alpha(0.01) delta(logor) sdinf(sdinflor)
Result to be calculated is nunit in variable: ncases01
. powercal ncases001, power(0.9) alpha(0.001) delta(logor) sdinf(sdinflor)
Result to be calculated is nunit in variable: ncases001
. lab var ncases01 "Minimum cases (alpha=0.01)"
. lab var ncases001 "Minimum cases (alpha=0.001)"
. line or ncases01 if ncases01<=2000 || line or ncases001 if ncases001<=2000, ///
> yscale(log range(1 5)) ylabel(1 1.5 2(1)5, angle(0) grid gmin gmax) ///
> xlab(, grid gmin gmax) xtitle("Minimum number of cases") ///
> legend(label(1 "Alpha=0.01") label(2 "Alpha=0.001"))
```
Using the same data set, we can also calculate, for each odds ratio, the significance levels attainable with 50% or 90% power, using 100 cases and their 200 controls. This
a Mann-Whitney-Wilcoxon ranksum test (see [R] `ranksum`), which produces a $p$-value but no confidence interval.
Today, most statisticians would argue that confidence intervals are more informative than $p$-values alone, even if rank methods are used. In Newson (2002), it is argued that there are at least three possible parameters corresponding to the so-called “nonparametric” ranksum test, namely Somers’ $D$, the Hodges-Lehmann median difference and the Hodges-Lehmann median ratio, and that confidence intervals can be calculated for any of them using the `somersd` package, downloadable from SSC. Somers’ $D$ of blood pressure with respect to male gender is defined as the difference between two probabilities, namely the probability that a randomly-sampled male has a higher blood pressure than a randomly-sampled female and the probability that a randomly-sampled female has a higher blood pressure than a randomly-sampled male. Power formulas are more easily defined for Somers’ $D$ than for the other two parameters. This is because Somers’ $D$ is closely related to Kendall’s tau and has a very well-behaved influence function, for which the Central Limit Theorem works very well at low sample numbers, whereas influence functions for medians are very unpredictable. See Hampel et al. (1986) and Huber (1981) for discussion on the influence functions of medians, and Kendall and Gibbons (1990) for discussion of the Central Limit Theorem as applied to Kendall’s tau.
Although Somers’ $D$ is well-behaved, it is difficult to understand for non-statisticians, who would usually like to be able to convert it to a scale of median differences or ratios, as this would probably be more useful for making monetary or other practical decisions. Unfortunately, there is no unique conversion formula. However, if an outcome variable $Y$ (such as blood pressure) has a Normalizing transformation $g(Y)$, which is Normally distributed within each of two subpopulations being compared (such as males and females), and if $g(Y)$ has mean $\mu_A$ and variance $\phi_A$ in Population $A$ and has mean
$\mu_B$ and variance $\phi_B$ in Population $B$, then the Somers’ $D$ of $Y$, with respect to a binary variable $X$ equal to 1 for Population $A$ and 0 for Population $B$, is given by
$$D_{Y,X} = 2\Phi \left[ (\mu_A - \mu_B) / \sqrt{\phi_A + \phi_B} \right] - 1$$ \hspace{1cm} (25)
where $\Phi(\cdot)$ is the standard Normal cumulative distribution function. Under these assumptions, Somers’ $D$ has the same sign as $\mu_A - \mu_B$, and therefore the same sign as the Hodges-Lehmann median difference, but the conversion curves between the scales depend both on the function $g(Y)$ and on the sum of the subpopulation variances.
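Equation (25) is easy to code. The illustrative Python sketch below confirms the sign property just stated: $D_{Y,X}$ is zero when the subpopulation means are equal and has the sign of $\mu_A - \mu_B$.

```python
# Somers' D implied by Normal subpopulation models, Equation (25).
from math import sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

def somers_d(mu_a, mu_b, phi_a, phi_b):
    return 2 * Phi((mu_a - mu_b) / sqrt(phi_a + phi_b)) - 1

# equal means give D = 0; D has the sign of mu_A - mu_B
print(somers_d(1.0, 1.0, 2.0, 2.0) == 0.0,
      somers_d(2.0, 1.0, 1.0, 1.0) > 0)  # True True
```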
As stated in Subsection 3.1, the key to a power calculation formula for a parameter is a formula for the SD of its influence function. Somers’ $D$ is defined as a ratio of two $U$-statistics, in the terminology of Hoeffding (1948). Analytical standard error formulas can therefore be derived from the theory introduced there and discussed further in Serfling (1980), subject to making distributional assumptions. However, we will not use these methods here, but instead use as a pilot study the data set `bpwide`, distributed with official Stata, which contains one observation for each of 120 fictional patients and data on their genders and blood pressures. These blood pressures are in unstated units, but that is not a problem for us, as Somers’ $D$ is scale-invariant. (See [R] `sysuse` for more information about the datasets shipped with official Stata.) Instead of using the SD of the influence function for Somers’ $D$ itself, we will use the SD of the influence function for the hyperbolic arctangent (or $z$-transform) of Somers’ $D$, as this transformation is variance-stabilizing, making the SD of the parameter influence function less dependent on the value of the parameter itself. The formulas are discussed in detail in the manual `somersd.pdf`, which is distributed with the `somersd` package, and is a post-publication update of Newson (2000).
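Given the SD of the influence function, the detectable difference on the $z$-transformed scale follows from the usual power identity. A hedged Python sketch (a Normal approximation in place of `powercal`'s $t$-based calculation; `detectable_delta` is a name invented here):

```python
from math import sqrt, tanh
from statistics import NormalDist

def detectable_delta(n, sdinf, alpha=0.05, power=0.90):
    """Detectable difference on the z-transformed Somers' D scale:
    delta = (z_{1-alpha/2} + z_{power}) * sdinf / sqrt(n).
    Normal approximation; powercal uses t quantiles when tdf() is given."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sdinf / sqrt(n)

# Pilot-study SD of 1.2135563 z-units, 120 patients, 90% power at p<=0.05:
dz = detectable_delta(120, 1.2135563)  # on the z-transformed scale
d = tanh(dz)                           # back-transformed Somers' D
```

Quadrupling the sample size halves the detectable difference, which is why detectable-difference curves fall off like $1/\sqrt{n}$.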
In the following advanced example, we load the `bpwide` data and use `somersd` together with the `parmby` program from the `parmest` package (also downloadable from SSC and discussed in Newson (2003)). The results from these programs are used to calculate the SD of the influence function of the $z$-transformed Somers’ $D$, for input into `powercal`. The variables created by `powercal` are plotted in Figures 5 and 6.
```
. sysuse bpwide, clear
(fictional blood-pressure data)
. gene byte male=1-sex
. lab var male "Male patient"
. parmby "somersd male bp_before, tr(z) td", norestore ///
> escal(N) rename(es_1 N)
Command: somersd male bp_before, tr(z) td
Somers’ D with variable: male
Transformation: Fisher’s z
Valid observations: 120
Degrees of freedom: 119
Symmetric 95% CI for transformed Somers’ D
------------------------------------------------------------------------------
        male |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
   bp_before |   .3086041    .110782     2.79   0.006     .0892446    .5279636
------------------------------------------------------------------------------

Asymmetric 95% CI for untransformed Somers' D
                Somers_D      Min_95      Max_95
   bp_before   .29916667   .08900846    .4838229

. list N parm estimate stderr min* max* p, clean noobs

      N        parm   estimate      stderr       min95       max95           p
    120   bp_before   .3086041   .11078202   .08924464   .52796357   .00621746

. scal sdinf=stderr[1]*sqrt(N[1])

. disp _n as text "SD of influence function for z-transformed Somers' D: " ///
>   as result sdinf

SD of influence function for z-transformed Somers' D: 1.2135563

. drop _all

. set obs 1000
obs was 0, now 1000

. gene int npat=_n

. lab var npat "Number of patients"

. foreach X in 05 01 001 0001 {
  2.   powercal detz`X', power(0.9) alpha(0.`X') sdinf(sdinf) ///
>        nunit(npat) tdf(npat-1)
  3.   gene detd`X'=exp(2*detz`X')
  4.   replace detd`X'=(detd`X'-1)/(detd`X'+1)
  5.   lab var detz`X' "z-transformed Somers' D (P<=0.`X')"
  6.   lab var detd`X' "Somers' D (P<=0.`X')"
  7. }
Result to be calculated is delta in variable: detz05
(3 missing values generated)
(999 real changes made)
Result to be calculated is delta in variable: detz01
(2 missing values generated)
(998 real changes made)
Result to be calculated is delta in variable: detz001
(2 missing values generated)
(997 real changes made)
Result to be calculated is delta in variable: detz0001
(2 missing values generated)
(996 real changes made)

. line detd* npat, ///
>   xlab(0(100)1000, grid gmin gmax) ///
>   ylab(0(0.05)1, angle(0) grid gmin gmax) ///
>   ytitle("Detectable Somers' D")

. more

. graph export figseq5.eps, replace
(file figseq5.eps written in EPS format)

. foreach X in 10 15 20 30 {
  2.   scal z=0.5*log((1+0.`X')/(1-0.`X'))
  3.   powercal alpha`X', power(0.9) delta(z) sdinf(sdinf) ///
>        nunit(npat) tdf(npat-1)
  4.   lab var alpha`X' "Alpha (Somers' D = 0.`X')"
  5.   format alpha`X' %8.2g
  6. }
Result to be calculated is alpha in variable: alpha10
Result to be calculated is alpha in variable: alpha15
Result to be calculated is alpha in variable: alpha20
Result to be calculated is alpha in variable: alpha30

. line alpha* npat, ///
>   xlab(0(100)1000, grid gmin gmax) ///
>   yscale(log reverse) ///
>   ylab(1 0.05 0.01 1e-3 1e-4 1e-5 1e-6 1e-7 1e-8 1e-9 1e-10 1e-11, ///
>   format(%8.2g) angle(0) grid gmin gmax) ///
>   ytitle("Minimum alpha")

. graph export figseq6.eps, replace
(file figseq6.eps written in EPS format)
```
Figure 5: Detectable Somers’ $D$ by number of patients for 90% power.
We first load the bpwide data, then add a variable male indicating male gender, and then use parmby and somersd to estimate Somers’ $D$ and to create a second data set in memory, with 1 observation and data on the sample number, estimate, standard error, confidence limits and $p$-value for the $z$-transformed Somers’ $D$. We find that the untransformed Somers’ $D$ is 0.29916667, so it is about 30% more likely for a man to have a higher blood pressure than a woman than vice versa. The standard error stored in the variable stderr is multiplied by the square root of the sample number stored in the variable N to give the SD of the influence function, which is stored in the scalar sdinf and is equal to 1.21355653 $z$-units.

We then create a third data set in memory, with 1000 observations (one for each possible sample number from 1 to 1000), and a variable npat, containing the number of patients. Then we use powercal in a loop to add to this data set 4 new variables detz05, detz01, detz001 and detz0001, containing $z$-transformed Somers’ $D$ values detectable with 90% power at $p$-values 0.05, 0.01, 0.001 and 0.0001, respectively, and use the hyperbolic tangent or inverse $z$-transform to derive detectable untransformed Somers’ $D$ values in detd05, detd01, detd001 and detd0001, respectively. These are line-plotted against npat to create Figure 5.

After this, we use powercal in another loop to add to the data set 4 new variables alpha10, alpha15, alpha20 and alpha30, containing the significance level attainable with 90% power, assuming population Somers’ $D$ values of 0.10, 0.15, 0.20 and 0.30, respectively. These are line-plotted against npat in Figure 6. Note that the alpha-values are plotted on a reverse log ordinate, so the higher they appear on the plot, the more statistically significant they are. The reverse log ordinate makes the attainable alpha-curves very nearly linear in the number of patients, indicating that the attainable $p$-value decreases approximately exponentially as patient numbers (and presumably costs) are increased.
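The back-transformation used in the first loop can be checked against the log above: applying the hyperbolic tangent to the $z$-transformed estimate and its symmetric confidence limits reproduces the asymmetric confidence interval reported for the untransformed Somers’ $D$. A short Python check (variable names are ours; the numbers are from the somersd output above):

```python
from math import tanh

# z-transformed Somers' D and its symmetric 95% CI from the somersd log
z_est, z_lo, z_hi = 0.3086041, 0.0892446, 0.5279636

# The inverse z-transform (hyperbolic tangent) recovers the untransformed
# estimate 0.29916667 and its asymmetric 95% CI (0.08900846, 0.4838229)
d_est, d_lo, d_hi = tanh(z_est), tanh(z_lo), tanh(z_hi)
```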
5 Acknowledgements
I would like to thank Stanislav Kolenikov of the University of North Carolina, Chapel Hill, for posting some very useful formulas about the lognormal distribution on his website at http://www.komkon.org/~tacik/.
6 References
Aitchison, J. and J. A. C. Brown. 1963. *The Lognormal Distribution*. Cambridge, UK: Cambridge University Press.
Clayton, D. and P. M. McKeigue. 2001. Epidemiological methods for studying genes and environmental factors in complex diseases. *Lancet* 358: 1356–1360.
Colhoun, H. M., P. M. McKeigue, and G. Davey-Smith. 2003. Problems of reporting genetic associations with complex outcomes. *Lancet* 361: 865–872.
Hampel, F. R. 1974. The influence curve and its role in robust estimation. *Journal of the American Statistical Association* 69: 383–393.
Hampel, F. R., E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel. 1986. *Robust Statistics: The Approach Based on Influence Functions*. New York, NY: Wiley.
Hardin, J. and J. Hilbe. 2001. *Generalized Linear Models and Extensions*. College Station, TX: Stata Press.
Hoeffding, W. 1948. A class of statistics with asymptotically normal distribution. *Annals of Mathematical Statistics* 19: 293–325.
Huber, P. J. 1981. *Robust Statistics*. New York, NY: Wiley.
Kendall, M. G. and J. D. Gibbons. 1990. *Rank Correlation Methods*. 5th ed. Oxford, UK: Oxford University Press.
Kirkwood, B. R. and J. A. C. Sterne. 2003. *Essential Medical Statistics*. 2nd ed. Oxford, UK: Blackwell Science.
McCullagh, P. and J. A. Nelder. 1989. *Generalized Linear Models*. 2nd ed. London, UK: Chapman & Hall.
Newson, R. 2000. `snp15: somersd` – Confidence intervals for nonparametric statistics and their differences. *Stata Technical Bulletin* 55: 47–55. In *Stata Technical Bulletin Reprints*, vol. 10, 312–322. College Station, TX: Stata Press. Post-publication update downloadable from Roger Newson’s website at http://www.kcl-phs.org.uk/rogernewson/.
—. 2002. Parameters behind “nonparametric” statistics: Kendall’s tau, Somers’ $D$ and median differences. *The Stata Journal* 2(1): 45–64. Pre-publication draft downloadable from Roger Newson’s website at http://www.kcl-phs.org.uk/rogernewson/.
—. 2003. Confidence intervals and $p$-values for delivery to the end user. *The Stata Journal* 3(3): 245–269. Pre-publication draft downloadable from Roger Newson’s website at http://www.kcl-phs.org.uk/rogernewson/.
Serfling, R. J. 1980. *Approximation Theorems of Mathematical Statistics*. New York, NY: Wiley.
**About the Author**
Roger Newson is a Lecturer in Medical Statistics at King’s College London, UK, working principally in asthma research. He wrote the packages `powercal`, `parmest` and `somersd`.
146. Oberguggenberger, M.: Generalized functions in nonlinear models—a survey. Nonlinear Anal. 47(8), 5029–5040 (2001)
147. Oevermann, M., Klein, R.: A Cartesian grid finite volume method for elliptic equations with variable coefficients and embedded interfaces. J. Comp. Physiol. 19, 749–769 (2006)
148. Oganesyan, L.A., Rukhovets, L.A.: Variational-Difference Methods for Solving Elliptic Equations. Publ. House of Armenian Acad. Sci. Erevan (1979). (Russian)
149. Ortner, C.: A priori and a posteriori analysis of the quasinonlocal quasicontinuum method in 1D. Math. Comput. 80(275), 1265–1285 (2011)
150. Ortner, C., Suli, E.: Analysis of a quasicontinuum method in one dimension. M2AN Math. Model. Numer. Anal. 42(1), 57–91 (2008)
151. Prikazchikov, V.G., Himich, A.N.: The eigenvalue difference problem for the fourth order elliptic operator with mixed boundary conditions. USSR Comput. Math. Math. Phys. 25(5), 137–144 (1985)
152. Qatanani, N., Barham, A., Heeh, Q.: Existence and uniqueness of the solution of the coupled conduction-radiation energy transfer on diffusive-gray surfaces. Surv. Math. Appl. 2, 43–58 (2007)
153. Rannacher, R.: Finite element solution of diffusion problems with irregular data. Numer. Math. 43, 309–327 (1984)
154. Reed, M., Simon, B.: Methods of Modern Mathematical Physics. I: Functional Analysis. Academic Press, San Diego (1980)
155. Renardy, M., Rogers, R.C.: An Introduction to Partial Differential Equations, 2nd edn. Springer, New York (2004)
156. Richtmyer, R.D., Morton, K.W.: Difference Methods for Initial-Value Problems. Krieger, Melbourne (1994)
157. Rudin, W.: Real and Complex Analysis, 3rd edn. McGraw-Hill, New York (1986)
158. Rudin, W.: Functional Analysis, 2nd edn. International Series in Pure and Applied Mathematics. McGraw–Hill, New York (1991)
159. Samarskii, A.A.: The Theory of Difference Schemes. Nauka, Moscow (1983). (Russian); English edn.: Monographs and Textbooks in Pure and Applied Mathematics, vol. 240. Dekker, New York (2001)
160. Samarskii, A.A., Lazarov, R.D., Makarov, L.: Finite Difference Schemes for Differential Equations with Weak Solutions. Visshaya Shkola Publ., Moscow (1987). (Russian)
161. Samarskii, A.A., Iovanovich, B.S., Matus, P.P., Shcheglik, V.S.: Finite-difference schemes on adaptive time grids for parabolic equations with generalized solutions. Differ. Equ. 33, 981–990 (1997)
162. Schmeisser, H.J., Triebel, H.: Topics in Fourier Analysis and Function Spaces. Wiley, Chichester (1987)
163. Schwartz, L.: Théorie des distributions I, II. Herman, Paris (1950/1951)
164. Scott, J.A., Seward, W.L.: Finite difference methods for parabolic problems with nonsmooth initial data. Technical Report 86/22, Oxford University Computing Laboratory, Numerical Analysis Group, Oxford (1987)
165. Seward, W.L., Kasibhatla, P.S., Fairweather, G.: On the numerical solution of a model air pollution problem with non-smooth initial data. Commun. Appl. Numer. Methods 6, 145–156 (1990)
166. Shreve, D.C.: Interior estimates in $L^p$ for elliptic difference operators. SIAM J. Numer. Anal. 10, 69–80 (1973)
167. Stein, E.M.: Singular Integrals and Differentiability of Functions. Princeton Univ. Press, Princeton (1970)
168. Stein, E.M., Weiss, G.L.: Introduction to Harmonic Analysis on Euclidean Spaces. Princeton Mathematical Series. Princeton Univ. Press, Princeton (1971)
169. Strang, G., Fix, G.: An Analysis of the Finite Element Method. Prentice Hall, New York (1973)
170. Strikwerda, J.: Finite Difference Schemes and Partial Differential Equations. SIAM, Philadelphia (2004)
171. Süli, E.: Convergence of finite volume schemes for Poisson’s equation on non-uniform meshes. SIAM J. Numer. Anal. 28, 1419–1430 (1991)
172. Süli, E., Mayers, D.F.: An Introduction to Numerical Analysis, 2nd edn. Cambridge University Press, Cambridge (2006)
173. Süli, E., Jovanović, B.S., Ivanović, L.D.: Finite difference approximations of generalized solutions. Math. Comput. 45, 319–327 (1985)
174. Süli, E., Jovanović, B.S., Ivanović, L.D.: On the construction of finite difference schemes approximating generalized solutions. Publ. Inst. Math. 37(51), 123–128 (1985)
175. Thomée, V.: Discrete interior Schauder estimates for elliptic difference operators. SIAM J. Numer. Anal. 5, 626–645 (1968)
176. Thomée, V.: Negative norm estimates and superconvergence in Galerkin methods for parabolic problems. Math. Comput. 34, 93–113 (1980)
177. Thomée, V.: Galerkin Finite Element Methods for Parabolic Problems, 2nd edn. Springer Series in Computational Mathematics, vol. 25, Springer, Berlin (2006)
178. Thomée, V., Wahlbin, L.B.: Convergence rates of parabolic difference schemes for non-smooth data. Math. Comput. 28(125), 1–13 (1974)
179. Thomée, V., Westergren, B.: Elliptic difference equations and interior regularity. Numer. Math. 11, 196–210 (1968)
180. Tikhonov, A.N.: On functional equations of the Volterra type and their applications to some problems of mathematical physics. Byull. Mosc. Gos. Univ., Ser. Mat. Mekh. 1(8), 1–25
181. Triebel, H.: Fourier Analysis and Function Spaces. Teubner, Leipzig (1977)
182. Triebel, H.: Interpolation Theory, Function Spaces, Differential Operators. Deutscher Verlag der Wissenschaften, Berlin (1978)
183. Triebel, H.: Theory of Function Spaces. Monographs in Mathematics, vol. 78. Birkhäuser, Basel (1983)
184. Vladimirov, V.S.: Equations of Mathematical Physics, 2nd English edn. Monographs and Textbooks in Pure and Applied Mathematics, vol. 3. Dekker, New York (1971). 2nd English edn.: Mir, Moscow (1983)
185. Vladimirov, V.S.: Generalized Functions in Mathematical Physics. Mir, Moscow (1979). (English edn.)
186. Voitsekhovskii, S.A., Gavrilyuk, I.P.: Convergence of finite-difference solutions to generalized solutions of the first boundary-value problem for a fourth-order quasilinear equation in regions of arbitrary shape. Differ. Equ. 21, 1081–1088 (1985)
187. Voitsekhovskii, S.A., Kalinin, V.M.: Limit of the rate of convergence of difference schemes for the first boundary-value problem of elasticity theory in the anisotropic case. USSR Comput. Math. Phys. 29(4), 87–91 (1989)
188. Voitsekhovskii, S.A., Novichenko, V.N.: Justification of a difference scheme of an increased order of accuracy for the Dirichlet problem for the Poisson equation in classes of generalized solutions. Differ. Uravnen. 24, 1631–1633 (1988). (Russian)
189. Voitsekhovskii, S.A., Makarov, V.L., Shablii, T.G.: The convergence of difference solutions to the generalized solutions of the Dirichlet problem for the Helmholtz equation in a convex polygon. USSR Comput. Math. Math. Phys. 25(5), 36–43 (1985)
190. Voitsekhovskii, S.A., Gavrilyuk, I.P., Sazhenyuk, V.S.: Estimates of the rate of convergence of difference schemes for variational elliptic second-order inequalities in an arbitrary domain. USSR Comput. Math. Math. Phys. 26(3), 113–120 (1986)
191. Voitsekhovskii, S.A., Gavrilyuk, I.P., Makarov, V.L.: Convergence of finite-difference solutions to generalized solutions of a first boundary-value problem for a fourth-order elliptic operator in a domain of arbitrary form. Differ. Equ. 23(8), 962–966 (1987)
192. Voitsekhovskii, S.A., Makarov, V.L., Rybak, Yu.I.: Estimates of the convergence rate of difference approximations of the Dirichlet problem for the equation $-\Delta u + \sum_{|\alpha| \leq 1} (-1)^{|\alpha|} D^\alpha q_\alpha(x)u = f(x)$ when $q_\alpha(x) \in W^{\lambda, |\alpha|}_\infty(\Omega)$, $\lambda \in (0, 1]$. Differ. Equ. 24(11), 1338–1344 (1988)
193. Volkov, E.A.: On differential properties of solutions of boundary value problems for the Laplace and Poisson equations on a rectangle. Tr. Mat. Inst. Steklova 77, 89–112 (1965). (Russian)
194. Weinan, E., Ming, P.: Cauchy–Born rule and the stability of crystalline solids: static problems. Arch. Ration. Mech. Anal. 183(2), 241–297 (2007)
195. Weintelt, W.: Untersuchungen zur Konvergenzgeschwindigkeit bei Differenzenverfahren. Z. THK 20, 763–769 (1978)
196. Weintelt, W., Lazarov, R.D., Makarov, V.L.: Convergence of finite-difference schemes for elliptic equations with mixed derivatives having generalized solutions. Differ. Equ. 19, 838–843 (1983)
197. Weintelt, W., Lazarov, R.D., Streit, U.: Order of convergence of finite-difference schemes for weak solutions of the heat conduction equation in an anisotropic inhomogeneous medium. Differ. Equ. 20, 828–834 (1984)
198. Wheeler, M.F.: $L_\infty$ estimates of optimal orders for Galerkin methods for one dimensional second order parabolic and hyperbolic problems. SIAM J. Numer. Anal. 10, 908–913 (1973)
199. Wloka, J.: Partial Differential Equations. Cambridge Univ. Press, Cambridge (1987)
200. Zlamal, M.: Finite element methods for parabolic equations. Math. Comput. 28, 393–404 (1974)
201. Zlotnik, A.A.: Convergence rate estimate in $L_2$ of projection-difference schemes for parabolic equations. USSR Comput. Math. Math. Phys. 18(6), 92–104 (1978)
202. Zlotnik, A.A.: Estimation of the rate of convergence in $V_2(Q_T)$ of projection-difference schemes for parabolic equations. Mosc. Univ. Comput. Math. Cybern. **1**, 28–38 (1980). 1980
203. Zlotnik, A.A.: Some finite-element and finite-difference methods for solving mathematical physics problems with non-smooth data in an $n$-dimensional cube, Part I. Sov. J. Numer. Anal. Math. Model. **6**(5), 421–451 (1991)
204. Zlotnik, A.A.: Convergence rate estimates of finite-element methods for second-order hyperbolic equations. In: Marchuk, G.I. (ed.) Numerical Methods and Applications, pp. 155–220. CRC Press, Boca Raton (1994)
205. Zlotnik, A.A.: On superconvergence of a gradient for finite element methods for an elliptic equation with the nonsmooth right-hand side. Comput. Methods Appl. Math. **2**, 295–321 (2002)
206. Zygmund, A.: Trigonometric Series, 2nd edn. Cambridge University Press, Cambridge (1988), vols. 1 and 2 combined
## Index
**Symbols**
$\theta$-scheme, 267
**A**
A priori estimate, 248
Atlas, 49
**B**
Bijection, 6
Bilinear
- form, 17
- functional, 17, 95
- polynomial, 164
Bochner integral, 252
Boundary, 44
Boundary condition, 44, 92
- Dirichlet, 44, 93
- Neumann, 93
- nonlocal Robin–Dirichlet, 306, 379
- oblique derivative, 93
- Robin, 93
Boundary-value problem, 17, 92
- variational formulation, 17
**C**
Cartesian product, 3
Category, 19, 20
- composition of morphisms, 20
- morphism, 20
- objects, 19
Cauchy
- Cauchy sequence, 4
- Cauchy–Schwarz inequality, 3
Cell
- cell-average, 132
- elementary cell, 317
Closure, 2
Co-normal derivative, 93
Compactness, 5
Compatibility
- with the smoothness of data, 202
Compatibility conditions, 101
Complement, 2
Completeness, 5
Concentrated singularity, 230
Consistency, 122
Continuous
- function, 3
- functional, 7
- Hölder-continuous, 23
- Lipschitz-continuous, 23
- operator, 6
Convergence, 4
- convergence analysis, 144
- convergence by concentration, 30
- convergence by oscillation, 30
- convergence rate, 143
- convergence rate estimate
- optimal, 142
- sharp, 104
- pointwise convergence, 30
- weak convergence, 9
Convolution, 34
Crank–Nicolson scheme, 267, 272
Cube, 83
**D**
Derivative, 31
- distributional, 31
- partial, 50
Differentiation, 31
Dirac distribution, 27
- discrete Dirac delta-function, 233
Discretization, 106
- discretization parameter, 106
Dissipation, 258
- rate of dissipation, 258
Distribution, 26
- Banach-space-valued, 54
- Dirac, 27
- order of, 27
- periodic, 85
- regular, 29
- singular, 29
- tempered, 36
Divided difference, 36
- backward, 108
- central, 108, 116
- forward, 108
- second, 116
Domain, 6
- Lipschitz, 45
- of an operator, 6
- with finite width, 48
Duality, 49
- argument, 211, 218
- pairing, 49
**E**
Ellipticity, 91
- uniform, 91
Embedding, 7
Energy, 176
- energy norm, 248
- energy space, 248
Equation
- elliptic equation, 94
- evolution equation, 245
- generalized Poisson equation, 126
- heat equation, 263
- hyperbolic equation, 327
- parabolic equation, 245
- partial differential equation, 1
- Poisson equation, 153
- wave equation, 337
Error
- error analysis, 9
- error bound, 13
- global error, 119
- truncation error, 119
Essential supremum, 24
Euler scheme
- explicit, 267, 272
- implicit, 267, 272
Extension
- extension of a distribution, 28
- extension of a function, 182
- extension of a functional, 8
- odd extension, 186
- periodic extension, 182
**F**
Finite difference operators, 108
- backward difference, 108
- central difference, 108
- forward difference, 108
Finite difference scheme, 105
- consistent, 121
- explicit, 264
- factorized, 296, 303, 369, 377
- five-point scheme, 128
- implicit, 264
- operator-difference scheme, 258, 333
- seven-point scheme, 204
- three-level scheme, 333
- two-level scheme, 258
Finite element method, 154
- Petrov–Galerkin method, 154
Finite volume method, 134
Formula
- Fourier inversion formula, 40
- discrete Fourier inversion formula, 177
- Leibniz’s formula, 22
- Newton–Leibniz formula, 139
Fourier series, 84
- Fourier series expansion, 182
Fourier transform, 40
- discrete Fourier transform, 176
- inverse discrete Fourier transform, 177
- inverse Fourier transform, 40
Fourier–Laplace transform, 42
Function
- characteristic function, 70
- entire function, 42
- function space, 21
- Heaviside function, 29
- holomorphic function, 42
- integrable function, 1
- Lebesgue-integrable function, 1
- locally integrable function, 25
- mesh-function, 106
- rapidly decreasing function, 37
- test function, 23
Functional, 7
- bilinear, 17
- bounded, 17
- coercive, 17
- continuous, 7
- real-valued, 17
- sublinear, 9
Functor, 20
**G**
Gelfand triple, 246, 248
Grid, 105
**H**
Hyperbolic
- hyperbolic equation, 327
- hyperbolic problem, 327
Hyperplane, 46
**I**
Identity
- parallelogram identity, 3
- Parseval identity, 43, 109
Image, 6
- inverse image, 6
Inequality
- Bernstein’s inequality, 179
- Carlson–Beurling inequality, 68
- Cauchy–Schwarz inequality, 3
- first fundamental inequality, 211
- Friedrichs inequality, 48
- discrete Friedrichs inequality, 110, 269
- Gårding’s inequality, 246
- Hadamard inequality, 249
- Hölder’s inequality, 179
- inverse inequality, 164, 210, 216
- second fundamental inequality, 210
- triangle inequality, 2
- Young’s inequality, 35
Initial condition, 245
Initial-boundary-value problem, 245
Injection, 6
Inner product, 3
Interface, 228
- interface problem, 228
Interpolant, 161
- trigonometric interpolant, 178
Interpolation, 19
- complex method of interpolation, 387
- interpolation functor, 20
- interpolation of Banach spaces, 19
- interpolation pair, 19
- interpolation space, 20
- $K$-method of interpolation, 19
Isometry, 9
**K**
Kernel, 10
Kronecker delta, 51
**L**
Laplace operator, 125
- discrete, 108
Laplacian, 170
- discrete Laplacian, 170
Lebesgue
- differentiation theorem, 89
- integrable function, 24
- measurable function, 24
- point, 89
Lemma
- Bramble–Hilbert lemma, 144, 148–150
- du Bois-Reymond’s lemma, 25
Limit, 4
- weak limit, 9
Linear
- linear combination, 14
- linear functional, 1
- linear operator, 6
- linear space, 2
- linear subset, 6
**M**
Matrix, 117
- banded matrix, 129
- matrix notation, 117
- sparse matrix, 129
- tridiagonal, 117
Measure, 24
- Borel measure, 38
- Lebesgue measure, 24
Mesh, 105
- boundary mesh-points, 105
- interior mesh-points, 105
- mesh-dependent norm, 104
- mesh-function, 106
- nonuniform mesh, 153
- quasi-uniform mesh, 159
- uniform mesh, 71, 106
Midpoint rule, 133, 145, 146, 148
Mollifier, 66
- family of order $(\mu, \nu)$, 72
- order of smoothing, 73
- order of approximation, 73
- Steklov mollifier, 70, 287, 310, 359
Morphism, 20
Multi-index, 21
- length of, 22
Multiplier, 58, 66
- discrete Fourier multiplier, 176, 180
- Fourier multiplier, 66
**N**
Neighbourhood, 2
Norm
- discrete, 104
- discrete Bessel-potential norm, 196
- discrete Sobolev norm, 110
- dual norm, 8
- energy norm, 248
- mesh-dependent norm, 104
- Sobolev norm, 43
**O**
Object, 20
Operator, 6
- adjoint operator, 16
- averaging operator, 156
- bounded operator, 6
- compact operator, 7
- continuous operator, 6
- densely defined, 16
- differential operator, 61
- principal part, 91
- domain of an operator, 6
- elliptic operator, 91
- uniformly elliptic operator, 91
- embedding operator, 7
- extension operator, 46
- identity operator, 7
- inverse operator, 6
- linear operator, 6
- nonnegative, 111
- parabolic operator, 256
- uniformly parabolic operator, 256
- positive definite, 109, 248
- uniformly, 258
- range of an operator, 6
- rotated discrete Laplace, 171
- selfadjoint operator, 16
- smoothing operator, 42
- symmetric, 17
- unbounded, 16
- uniformly bounded family, 70
Order
- order of accuracy, 121
- order of approximation, 73
- order of convergence, 122
- order of mollification, 73, 77
Ordering, 22
- lexicographical ordering, 22
Orthogonal
- orthogonal co-ordinate system, 44
- orthogonal complement, 14
- orthogonal vectors, 3
**P**
Parabolic
- parabolic equation, 245
- parabolic problem, 246
Partial undivided difference, 88
Partition
- partition of unity, 49
Piecewise linear basis, 169
Poisson’s equation, 94
- generalized Poisson’s equation, 126
Polynomial, 38
- bilinear polynomial, 164
- polynomial growth, 38
- trigonometric polynomial, 84
**R**
Reflection, 32
Riemann sum, 115
**S**
Schwartz class, 37
Seminorm, 2
Sequence, 4
- Cauchy sequence, 4
- of at most polynomial growth, 85
Set
- bounded set, 2
- closed set, 2
- compact set, 5
- convex set, 3
- countable set, 2
- dense set, 2
- open set, 2
- relatively compact set, 5
Sobolev, 43
- discrete Sobolev norm, 110
- Sobolev embedding theorem, 46
- Sobolev norm, 43
- Sobolev seminorm, 44
- Sobolev space, 43
- anisotropic Sobolev space, 51
- fractional-order Sobolev space, 47
Solution
- classical solution, 94
- strong solution, 95
- weak solution, 96
Space
- Banach space, 5
- Besov space, 56
- Bessel-potential space, 77
- complete space, 5
- dual space, 8
- energy space, 248, 329
- Euclidean space, 3
- function space, 21
- Hilbert space, 5
- inner product space, 3
- Lebesgue space, 44
- locally compact space, 6
- multiplier space, 58
- normed linear space, 2
- reflexive space, 9
- rigged Hilbert space, 246
- separable space, 2
- Sobolev space, 43
- space of continuous functions, 3
- subspace, 8
Spline, 76
- B-spline, 76
Stability, 122
- conditional, 270
- unconditional, 269
Stencil, 128
Superadditivity, 152
Superconvergence, 159
Support, 23
- compact support, 23
- support of a continuous function, 23
- support of a distribution, 27
Surjection, 6
**T**
Taylor series, 76
- expansion, 116
Tensor product, 32
Theorem
- discrete Marcinkiewicz multiplier theorem, 180
- discrete Sobolev embedding, 110
- divergence theorem, 154
- extension theorem, 46
- Fubini’s theorem, 33
- Hahn–Banach theorem, 8
- Hellinger–Toeplitz theorem, 17
- Jordan–Brouwer theorem, 45
- Lax–Milgram theorem, 18
- Lebesgue’s differentiation theorem, 25, 89
- Lizorkin’s multiplier theorem, 69
- Marcinkiewicz multiplier theorem, 89
- Paley–Wiener theorem, 43
- Plancherel’s theorem, 43
- Rademacher’s theorem, 45
- Rellich–Kondrashov theorem, 47
- Riesz representation theorem, 15
- Sobolev embedding theorem, 46
- trace theorem, 50
Thomas algorithm, 267
Torus, 83
Total variation, 88
Trace, 47
- trace operator, 50
- trace theorem, 50
Translation, 32
Transmission
- transmission conditions, 230, 298
- transmission problem, 228
**V**
Variation, 67
- total variation, 67
**W**
Wave-number, 66
Well-posedness, 136
- in the sense of Hadamard, 104
## List of Symbols
### General and Miscellaneous
| Symbol | Page |
|--------|------|
| $\mathbb{R}$ | 2 |
| $\mathbb{R}_+$ | 2, 51 |
| $\mathbb{C}$ | 2 |
| $A + B$ | 2, 19 |
| $\mathcal{U} \times \mathcal{V}$ | 3 |
| $D(A)$ | 6 |
| $R(A)$ | 6 |
| $A^{-1}$ | 6 |
| $\hookrightarrow$ | 7 |
| $\hookleftarrow \hookrightarrow$ | 7 |
| $\text{Ker}(\cdot)$ | 10 |
| $\mathcal{S}^\perp$ | 14 |
| $\mathcal{S} \oplus \mathcal{R}$ | 14 |
| $\Re z$ | 14 |
| $A^*$ | 16 |
| $\mathbb{N}$ | 21 |
| $|\alpha|$, $\alpha \in \mathbb{N}^n$ | 21 |
| $\alpha!$, $\alpha \in \mathbb{N}^n$ | 22 |
| $\partial^\alpha$, $\alpha \in \mathbb{N}^n$ | 22 |
| $x^\alpha$, $\alpha \in \mathbb{N}^n$ | 22 |
| $\mathbb{Z}$ | 22 |
| $\binom{\alpha}{\beta}$, $\alpha, \beta \in \mathbb{N}^n$ | 22 |
| $\text{supp } u$ | 23 |
| $\langle u, \varphi \rangle$ | 26 |
| $\delta$ | 27 |
| $H(x)$ | 29 |
| $\tau_\alpha u$ | 32 |
| $u_-$ | 32 |
| $u \times v$ | 33 |
| $u * v$ | 34 |
| $F \varphi$ | 40 |
| $F^{-1} \varphi$ | 40 |
| $\Im z$ | 43 |
| $[\alpha]$ | 51 |
| $|\alpha|$ | 51 |
| $\delta_{ij}$ | 51 |
| $\Delta_h u$ | 52 |
| $B^{(0)}_h$ | 70 |
| $B^{(1)}_h$ | 71 |
| $B^{(\ell)}_{h,\alpha}$ | 72 |
| $\mathbb{T}^n$ | 83 |
| $\hat{u}$ | 84 |
| $a^\vee$ | 84 |
| $\Delta_j a(k)$ | 88 |
| $\text{Var}(a)$ | 88 |
| $\mathcal{P}_m$ | 145 |
| $\mathcal{P}_B$ | 149 |
| $\mathbb{I}$ | 176 |
| $\mathcal{F} V$ | 176 |
| $\mathcal{F}^{-1} a$ | 177 |
| $T_V$ | 177 |
| $\text{var}(a)$ | 179 |
| $\mathcal{F}_\sigma V$ | 187 |
| $\mathcal{F}_\sigma^{-1} V$ | 187 |
| $I_{s,h} V$ | 196 |
| $\Omega_{h,i}$ | 219 |
| $\delta_\Sigma$ | 228 |
| $\Omega^\pm$ | 230 |
| $[u]_\Sigma$ | 230 |
| $\delta_{\Sigma h}$ | 233 |
| $\|u\|_A$ | 248 |
| $(\cdot, \cdot)_A$ | 248 |
| $Q^-$ | 298 |
| $\Gamma_i$ | 307 |
| $A^*_h$ | 358 |
### Function Spaces and Spaces of Distributions
\((A_1, A_2)_{\theta, q}\), 21
\(C^k(\Omega)\), 22
\(C(\Omega)\), 22
\(BC(\Omega)\), 22
\(C^k(\overline{\Omega})\), 22
\(C^{k,\lambda}(\overline{\Omega})\), 23
\(C^k_0(\Omega)\), 23
\(C^\infty_0(\Omega)\), 23
\(L_p(\Omega)\), 24
\(L_{1,loc}(\Omega)\), 25
\(D(\Omega)\), 26
\(D'(\Omega)\), 26
\(E(\Omega)\), 28
\(E'(\Omega)\), 28
\(S(\mathbb{R}^n)\), 37
\(S'(\mathbb{R}^n)\), 37
\(W^s_p(\Omega)\), 43, 47, 48
\(\dot{W}^s_p(\Omega)\), 48
\(W^A_p(\Omega)\), 52
\(L_p((c, d); U)\), 53
\(C^k((c, d); U)\), 54
\(C^k([c, d]; U)\), 54
\(W^r_p((c, d); U)\), 55
\(W^{s,r}_p(Q)\), 55
\(W^{s,s/2}_p(Q)\), 56
\(\dot{W}^{s,s/2}_2(Q)\), 56
\(B^s_{p,q}(\Omega)\), 56
\(M(V \to W)\), 58
\(W^s_{p,unif}\), 59
\(B^s_{q,p,unif}\), 61
\(M_p(\mathbb{R}^n)\), 67
\(M_p(\Omega)\), 69
\(H^s_p(\mathbb{R}^n)\), 77
\(D(\mathbb{T}^n)\), 83
\(D'(\mathbb{T}^n)\), 83
\(L_p(\mathbb{T}^n)\), 83
\(W^m_p(\mathbb{T}^n)\), 87
\(H^s_p(\mathbb{T}^n)\), 87
\(m_p(\mathbb{T}^n)\), 88
\(\dot{W}^s_2(\Omega)\), 230, 233
\(\dot{L}_2(\Omega)\), 233
\(W(0, T)\), 247, 308
\(\mathcal{H}_A\), 248
\(\dot{W}^{s,s/2}_2(Q)\), 298
\(L\), 306
\(W^k_2\), 307
\(\dot{W}^1_2\), 307
\(W^{-1,1/2}_2\), 309
\(\dot{W}^{-1/2}_2(0, T)\), 309
\(\mathfrak{W}^1_2(0, 1)\), 356
### Mollifiers
\(T_h f\), 69
\(T^{h,v}_h f\), 74
\(T^v_h f\), 76, 185
\(T^{11}_h f\), 133, 195
\(T^1_h f\), 147
\(T^{01}_h f\), 160
\(T^{-10}_h f\), 160, 161
\(T_i f\), 204
\(T^\pm_i f\), 205
\(S_i f\), 217
\(T^{2\pm}_i f\), 227, 233, 234
\(T^2_i f\), 276
\(T^\pm_i f\), 277
\(T_{x,i} f_i\), 310
\(T^\pm_{x,i} f_i\), 310
\(T_{y,i} f_i\), 310
\(T^\pm_{y,i} f_i\), 310
\(T_i f_i\), 310
\(T^\pm_i f_i\), 310
\(T_{x,i}^{2\pm} f_i\), 310
\(T_x f\), 347
\(T_i f\), 347
\(T^\pm_i f\), 347
### Differential Operators
\(\nabla u\), 63
\(P(x, \partial)u\), 91
\(P_0(x, \partial)u\), 91
\(M_j(u)\), 92
\(\partial_v u\), 93
\(\Delta u\), 94
\(\mathcal{L}u\), 105, 202, 220, 358
### Meshes

| Symbol | Page |
|--------|------|
| $\Omega^\mu$ | 105, 123, 184 |
| $\overline{\Omega}^\mu$ | 105, 123, 184 |
| $\Gamma^h$ | 105, 123, 184 |
| $\Omega^h_\mu$ | 107, 185 |
| $\Gamma^h_{ij}$ | 123 |
| $\overline{\Gamma}^h_{ij}$ | 123 |
| $\Gamma^h_*$ | 123 |
| $\Omega^h_i$ | 123 |
| $\Omega^h_{\Gamma}$ | 123 |
| $\Omega^h_{\chi}$ | 130, 185 |
| $\Omega^h_{\gamma}$ | 130, 185 |
| $\mathbb{R}^n_h$ | 176 |
| $\Gamma^h_{\gamma}$ | 184 |
| $\Gamma^h_{\chi}$ | 184 |
| $\Gamma^h_{\pm}$ | 185 |
| $\overline{\Omega}^{\tau}$ | 258 |
| $\Omega^{\tau}$ | 258 |
| $\Omega^{\pm}_{\tau}$ | 258 |
| $\overline{Q}^{\tau}_h$ | 264, 287 |
| $t_{m+\theta}$ | 267 |
| $Q^{\tau}_h$ | 287 |
| $Q^{\tau \pm}_h$ | 287 |
| $\overline{\Omega}^{\pm}_h$ | 310 |
| $\Omega^{\pm}_h$ | 310 |
| $\Omega^{\pm}_{i+1}$ | 310 |
| $\overline{\Omega}^{k_j}_i$ | 310 |
| $\Omega^{k_j}_i$ | 310 |
| $\Omega^{\pm}_{i \pm}$ | 310 |
### Mesh-Dependent Norms
| Symbol | Page |
|--------|------|
| $\|U\|_h$ | 107, 123 |
| $\|U\|_{L_2(\Omega^h)}$ | 107, 123, 147 |
| $\|U\|_h$ | 107, 124 |
| $\|U\|_{L_2(\overline{\Omega}^h)}$ | 107 |
| $\|U\|_h$ | 107 |
| $\|U\|_h$ | 107 |
| $\|U\|_{L_2(\Omega^h_\gamma)}$ | 107, 147 |
| $|U|_{k,h}$ | 110 |
| $\|U\|_{k,h}$ | 110 |
| $\|U\|_{W^1_2(\Omega^h)}$ | 110, 124, 204 |
| $\|U\|_{\infty,h}$ | 110, 163 |
| $\|U\|_{k,h}$ | 112 |
| $\|U\|_{W^1_2(\overline{\Omega}^h)}$ | 112 |
| $\|U\|_{W^1_2(\Omega^h)}$ | 112, 126 |
| $B_T(U)$ | 113 |
| $N_r(U)$ | 114 |
| $A_T(U)$ | 115 |
| $\|U\|_{W^1_2(\overline{\Omega}^h)}$ | 115 |
| $\|U\|_{L_2(\Omega^h)}$ | 123 |
| $\|U\|_{L_2(\Omega^h_\gamma)}$ | 123 |
| $\|U\|_{L_2(\Omega^h_\gamma)}$ | 130 |
| $\|U\|_{L_2(\Omega^h_\gamma)}$ | 130 |
| $\|U\|_{W^{-1}_2(\Omega^h)}$ | 136 |
| $\|U\|_{L_p(\omega^h)}$ | 177 |
| $\|U\|_{L_p(\Omega^h)}$ | 185, 210 |
| $\|U\|_{L_p(\Omega^h_\gamma)}$ | 186 |
| $\|U\|_{W^1_p(\Omega^h)}$ | 186, 210 |
| $\|U\|_{H^1_p(\Omega^h)}$ | 196 |
| $\|U\|_{L_2(\Sigma^h)}$ | 234 |
| $|U|_{W^{1/2}_2(\Sigma^h)}$ | 234 |
| $\|U\|_{L_2(Q^h_\gamma)}$ | 278, 288 |
| $\|U\|_{W^{1/2}_2(Q^h_\gamma)}$ | 278, 288 |
| $\|U\|_{W^{1,1/2}_2(Q^h_\gamma)}$ | 278, 288 |
| $\|U\|_{W^{2,1}_2(Q^h_\gamma)}$ | 278 |
| $\|U\|_{H^1}$ | 288 |
| $\|U\|_{l,h\tau}$ | 288 |
| $\|U\|_{L_2(\Sigma^h \times \Omega^{\tau})}$ | 299 |
| $\|U\|_{L_2(\Omega^{\tau}; W^{1/2}_2(\Sigma^h))}$ | 299 |
| $\|U\|_{W^{1/2}_2(\Omega^{\tau}; L_2(\Omega^h))}$ | 299 |
| $\|U\|_{W^{1/2}_2(\Omega^{\tau}; L_2(\Sigma^h))}$ | 299 |
| $\|U\|_{\dot{W}^{1/2}_2(\Omega^{\tau}; L_2(\Omega^h))}$ | 299 |
| $\|U\|_{\dot{W}^{1/2}_2(\Omega^{\tau}; L_2(\Sigma^h))}$ | 299 |
### Difference Operators
| Symbol | Page |
|--------|------|
| $\mathcal{L}_h U$ | 106, 204, 221, 359 |
| $D^\pm_i U$ | 108, 128, 147 |
| $D^\pm_{x_i} U$ | 128 |
| $D^0_x U$ | 108 |
| $AU$ | 108, 125 |
| $\overline{A}U$ | 111 |
| $D^\pm_{x_i} U$ | 123 |
| $D^0_{x_i} U$ | 123 |
| $U^{\pm i}$ | 124 |
| $\overline{A}_i U$ | 125 |
| $\Delta_h U$ | 125 |
| $m_i(U)$ | 221, 222 |
| $D^\pm_i U$ | 258, 310 |
| $\hat{U}, \check{U}$ | 258 |
| $U_{m+\theta}$ | 268 |
| $D^\pm_{x_i} U_i$ | 310 |
| $\Lambda_{ij} U_i$ | 321 |
### Spaces of Mesh Functions
| Symbol | Page |
|--------|------|
| $S^h$ | 107 |
| $S^h_0$ | 107 |
| $L_p(\omega^h)$ | 177 |
| $L_p(\Omega^h)$ | 185 |
| $L_p(\Omega^h_+)$ | 186 |
| Symbol | Page |
|--------|------|
| $\|U\|_{\tilde{W}^{1,1/2}_2(\Omega^h_k)}$ | 299 |
| $\|U\|_{L_h}$ | 311 |
| $\|U\|_{L_h^\tau}$ | 311 |
| $\|U\|_{L_h^{\tau'}}$ | 311 |
| $\|U\|_{L_2(\Omega^\pm_h)}$ | 311 |
| $\|U\|_{L_2(\Omega^h)}$ | 311 |
| $\|U\|_{L_2(\Omega^\pm_h;H)}$ | 311 |
| $\|U\|_{\tilde{L}_2(\Omega^\tau,L_h)}$ | 311 |
| $|U|_{W^{1/2}_2(\overline{\Omega}^\tau,L_h)}$ | 311 |
| $\|U\|_{\tilde{W}^{1/2}_2(\overline{\Omega}^\tau,L_h)}$ | 311 |
| $\|U\|_{W^{1,1/2}_2}$ | 311 |
| $\|U\|_{(0)_{2,\infty,h\tau}}$ | 348 |
| $\|U\|_{(i)_{2,\infty,h\tau}}$ | 348, 361, 383 |
| $\|U\|_{2,\infty,h\tau}$ | 372 |
| $\|U\|_{L_2(\Omega^h \setminus \Sigma^h)}$ | 373 |
| $\|U\|_{W^{1}_{2,h}}$ | 383 |
The influence of (central) auditory processing disorder in speech sound disorders
Tatiane Faria Barrozo\textsuperscript{a}, Luciana de Oliveira Pagan-Neves\textsuperscript{b}, Nadia Vilela\textsuperscript{a}, Renata Mota Mamede Carvallo\textsuperscript{b}, Haydée Fiszbein Wertzner\textsuperscript{b,*}
\textsuperscript{a} Sciences of Rehabilitation Program, Faculdade de Medicina, Universidade de São Paulo (USP), São Paulo, SP, Brazil
\textsuperscript{b} Speech-Language Pathology and Audiology Program, Department of Physical Therapy, Speech-Language and Audiology and Occupational Therapy, Faculdade de Medicina, Universidade de São Paulo (USP), São Paulo, SP, Brazil
Received 24 April 2014; accepted 28 January 2015
Available online 20 October 2015
KEYWORDS
Articulation disorders;
Auditory perception;
Speech perception;
Evaluation
Abstract
Introduction: Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders.
Objective: To study phonological measures and (central) auditory processing of children with speech sound disorder.
Methods: Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7 years, 0 months and 9 years, 11 months, divided into two groups according to the presence or absence of a (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities.
Results: The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age.
Conclusion: The comparison among the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective to indicate the need for evaluation in children with speech sound disorder.
© 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
\textsuperscript{*} Please cite this article as: Barrozo TF, Pagan-Neves LO, Vilela N, Carvallo RMM, Wertzner HF. The influence of (central) auditory processing disorder in speech sound disorders. Braz J Otorhinolaryngol. 2016;82:56–64.
* Corresponding author.
E-mail: email@example.com (H.F. Wertzner).
http://dx.doi.org/10.1016/j.bjorl.2015.01.008
1808-8694/© 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Introduction
Several aspects have been explored in studies of children with speech sound disorder (SSD), a speech disorder characterized by inadequate use of the phonological rules of the language (DSM-IV F80.0 – 315.39). The dynamic models that attempt to explain the development of speech production indicate an interaction between auditory perception, sound production, and sound representation.\textsuperscript{1,2} Thus, a detailed observation of the performance of children with SSD regarding central auditory processing skills can contribute a great deal to the understanding of speech and language manifestations. The central question of this study was whether children with SSD who were diagnosed late (between 7 years and 9 years, 11 months of age) also present (central) auditory processing disorders.
An impairment in the phonological system is the main feature of SSD and may stem from specific difficulties related to cognitive-linguistic processing (organization of phonological rules), auditory processing, and/or speech production. The interrelationship among these three processings has been the subject of several studies\textsuperscript{3–5} that sought to improve the understanding of the manifestations observed in children with SSD.
Regardless of the SSD classification system used, the literature points to the existence of subtypes,\textsuperscript{6,7} demonstrating a variety of difficulties that may exhibit different manifestations and varying degrees of expression. Such manifestations can be identified by various tests complementary to phonological tests; for instance, speech inconsistency, metalinguistic skills, and those involving auditory organization, assessed in (central) auditory processing (CAP) tests.
The classification of SSD severity is a complex task, because the clinician must consider the phonological changes, speech intelligibility, and age of the child, among other factors. Some severity index classifications can be found in the literature, such as the Percentage of Consonants Correct (PCC),\textsuperscript{8} its revised version, the PCC-R,\textsuperscript{9} and the Process Density Index (PDI).\textsuperscript{10}
Both PCC and PCC-R are intended to indicate the percentage of correct consonants in a conventional speech sample. The main difference between these tools is the fact that PCC considers substitutions, omissions, and distortions as speech errors, while PCC-R considers only consonant substitutions and omissions as errors.
PDI verifies the occurrence of phonological processes, and is distinct from PCC and PCC-R, inasmuch as these two latter indexes account for the correct consonants of speech samples. PDI is inversely proportional to PCC and to PCC-R, in that the lower the value of PCC (or of PCC-R), the higher the value of PDI; that is, the lower the percentage of correct consonants employed in speech, the higher the frequency of use of phonological processes.\textsuperscript{11}
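For illustration, the three index definitions above can be sketched in code. This is a minimal sketch under the formulations described here (PCC counts substitutions, omissions, and distortions as errors; PCC-R scores distortions as correct; PDI divides the total occurrences of phonological processes by the number of words sampled); the function names and example counts are hypothetical, not taken from the study data.

```python
def pcc(correct, substitutions, omissions, distortions):
    # PCC: substitutions, omissions, and distortions all count as errors
    total = correct + substitutions + omissions + distortions
    return 100.0 * correct / total

def pcc_r(correct, substitutions, omissions, distortions):
    # PCC-R: distorted consonants are scored as correct; only
    # substitutions and omissions remain as errors
    total = correct + substitutions + omissions + distortions
    return 100.0 * (correct + distortions) / total

def pdi(process_occurrences, n_words):
    # PDI: mean number of phonological process applications per word
    return process_occurrences / n_words
```

For a hypothetical sample of 90 target consonants with 70 correct, 10 substituted, 5 omitted, and 5 distorted, PCC is about 77.8 while PCC-R rises to about 83.3; a child applying 27 process occurrences over a 90-word sample has a PDI of 0.30, illustrating the inverse relation described above.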
PCC-R and PDI have been applied to Brazilian Portuguese (BP)-speaking children with SSD in the same region of the country where the children of this study live. Studies show that these are efficient indexes for classifying SSD severity.\textsuperscript{12–14}
Intelligible speech depends on efficient phonological programming, which reflects the individual’s ability to select the target phoneme and organize the sounds in the correct sequence.\textsuperscript{15} Difficulty in phonological programming can be evaluated by the speech inconsistency test,\textsuperscript{2,15} which indicates a possible deficit in cognitive-linguistic processing that interferes with the internalization of phonological rules of the language the child is exposed to.
An important feature of school-age children with SSD is that they usually show difficulties related to phonological processing; therefore, analyzing the awareness of smaller units that make up speech is an important step in evaluating these children. Several metalinguistic skills are assessed in school-age children, including rhyme and alliteration. The Phonological Sensitivity Test (PST)\textsuperscript{16} checks the phonological coding strategies of the child, through an evaluation of metalinguistic skills of rhyme and alliteration (equal and different). This test has two versions: auditory (with auditory support) and visual (with both auditory and visual support), which was designed to verify if children with SSD benefit from visual support.
To further complement the identification of difficulties faced by children with SSD and considering the interaction of auditory information received – in association with the acquisition and organization of phonological rules in this population – the evaluation of CAP brings significant contributions to the diagnosis of SSD and targeting for phonological intervention. This contribution occurs because (central) auditory processing disorder (CAPD), defined as a difficulty in processing sound information, may result in a difficulty in language development and in learning.\textsuperscript{17}
Although auditory difficulties are the primary complaints of children with CAPD, one study\textsuperscript{18} observed that other impairments can be identified, such as those related to language, to reading and writing, and also to learning difficulties.
Importantly, there are few studies that correlate SSD with CAP. This may be explained by the fact that SSD diagnosis is most often established in children aged 5–7 years, whereas CAP testing is performed only after 7 years of age, owing to the need for maturation of the structures involved.
Studies\textsuperscript{19,20} in children with other language impairments – for instance, specific language impairment and dyslexia – showed the importance of CAP assessment for complementing diagnosis, as children with these disorders may exhibit CAPD with an impairment in skills involving discrimination of speech sounds, and that may result in impaired and/or less stable neural representations of the sound, perhaps interfering with perception and speech production.
Several tests that assess auditory skills, already adapted to Brazilian Portuguese, are available. Some are particularly suitable for evaluating CAP in children with SSD, because they assess specific speech-understanding skills without their results being affected by the phonological changes presented by the child. Among these tests, the following are notable: Picture Identification with White Noise, which evaluates auditory closure; the Dichotic Digit Test, which evaluates figure-ground ability; and the Frequency Pattern and Duration Pattern Tests, which assess the hearing abilities of temporal ordering and interhemispheric transfer.
Faced with the diversity of correlated causes and of phonological manifestations found in children with SSD, it is critical to obtain more detailed descriptions on aspects that complement the phonological evaluation in these children.
The aim of this study was to investigate phonological measures and (central) auditory processing of children with speech sound disorder.
**Methods**
The study was approved by the Research Ethics Committee of the university where the study was conducted (No. 201/11). An informed consent was signed by the parent or guardian of each child.
Twenty-one subjects (both genders) diagnosed with SSD, with ages between 7.0 and 9.11 years, were included in this study. The diagnosis was established in a specialized laboratory linked to the university where the study was conducted.
According to CAP evaluation results, the subjects were allocated to the control group (CG), with 10 subjects without CAPD, or to the study group (SG), with 11 subjects with CAPD. As inclusion criteria, the child needed to present speech errors in the phonological test\textsuperscript{21} and age-appropriate performance in the vocabulary, fluency, and pragmatics areas of the ABFW Child Language Test\textsuperscript{22}; needed to have completed the (central) auditory processing (CAP) examination\textsuperscript{23}; needed to have hearing thresholds within the normal range; and could not have previously undergone speech therapy. In addition, the Speech Inconsistency Test (SIT)\textsuperscript{12} and the PST\textsuperscript{16} were applied. All participants were Brazilian Portuguese speakers.
The phonology evaluation of the ABFW\textsuperscript{21} consists of the picture naming task (N) that includes 34 figures with 90 correct consonants, and the imitation of words task (I) with 39 words totaling 107 correct consonants. The two phonology tests were transcribed twice by phonology researchers. The agreement between the transcripts was 90%. From the phonological tests, PCC,\textsuperscript{8} PCC-R,\textsuperscript{9} and PDI\textsuperscript{10} severity indexes, the number of different types of phonological processes, as well as the occurrence of each process, were calculated. The following phonological processes were analyzed: syllable reduction (SR); consonant harmony (CH); stopping (S); velar backing (VB); palatal backing (PB); velar fronting (VF); palatal fronting (PF); liquid simplification (LS); cluster reduction (CR); final consonant deletion (FCD); stop voicing (SV); fricative voicing (FV); stop devoicing (SD); and fricative devoicing (FD).
SIT\textsuperscript{12} consists of 25 pictures, each named three times in different sequences interspersed with distracting activities. The three namings of each word were analyzed: the subject was classified as consistent when he/she named the picture identically all three times, and as inconsistent when at least one of the namings differed. The speech inconsistency index\textsuperscript{12} represents the percentage of inconsistent words in the test and was analyzed according to the cutoff values established by the authors for gender and age. Subjects were considered inconsistent when their inconsistency index reached or exceeded the cutoff value: for girls aged 5.0–7.6 years, the cutoff is $\geq 21.5\%$, and above 7.6 years, $\geq 14.5\%$; for boys aged 5.0–7.6 years, it is $\geq 31.9\%$, and above 7.6 years, $\geq 17.6\%$.
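The cutoff classification above can be expressed as a small helper, shown here as a sketch: the function name is hypothetical, and the decision rule follows the $\geq$ cutoff values listed (an index at or above the gender- and age-specific cutoff is classified as inconsistent).

```python
def sit_classify(index_pct, gender, years, months):
    """Classify a Speech Inconsistency Test result.

    index_pct: percentage of words named inconsistently.
    gender: "F" or "M"; age given as (years, months).
    Cutoff values follow Castro & Wertzner (2011) as cited in the text.
    """
    older = (years, months) > (7, 6)  # above 7 years, 6 months
    cutoff = {("F", False): 21.5, ("F", True): 14.5,
              ("M", False): 31.9, ("M", True): 17.6}[(gender, older)]
    return "inconsistent" if index_pct >= cutoff else "consistent"
```

For example, an 8-year-old boy with 20% inconsistent words exceeds his 17.6% cutoff and is classified as inconsistent, while the same index in a 6-year-old boy (cutoff 31.9%) is classified as consistent.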
For the evaluation of metalinguistic skills (rhyme and alliteration, equal and different), the PST\textsuperscript{16} in its auditory and visual versions was applied. This is a test divided into four parts that verify the performance in rhyme (equal and
different) and alliteration (equal and different) skills. Each test consists of 15 items; the first three items are used to explain the test and the following 12 items, for application and analysis of responses. In the alliteration parts, the subject is requested to say which of the three words begins equally or differently from the target word; in the rhyme test, the subject is asked to say which of the three words ends equally or differently from the target word. The maximum value for correct answers for each test is 12.
The present study used both the Auditory Version (PST-A), wherein the subject has only auditory support when answering, and the Visual Version (PST-V), in which the subject has auditory support associated to visual support (pictures). The application of the test was divided into four sessions, to prevent interference of the subject’s fatigue with the results, which were analyzed according to the criteria established in the literature.\textsuperscript{16}
The Picture Identification Test with White Noise,\textsuperscript{24} Dichotic Digit Test,\textsuperscript{25} Frequency Pattern Test,\textsuperscript{26} and Duration Pattern Test\textsuperscript{26} were employed for the assessment of CAP. The criterion for identifying CAPD was an altered result in at least two of the four administered tests.\textsuperscript{27}
The evaluation of CAP was carried out in a specialized laboratory at the same university where the study was conducted. A GSI-61 Grason-Stadler audiometer was used, with a frequency range from 125 to 12,000 Hz, a pure-tone intensity range of 10–110 dB HL at 125 Hz and 12,000 Hz, and a range of –10 to 120 dB HL at 500, 750, 1000, 2000, 3000, 4000, 5000, and 6000 Hz. Calibration followed ANSI S3.6-1989, ANSI S3.43, IEC 645-1 (1992), IEC 645-2 (1993), ISO 389, and UL 544, and testing was conducted in a soundproof booth (Siemens) calibrated according to ANSI S3.1-1991.
**Statistical method**
The following statistical tests were used: Fisher’s exact test, Student’s $t$-test, and the Mann–Whitney test. As to hypothesis tests, the significance level was set at 0.05. The analysis was performed using Minitab (version 16) and SPSS (version 18) statistical programs.
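As an illustration of the first of these tests, a self-contained two-sided Fisher's exact test for a 2×2 table can be written from the hypergeometric distribution; the function name is hypothetical. Applied to the gender counts reported in the Results (CG: 7 male, 3 female; SG: 8 male, 3 female), it returns a p-value of essentially 1, consistent with the reported p > 0.999.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):  # P(upper-left cell = x) with margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    total = 0.0
    for x in range(max(0, col1 - row2), min(col1, row1) + 1):
        p = prob(x)
        if p <= p_obs * (1 + 1e-9):  # small tolerance for float ties
            total += p
    return total
```

Dedicated statistical software (as used by the authors) applies the same principle; this sketch only makes the computation explicit.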
**Results**
In the analysis performed by gender in both groups, the results showed that although there was no difference between percentage distribution for this variable ($p > 0.999$, Fisher’s exact test), it was noted that most of the subjects were male, both in the CG (7) and the SG (8).
Regarding the subjects’ age, the values were CG (8.5) and SG (7.10); no significant differences between means of the groups were observed ($p = 0.131$, Student’s $t$-test).
As to the number of different types of phonological processes in the phonological tests, the results showed that SG participants used a mean of four types of phonological processes in each task, whereas CG participants used a mean of three. Although the SG showed a higher mean number of phonological processes regardless of the phonological test, this difference was not significant for imitation of words ($p = 0.458$) or picture naming ($p = 0.538$).
The descriptive values for the percentage of occurrence of each phonological process in both phonological tasks (Table 1) show that the phonological processes with the
**Table 2** $p$-Values obtained in the comparison of distribution of LS, CR, FCD, SD, and FD processes between Control Group and Study Group.

| Phonological process | Imitation of words | Picture naming |
|----------------------|--------------------|----------------|
| LS | 0.88 | 0.555 |
| CR | 0.041\textsuperscript{a} | 0.079 |
| FCD | 0.231 | 0.123 |
| SD | 0.852 | 0.938 |
| FD | 0.879 | 0.754 |

LS, liquid simplification; CR, cluster reduction; FCD, final consonant deletion; SD, stop devoicing; FD, fricative devoicing.
\textsuperscript{a} Significant difference.
**Table 3** Descriptive statistics for Percentage of Consonants Correct (PCC), Percentage of Consonants Correct-Revised (PCC-R), and Process Density Index (PDI) in the control group (CG) and study group (SG).
| Index / group | $n$ | Mean | Standard deviation | Minimum | Median | Maximum | $p$-Value |
|-------|-----|------|--------------------|---------|--------|---------|-----------|
| PCC | | | | | | | |
| CG | 10 | 82.9 | 7.5 | 70.1 | 84.6 | 90.7 | 0.031\textsuperscript{a} |
| SG | 11 | 74.7 | 11.1 | 62.6 | 73.8 | 95.3 | |
| PCC-R | | | | | | | |
| CG | 10 | 88 | 7.5 | 72.9 | 89.3 | 98.1 | 0.014\textsuperscript{a} |
| SG | 11 | 78.6 | 10.3 | 64.5 | 74.8 | 95.3 | |
| PDI | | | | | | | |
| CG | 10 | 0.33 | 0.21 | 0.1 | 0.3 | 0.7 | 0.007\textsuperscript{a} |
| SG | 11 | 0.64 | 0.32 | 0 | 0.65 | 1 | |
$n$, number of subjects.
Statistics: Student’s $t$-test.
\textsuperscript{a} Significant difference.
highest mean percentage of occurrence were LS, CR, FCD, SD, and FD. Depending on these results, the distributions of these processes were compared between the two groups (CG and SG; Table 2), and a difference was found only for CR in the word imitation test, indicating a higher occurrence of this process in SG.
The descriptive values of PCC, PCC-R, and PDI in each group (Table 3) showed that, for all severity indexes, there was a difference in the comparison between groups. It can be observed that the mean values of the PCC and PCC-R were lower in SG, and those of the PDI were higher in this group (Fig. 1).
For the PDI, a receiver operating characteristic (ROC) curve was drawn (Fig. 2); the point nearest the upper left corner corresponded to the highest combined sensitivity (0.73) and specificity (0.90) and was associated with a cutoff value of 0.54. The area under the curve (AUC) of 0.79 confirmed the discriminatory power of the PDI. Thus, subjects with PDI $\geq$ 0.54 are most likely to belong to the SG, i.e., to present CAPD.
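The cutoff-selection step can be sketched as a scan over candidate thresholds that maximizes Youden's J (sensitivity + specificity − 1), treating PDI ≥ threshold as a positive screen for CAPD. The function name and the PDI values in the example below are hypothetical, since the individual values from this study are not published.

```python
def roc_best_cutoff(scores_pos, scores_neg):
    """Scan every observed score as a candidate threshold and return the
    one maximizing Youden's J = sensitivity + specificity - 1, where a
    score >= threshold counts as a positive (disordered) call."""
    best = (None, -1.0, 0.0, 0.0)
    for t in sorted(set(scores_pos) | set(scores_neg)):
        sens = sum(s >= t for s in scores_pos) / len(scores_pos)
        spec = sum(s < t for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1.0
        if j > best[1]:
            best = (t, j, sens, spec)
    return best  # (threshold, J, sensitivity, specificity)
```

For instance, with hypothetical SG scores [0.6, 0.7, 0.65, 0.9, 0.55, 0.3, 0.4, 0.8] and CG scores [0.1, 0.2, 0.3, 0.5, 0.35], the scan selects the threshold 0.55 with sensitivity 0.75 and specificity 1.0.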
The results of SIT classification suggested that, in both groups, the number of consistent children was higher: six in the CG and nine in the SG. However, no difference between the percentages of occurrence of inconsistent subjects in both groups ($p = 0.268$, Fisher’s exact test) was found.
Distributions of frequencies and percentages of PST-A and PST-V in CG and SG (Table 4) showed no difference between percentages of impaired results in the four tests in both versions, both in the CG ($p = 0.504$) and the SG ($p = 0.772$, analysis of variance); however the percentage of impaired results was greater in the SG ($p < 0.001$, Fisher’s exact test) in all PST-A tasks. In PST-V, the percentages of impaired results were higher in the SG ($p < 0.001$, Fisher’s exact test), and this result was independent of the task ($p = 0.196$, analysis of variance).
Comparing PST-A vs. PST-V, a $p$-value of 0.095 was found; the lack of statistical power can be attributed to the small sample size in both groups.
Figure 1 Individual and mean values of Percentage of Consonants Correct (PCC), Percentage of Consonants Correct-Revised (PCC-R), and Process Density Index (PDI) in the control group (CG) and study group (SG).
Figure 2 Receiver operating characteristic (ROC) curve for PDI.
Table 4 Frequency distributions and percentages of Phonological Sensitivity Test-Auditory (PST-A) and Phonological Sensitivity Test-Visual (PST-V) in the control group (CG) and study group (SG).
| | CG impaired, $n$ | % | CG normal, $n$ | % | SG impaired, $n$ | % | SG normal, $n$ | % |
|---|---|---|---|---|---|---|---|---|
| **PST-A** | | | | | | | | |
| *Initial sound* | | | | | | | | |
| Equal | 1 | 10 | 9 | 90 | 7 | 63.6 | 4 | 36.4 |
| Different | 0 | 0 | 10 | 100 | 7 | 63.6 | 4 | 36.4 |
| *Final sound* | | | | | | | | |
| Equal | 1 | 10 | 9 | 90 | 9 | 81.8 | 2 | 18.2 |
| Different | 1 | 10 | 9 | 90 | 8 | 72.7 | 3 | 27.3 |
| **PST-V** | | | | | | | | |
| *Initial sound* | | | | | | | | |
| Equal | 0 | 0 | 10 | 100 | 6 | 54.6 | 5 | 45.4 |
| Different | 0 | 0 | 10 | 100 | 3 | 27.3 | 8 | 72.7 |
| *Final sound* | | | | | | | | |
| Equal | 2 | 20 | 8 | 80 | 7 | 63.6 | 4 | 36.4 |
| Different | 0 | 0 | 10 | 100 | 8 | 72.7 | 3 | 27.3 |
n, number of subjects.
Discussion
The manifestations of SSD are heterogeneous, which makes their classification difficult. Thus, the present study described the performance of children diagnosed with SSD in various phonological (number of different types of phonological processes, SSD severity, and speech inconsistency) and metalinguistic (PST-V and PST-A) skills, depending on the presence/absence of impairment in CAP.
The description of both groups, CG and SG, indicated no difference regarding the subjects’ age; a male gender predominance was noted in both groups.
Among the phonological measures examined, the number of different types of phonological processes showed no evidence of a difference between CG and SG in either phonological task. Nevertheless, the SG presented higher means for this variable in both the imitation of words and picture naming tasks.
As for the phonological process types with higher incidence, in general the same findings already reported in previous studies with Brazilian Portuguese-speaking children were observed: SD, FD, LS, and CR. Considering that the present study compared the performance of children with SSD with and without CAPD, we found that CR was the most frequent of these phonological processes in both the CG and the SG. In addition, CR was the only phonological process that differed between the two groups, and only in the imitation of words task, with greater occurrence in the SG, suggesting that CAPD can hinder the phonological organization of complex structures.
Nonetheless, in the analysis of the three severity indices (PCC, PCC-R, and PDI), SG subjects presented higher SSD severity compared to subjects without CAPD, i.e., SG subjects had fewer correct consonants and higher incidence of phonological processes. In a study conducted among Brazilian Portuguese-speaking children with SSD and without CAPD, higher and more homogeneous values were found in the PCC-R index, suggesting that children with SSD and CAPD show higher SSD severity, i.e., greater phonological difficulty.
Evidence arising from the comparison of severity measures and phonological processes in children with SSD with and without CAPD is critical because, in general, very little is discussed about CAP in these children. Whereas studies using dynamic models related to the development of speech are advancing, the interaction between auditory perception and the production and phonological organization of speech sounds are becoming more appreciated. Accordingly, a verification of the relationship among phonological and CAP measurements in children over 7 years of age with SSD shows that searching for this relationship is not only appropriate but also necessary.
According to the literature, close relationships are observed between speech impairment and CAPD, since CAP hinders the formation of phonemic representation in the brain, thus interfering with the learning of the rules of phonology, syntax, and semantics. The fact that SG subjects with CAPD employed a greater number of different types of phonological processes indicates greater difficulty in phonological representation, perhaps due to the difficulty that these subjects seem to have in retrieving the phonological representations through auditory feedback during speech production.
Among the indexes that characterize SSD severity, PDI was the one that best characterized the occurrence of all phonological processes in speech. Due to this finding, a ROC curve was constructed for PDI, in order to explore in greater detail the occurrence of phonological processes due
to CAPD. The cutoff value identified suggests that children with SSD aged 7 years and presenting PDI ≥ 0.54 may present CAPD. This cutoff value provides evidence that PDI, applied to speech samples at the time of the SSD diagnosis for children over 7 years of age, is effective to identify children in need of referral, on a priority basis, for evaluating CAP. By evaluating CAP, additional information can be obtained, greatly assisting the planning and execution of the treatment of each child.
The last phonological measure evaluated in the study – the speech inconsistency classification – was analyzed in order to determine whether children with inconsistent SSD had CAPD. Our study revealed no difference between groups and that, independently of the group, the number of consistent subjects was greater than the number of inconsistent ones. This result indicates that, for this sample of subjects, phonological programming did not suffer interference of CAPD. Studies\textsuperscript{12,29} that have already applied the speech inconsistency classification indicate that most children with SSD are consistent, i.e., they do not experience difficulty in phonological programming.
Regarding the tests evaluating the metalinguistic skills of rhyme and alliteration, the study showed that the rate of impaired results was higher in the SG in all four subtests and in both versions of the PST. This finding suggests a relationship between the skills involved in CAP and those of the PST, i.e., if the child shows CAPD, difficulties are more likely to occur in the metalinguistic skills of rhyme and alliteration. This can be explained by the greater difficulty in perception and auditory organization presented by SG subjects, which interferes with the ability to answer each item correctly, since in the PST the child needs to retain the stimulus in working memory and recognize whether the initial or final sound is equal or different. This finding suggests an interrelationship of the processings involved in speech.\textsuperscript{1,3} Concerning the PST, the present study compared the two versions of the test (visual and auditory) and found no difference between them in either group. Thus, the visual support provided by the pictures presented in PST-V did not help the performance of children with SSD and CAPD.
This study was designed to ascertain whether the phonological and metalinguistic manifestations of children with SSD differed according to the presence of CAPD. The study of this relationship contributes to a more accurate diagnosis of SSD and to more effective interventions for children with this disorder. The analyses performed for phonological and metalinguistic measures indicated that children with SSD and CAPD have a more severe condition, make greater use of the CR phonological process, have more difficulty with rhyme and alliteration skills, and did not benefit from visual cues for these skills.
An important finding of this study was the cutoff point established for the PDI, which effectively differentiated children with SSD and CAPD from those with SSD without CAPD. Therefore, PDI, which is an index that measures the occurrence of phonological processes in a speech sample, can be applied to the evaluation of SSD diagnosis, suggesting the need to evaluate the CAP in children with SSD aged over 7 years.
The principal limitation of this study was the relatively small number of participants: since the age required for assessing CAP is over 7 years, and the diagnosis of SSD is most often made before that age, the number of children who met the age criterion was limited.
This study is innovative because it demonstrated that children with SSD with a PDI value > 0.54 exhibit a strong tendency to present CAPD.
**Conclusion**
A comparison of the performance of children with SSD with and without CAPD showed evidence of differences in some phonological and metalinguistic skills. Children with SSD and CAPD showed a higher occurrence of the phonological process of CR, greater difficulty in the rhyme and alliteration tests, and did not benefit from the pictures provided in the PST-V. They also had lower PCC-R and higher PDI values. In addition, children with SSD and a PDI value above 0.54 demonstrated a strong tendency to have CAPD, and this measure was effective in identifying children with SSD in need of a CAP evaluation.
**Funding**
This study was funded by CAPES – Institutional Quota (Social Demand) – Universidade de São Paulo.
**Conflicts of interest**
The authors declare no conflicts of interest.
**Acknowledgement**
CAPES scholarship.
**References**
1. Guenther FH. Cortical interactions underlying the production of speech sounds. *J Commun Disord.* 2006;39:350–65.
2. Smith A. Development of neural control of orofacial movements for speech. In: Hardcastle WJ, Laver J, Gibbon FE, editors. *The handbook of phonetic sciences.* 2nd ed. New York: Wiley-Blackwell; 2010.
3. Dodd B, McIntosh B. The input processing, cognitive linguistic and oro-motor skills of children with speech difficulty. *Int J Speech Lang Pathol.* 2008;10:169–78.
4. Shriberg LD, Fourakis M, Hall SD, Karlsson HB, Lohmeier HL, McSweeny JL, et al. Extensions to the speech disorders classification system (SDCS). *Clin Linguist Phon.* 2010;24:795–824.
5. Rvachew S, Brosseau-Lapré F. An input-focused intervention for children with developmental phonological disorders. *Perspect Lang Learn Educ.* 2012;19:31–5.
6. Shriberg LD, Lewis BD, Tomblin JB, McSweeny JL, Karlsson HB, Scheer AR. Toward diagnostic and phenotype markers for genetically transmitted speech delay. *J Speech Lang Hear Res.* 2005;48:834–52.
7. Betz SK, Stoel-Gammon C. Measuring articulatory error consistency in children with developmental apraxia of speech. *Clin Linguist Phon.* 2005;19:53–6.
8. Shriberg LD, Kwiatkowski J. Phonological disorders I: a diagnostic classification system. *J Speech Hear Disord.* 1982;46:197–204.
9. Shriberg LD, Austin D, Lewis BA, McSweeny JL. The percentage of consonants correct metric: extensions and reliability data. J Speech Lang Hear Res. 1997;40:708–22.
10. Edwards ML. Clinical forum: phonological assessment and treatment in support of phonological processes. Lang Speech Hear Serv Sch. 1992;23:233–40.
11. Wertzner HF, Amaro L, Galea DES. Phonological performance measured by Speech Severity Indexes related to correlated factors. São Paulo Med J. 2007;125:309–14.
12. Castro MM, Wertzner HF. Speech inconsistency index in Brazilian Portuguese-speaking children. Folia Phoniatr Logop. 2011;63:237–41.
13. Wertzner HF, Pagan LO, Galea DES, Papp ACCS. Características fonológicas de crianças com transtorno fonológico com e sem histórico de otite média. Rev Soc Bras Fonoaudiol. 2007;12:41–7.
14. Wertzner HF, Claudino GL, Galea DES, Patah LK, Castro MM. Medidas fonológicas em crianças com transtorno fonológico. Rev Soc Bras Fonoaudiol. 2012;17:189–95.
15. Dodd B. Evidence-Based practice and speech-language pathology: strengths, weaknesses, opportunities and threats. Folia Phoniatr Logop. 2007;59:118–29.
16. Herrero SF. Desempenho de crianças com distúrbio fonológico: teste de sensibilidade fonológica e de leitura e escrita [dissertação]. São Paulo: Faculdade de Filosofia, Letras e Ciências Humanas, Universidade de São Paulo; 2007.
17. Jerger J, Musiek FE. Report of consensus conference on the diagnosis of auditory processing disorders in school-aged children. J Am Acad Audiol. 2000;11:467–74.
18. Bellis TJ. Historical foundations and the nature of (central) auditory processing disorder. In: Chermak GD, Musiek FE, editors. Handbook of (central) auditory processing disorder: auditory neuroscience and clinical diagnosis. 1st ed. San Diego: Plural Publishing; 2007. p. 119–36.
19. Fitch RH, Tallal P. Neural mechanisms of language-based learning impairments: insights from human populations and animal models. Behav Cogn Neurosci Rev. 2003;2:155–73.
20. McArthur GM, Bishop DV. Speech and non-speech processing in people with specific language impairment: a behavioral and electrophysiological study. Brain Lang. 2005;94:260–73.
21. Wertzner HF. Fonologia. In: Andrade CRF, Belfi-Lopes DM, Fernandes FDM, Wertzner HF, editors. ABFW: Teste de linguagem infantil nas áreas de Fonologia, Vocabulário Fluência e Pragmática. 2nd ed. Carapicuíba: Pró-Fono; 2004, 98 pp.
22. Andrade CRF, Belfi-Lopes DM, Fernandes FDM, Wertzner HF. ABFW: Teste de linguagem infantil nas áreas de Fonologia, Vocabulário Fluência e Pragmática. 2nd ed. Carapicuíba: Pró-Fono; 2004. p. 5–32.
23. Pereira LD, Schochat E. Processamento auditivo central: manual de avaliação. 1st ed. São Paulo: Lovise; 1997, 231 pp.
24. Almeida CIR, Caetano MHU. Logoaudiometria utilizando sentenças sintéticas. Rev Bras Otorrinolaringol. 1988;54:68–72.
25. Santos MFC, Pereira LD. Escuta com dígitos. In: Pereira LD, Schochat E, editors. Processamento auditivo central: manual de avaliação. 1st ed. São Paulo: Lovise; 1997. p. 147–9.
26. Auditec. Evaluation manual of pitch pattern sequence and duration pattern sequence. Missouri, USA: Auditec; 1997, 26 pp.
27. Sharma M, Purdy SC, Kelly AS. Comorbidity of auditory processing, language and reading disorders. J Speech Lang Hear Res. 2009;52:706–22.
28. Vilela N, Wertzner HF, Sanches GSG, Neves-Lobo IF, Carvalho RMM. Processamento temporal de crianças com transtorno fonológico submetidas ao treino auditivo: estudo piloto. J Soc Bras Fonoaudiol. 2012;24:42–8.
29. Dodd B, Holm A, Crosbie S, McCormack P. Differential diagnosis of phonological disorders. 2nd ed. London: Whurr; 2005, 400 pp.
ARTE E PAROLA
Inverno / Winter 2022 - 23
/ A /
IL FREGIO E LA BEFFA, GIORGIONE E CANOVA (The Frieze and the Jest: Giorgione and Canova)
by Matteo Melchiorre
A very special year is drawing to a close for the Museo Casa Giorgione. On the one hand, the return to Giorgionesque themes and the important new findings concerning the Fregio delle Arti Liberali e Meccaniche (Frieze of the Liberal and Mechanical Arts), which emerged and were presented to the public during the initiative Dar Voce al Fregio. Racconto in tre atti ("Giving Voice to the Frieze: A Tale in Three Acts", June–October 2022). On the other, a heartfelt homage to Canova, in the year marking the two hundredth anniversary of his death, with an exhibition presenting a little-known but fascinating link between Antonio Canova and Giorgione.
Il Fregio delle Arti Liberali e Meccaniche, custodito nel Museo Casa Giorgione di Castelfranco Veneto, è una sfida ancora aperta e un campo in massima parte da esplorare, un'opera dagli stimoli molteplici ma non semplice e non immediatamente intuitiva. Da qui la volontà di "dargli voce" che ha mosso le iniziative di valorizzazione intitolate appunto Dar Voce al Fregio. Racconto in tre atti. Non è stata una mostra tradizionale ma una progressiva opera di sedimentazione di contenuti relativi al Fregio, che sono entrati a far parte stabilmente dell'allestimento del museo e restano pertanto disponibili, ora e in futuro, alla pubblica fruizione.
La prima iniziativa (La Testa mancante) ha portato alla ricostruzione multimediale del profilo di imperatore (o dell'antico musico Orfeo), strappato dal Fregio nel corso del XIX secolo e ora conservato presso collezione privata. È stato prodotto un contenuto multimediale per la ricostruzione virtuale della porzione mancante.
La seconda iniziativa, invece, Giorgione, Il Fregio, è la pubblicazione di un volume fotografico dedicato al Fregio. Si tratta di un prodotto tipograficamente qualitativo, con grandi fotoriproduzioni dell'opera, generali e particolari, accompagnate da una guida alla lettura che unisce i contenuti delle più aggiornate interpretazioni dell'opera medesima e un linguaggio capace di raggiungere un pubblico più largo rispetto a quello dei soli specialisti.
Dar voce al Fregio si è conclusa con l'iniziativa Enigma su pietra, la presentazione di un'inedita testimonianza epigrafica su pietra, in occasione della sua entrata all'interno dell'allestimento permanente del Museo (bene pervenuto al Museo mediante donazione privata). Si tratta di una piccola epigrafe incisa che riporta uno dei motti del Fregio di Giorgione, e che si inserisce a tutti gli effetti nel clima umanistico e nel gusto antiquario che alimentò il progetto artistico del Fregio stesso.
La mostra La Beffa. Canova e Giorgione, storia di un autoritratto (2 dicembre 2022 – 10 aprile 2023) è un punto di incontro tra il percorso di ricerca e studio proprio del Museo Casa Giorgione di Castelfranco Veneto e le celebrazioni che, nel 2022, vedono impegnato il Museo Gypsoteca Antonio Canova di Possagno e altre istituzioni nella ricorrenza dei 200 anni dalla morte dello scultore.
L'iniziativa, ideata dal Museo Casa Giorgione, si inscrive nella convergenza di visioni che animano l'Accordo "Tiziano Canova Giorgione. Terre natìe" sottoscritto tra Regione Veneto, Museo Casa Giorgione, Museo Gypsoteca Antonio Canova e Casa Natale di Tiziano di Pieve di Cadore.
Spunto della mostra è la ricorrenza dei 200 anni dalla morte di Canova, cui il Museo Casa Giorgione intende offrire un omaggio sostanziale e pienamente coerente con la missione del Museo medesimo.
È stata individuata una significativa connessione tra lo scultore di Possagno e il pittore castellano nel dipinto di Antonio Canova noto come "Autoritratto di Giorgione", realizzato nel 1792
/ C /
/ B /
e ora presso collezione privata in Roma. L'incrocio di cui il dipinto in causa è espressione, infatti, è assai rilevante a livello storico-artistico e su di esso la mostra La Beffa sarà focalizzata, graviterà e convergerà: frutto di una beffa vera e propria che Antonio Canova concepì d'intesa con il proprio grande protettore e mecenate Abbondio Rezzonico, senatore romano. L'allestimento espositivo della vicenda che generò l'Autoritratto di Giorgione svelerà il dialogo intrattenuto da Antonio Canova con un artista, Giorgione, del quale anch'egli avvertì l'irresistibile, benché sfuggente, fascinazione.
La mostra è visitabile presso il Museo Casa Giorgione fino al 10 aprile 2023, nei seguenti giorni e orari: martedì e mercoledì dalle 10 alle 13 da giovedì a domenica dalle 10 alle 18 Visite guidate e laboratori didattici su prenotazione.
A / AUTORITRATTO DI GIORGIONE / ANTONIO CANOVA / 1792 olio su tela – oil on canvas – Roma, Antonacci Lapiccirella Fine Art
B / IL FREGIO / GIORGIONE / 2022
volume a stampa – printed volume – Castelfranco Veneto
C / RITRATTO DI ANTONIO CANOVA / ANTONIO D'ESTE / 1795
scultura in marmo – marble sculpture – Tempio Canoviano, Possagno
MUSEO CASA GIORGIONE
PIAZZA SAN LIBERALE, 31033 CASTELFRANCO VENETO TV WWW.MUSEOCASAGIORGIONE.IT
THE FRIEZE AND THE MOCKERY, GIORGIONE AND CANOVA
by Matteo Melchiorre
A very special year is closing for the Museo Casa Giorgione. On the one hand, the revival of Giorgionesque themes and the important new findings concerning the Frieze of the Liberal and Mechanical Arts, which emerged and were presented to the public during the initiative Dar Voce al Fregio. Racconto in tre atti (June–October 2022). On the other, a heartfelt tribute to Canova, in the year of the celebrations for the two hundred years since his death, with an exhibition that presents a little-known but fascinating link between Antonio Canova and Giorgione.
The Frieze of the Liberal and Mechanical Arts, housed in the Museo Casa Giorgione in Castelfranco Veneto, is a still-open challenge and a field largely yet to be explored, a work rich in stimuli but neither simple nor immediately intuitive. Hence the desire to "give it voice" that drove the valorization initiatives entitled, precisely, Dar Voce al Fregio. Racconto in tre atti.
It was not a traditional exhibition but a progressive work of sedimentation of contents related to the Frieze, which have become part of the permanent layout of the museum and therefore remain available, now and in the future, for public use.
The first initiative (The Missing Head) led to the multimedia reconstruction of the profile of an emperor (or of the ancient musician Orpheus), torn from the Frieze during the nineteenth century and now preserved in a private collection. A multimedia piece was produced for the virtual reconstruction of the missing portion.
The second initiative, Giorgione, Il Fregio, is the publication of a photographic volume dedicated to the Frieze. It is a typographically high-quality product, with large photographic reproductions of the work, both overall views and details, accompanied by a reading guide that combines the most up-to-date interpretations of the work with a language capable of reaching an audience wider than that of specialists alone.
Dar Voce al Fregio ended with the initiative Enigma on Stone, the presentation of a previously unknown epigraphic testimony on stone, on the occasion of its entry into the permanent layout of the Museum (a piece that reached the Museum through a private donation). It is a small engraved inscription bearing one of the mottos of Giorgione's Frieze, one that fits fully within the humanistic climate and the antiquarian taste that fueled the artistic project of the Frieze itself.
The exhibition La Beffa. Canova e Giorgione, storia di un autoritratto (2 December 2022 – 10 April 2023) is a meeting point between the path of research and study proper to the Museo Casa Giorgione of Castelfranco Veneto
/ D /
/ E /
and the celebrations that, in 2022, involve the Museo Gypsoteca Antonio Canova of Possagno and other institutions in marking the 200 years since the sculptor's death.
The initiative, conceived by the Museo Casa Giorgione, is part of the convergence of visions animating the "Tiziano Canova Giorgione. Terre natìe" agreement signed by the Regione Veneto, the Museo Casa Giorgione, the Museo Gypsoteca Antonio Canova and the Birthplace of Titian in Pieve di Cadore.
The starting point of the exhibition is the 200th anniversary of Canova's death, to which the Museo Casa Giorgione intends to offer a tribute that is substantial and fully coherent with the Museum's own mission.
A significant connection between the sculptor from Possagno and the painter from Castelfranco has been identified in the painting by Antonio Canova known as the "Self-portrait of Giorgione", made in 1792 and now in a private collection in Rome. The intersection that this painting expresses is highly relevant in art-historical terms, and it is on this that the exhibition The Mockery will focus, gravitate and converge: the painting is the fruit of a genuine prank that Antonio Canova devised together with his great protector and patron Abbondio Rezzonico, a Roman senator. The exhibition's account of the events that generated the Self-Portrait of Giorgione will reveal the dialogue that Antonio Canova entertained with an artist, Giorgione, whose irresistible, though elusive, fascination he too felt.
D / TRE FILOSOFI / JAN VAN TROYEN / 1660 CA.
incisione – engraving – Civiche Collezioni Museali, Castelfranco Veneto
E / LAPIDE CON ISCRIZIONE GIORGIONESCA
pietra incisa, primi decenni del XVI secolo – plaque with Giorgionesca inscription, engraved stone, early 16th century – Museo Casa Giorgione, Castelfranco Veneto
F / MUSEO CASA GIORGIONE
G / RICOSTRUZIONE VIRTUALE DELL'AFFRESCO STRAPPATO DAL FREGIO DELLE ARTI LIBERALI E MECCANICHE / 2022
virtual reconstruction of the fresco torn from the Frieze of Liberal and Mechanical Arts – Museo Casa Giorgione, Castelfranco Veneto
/ F /
/ G /
The Constant Entropy Path for a Chemical Reaction
Jose Iniguez
ABSTRACT
The internal energy minimum as a criterion for equilibrium is here discussed in reference to a chemical reaction. That a non-isolated system at constant entropy and volume heads for a state of minimum internal energy is here shown via a thermodynamic analysis of the para-ortho isomerization reaction of hydrogen. The analysis brings forward the equations for the heat flow rate and temperature decreasing regime the reaction system has to comply with if a constant entropy path is to be accessed.
KEYWORDS: hydrogen, ortho-para isomerization, thermodynamics, constant entropy, minimum internal energy principle
RESUMEN (La trayectoria a entropía constante para una reacción química)
El mínimo de energía interna como criterio de equilibrio es aquí discutido con relación a una reacción química. El que un sistema no aislado, a entropía y volumen constantes, tienda a un estado de energía interna mínima se muestra aquí a través de un análisis termodinámico de la reacción de isomerización orto-para del hidrógeno. El análisis permite identificar las ecuaciones para los regímenes de flujo de calor y descenso de temperatura a los que debe someterse el sistema de reacción a efecto de acceder a una trayectoria evolutiva a entropía constante.
PALABRAS CLAVE: hidrógeno, isomerización orto-para, termodinámica, entropía constante, principio de energía interna mínima
1. Introduction
1.1 Motivation
Classroom discussions on spontaneity and equilibrium commonly revolve around conditions of constant temperature and pressure, and of constant temperature and volume, with little attention, if any, devoted to conditions involving a constant entropy restriction. That our students of thermodynamics deserve as complete an exposure as possible to its fundamental concepts is beyond discussion. The accomplishment of this objective demands, however, an increased availability in the published literature of pertinent discussions that might be used to that end. Although an excellent discussion can be found illustrating the constant entropy path for gas expansions (Velasco and Fernandez, 2002), apparently none is available applying this criterion to a chemical reaction, a most important subject indeed for chemistry students. It is with these considerations as a basis that we offer here a discussion of the thermodynamic conditions to be satisfied for the constant entropy evolution of the chemical reaction shown below. Now, even if, as Emanuel (1987, p. 66) correctly points out, the statement 'constant entropy' can be associated with any process having identical entropy values at its initial and final conditions, here that statement is taken to mean constant entropy all along the path connecting those conditions:
\[ \text{para} - \text{H}_2 = \text{ortho} - \text{H}_2 \]
(1)
1.2 Thermodynamic background
The equilibrium state accessed by reaction (1) at temperature \( T \) is characterized by the thermodynamic equilibrium constant (\( K \)), defined as the ratio of the equilibrium activities (\( a \)) of ortho-hydrogen and para-hydrogen, \( K = a_{\text{ortho}}/a_{\text{para}} \). These activities, in essence non-ideality corrected equilibrium partial pressures or concentrations (Maron and Prutton, 1965, p. 203), make of \( K \) a true equilibrium constant, dependent only on temperature (Maron and Prutton, 1965, p. 231). In the case of gas-phase reactions taking place at moderate pressures and temperatures — where the assumption of ideal gas behavior is reasonable — no significant error is introduced in replacing activities by partial pressures (\( p \)) (Maron and Prutton, 1965, p. 205; Bevan Ott and Boerio-Goates, 2000, p. 438). Since, as will be seen below, this is the case for reaction (1), we can then write \( K \) in the following form: \( K = p_{\text{ortho}}/p_{\text{para}} \). A simple application of the ideal gas law can be used to re-express this last equation as the ratio of equilibrium weight
percentages \((w)\) of the indicated species, i.e. \(K = w_{ortho}/w_{para}\). Under the assumption of ideal gas behavior for the reacting species, the three different expressions for \(K\), previously written, are equivalent.
\(K\) can also be defined in terms of reaction’s (1) associated standard Gibbs free energy change \((\Delta G^0)\) at the same temperature, \(\Delta G^0 = -RT \ln K\) (Bevan Ott and Boerio-Goates, 2000, p. 437). This equation can be rewritten in terms of the standard enthalpy and entropy changes via the defining equation for the Gibbs free energy, \(G = H - TS\), as follows: \(\Delta G^0 = \Delta H^0 - T \Delta S^0 = -RT \ln K\).
Under the assumption of ideality, allowing the identification of \(K\) with the equilibrium weight percentage ratio, the following is true: \(\Delta H^0 - T \Delta S^0 = -RT \ln \left[ \frac{w_{ortho}}{w_{para}} \right]\).
The effect of temperature on \(K\) can be obtained by differentiating equation \(\Delta G^0 = -RT \ln K\) with respect to temperature. Performance of this operation, combined with the fact that \(d (\Delta G^0 / T) / dT = -\Delta H^0 / T^2\), leads to the following expression for the temperature variation of the equilibrium constant: \(d \ln K / dT = \Delta H^0 / RT^2\) (Bevan Ott and Boerio-Goates, 2000, p. 446).
Reaction (1) was chosen for this study because its equilibrium constant has practically the same value over the temperature interval 300 K–600 K. This temperature independence of the equilibrium constant allows us to conclude, in light of the above equation for the temperature variation of \(K\), that in this temperature interval \(\Delta H^0 = 0\). Introduction of this fact into the previously written equation relating \(\Delta H^0\), \(\Delta S^0\), and \(K\) leads, in turn, to the following equation: \(T \Delta S^0 = RT \ln \left[ w_{ortho}/w_{para} \right]\), valid for reaction (1) under the assumptions of ideality and temperature independence of \(K\) previously discussed. That this is indeed the case is shown in section 2.3.
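The two results above — the Gibbs–Helmholtz identity and the temperature independence of \(K\) when \(\Delta H^0 = 0\) — can be verified numerically. The sketch below is illustrative only: the \(\Delta H^0\) value is hypothetical, while \(\Delta S^0 = 2.2\) cal/degree-mole anticipates the value quoted in section 2.3.

```python
# Numerical check of two relations used above (illustrative values only):
# (i)  d(dG0/T)/dT = -dH0/T**2  when dH0 and dS0 are temperature-independent;
# (ii) with dH0 = 0, ln K = dS0/R is constant, so K does not depend on T.
import math

H0 = -1000.0   # cal/mol, a hypothetical nonzero dH0 for check (i)
S0 = 2.2       # cal/(mol K), the dS0 value quoted later for reaction (1)
R  = 1.99      # cal/(mol K)

def G0_over_T(T, H0=H0):
    """dG0/T with dG0 = dH0 - T*dS0 (both taken constant)."""
    return (H0 - T * S0) / T

# (i) central finite difference of d(dG0/T)/dT at T = 450 K
T, h = 450.0, 1e-3
lhs = (G0_over_T(T + h) - G0_over_T(T - h)) / (2 * h)
assert abs(lhs - (-H0 / T**2)) < 1e-8

# (ii) with dH0 = 0, K = exp(-dG0/(R*T)) = exp(dS0/R) at any temperature
K_300 = math.exp(-G0_over_T(300.0, H0=0.0) / R)
K_600 = math.exp(-G0_over_T(600.0, H0=0.0) / R)
assert abs(K_300 - K_600) < 1e-9
assert abs(K_300 - math.exp(S0 / R)) < 1e-9
```

The second assertion group shows that, once \(\Delta H^0 = 0\), the equilibrium constant collapses to \(\exp(\Delta S^0/R)\), a fact exploited in section 2.3.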
### 1.3 Internal energy minimum as equilibrium criterion
The entropy maximum principle as a criterion of equilibrium has been re-expressed through a number of thermodynamic functions which even if less fundamental and less general than the entropy law, are of more practical convenience in the study of some concrete problems (Lewis and Randall, 1961, p. 138). The Gibbs \((G)\) and Helmholtz \((A)\) free energy functions are among these alternative functions. The criterion of equilibrium for systems evolving at constant temperature and pressure is that \(G\) has reached its minimum possible value (Denbigh, 1968, p. 83). For those evolving at constant temperature and volume the criterion of equilibrium is that \(A\) has reached its minimum possible value (Denbigh, 1968, p. 82). Next to these, we find the minimum possible value of the internal energy \((E)\) acting as criterion for the equilibrium state of systems evolving at constant entropy and volume. This lesser known criterion (Denbigh, 1968, p. 83; Richet, 2001, p. 35) is to be here discussed in reference to reaction (1). The essence of this discussion centers on the fact that if the reaction system is to be able to evolve at constant entropy, a way has to be found to transfer outside the reaction system the entropy increase associated with the reaction process. As will be seen below, this is here accomplished by coupling the reaction system with a cold reservoir whose function is to extract heat from the reaction system — and thus diminish its entropy — at precisely the same rate that entropy is created by the reaction. It is through the coupling of these two processes that the constant entropy path for reaction (1) will be accessed, a path leading the system to a state of equilibrium characterized by a minimum of its internal energy.
Still another equilibrium criterion can be found in the literature next to those already mentioned. This one associates a minimum value of the enthalpy function \((H)\) with the state of equilibrium of systems evolving at constant entropy and pressure. Chemistry oriented exemplifications of this principle suitable for classroom presentations are also scarce or non-existent.
### 2. The isomerization of hydrogen
#### 2.1 The key assumption
The work of Woolley et al. (1948, pp. 379–475) shows that in the interval 300 K–600 K, the equilibrium constant \((K)\) for the para-ortho isomerization of hydrogen is, for all practical purposes, independent of temperature (the equilibrium composition up to 500 K can be read directly from Table 12 on p. 395. The equilibrium constant at 600 K can be calculated from the data in Table 4, p. 387). When expressed as the ratio of the percentages of ortho-hydrogen to para-hydrogen, the equilibrium constant changes from a value of 2.988 at 298.16 K to a value of 3.000 at 600 K. The percentage variation referred to the high temperature value amounts to 0.4 %. In what follows, a number of considerations will be derived from what will be assumed to be a perfect temperature independence of \(K\) in the indicated temperature interval.
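As a quick arithmetic check of the 0.4 % figure quoted above:

```python
# K for para -> ortho at 298.16 K and 600 K, as quoted from Woolley et al.
K_low, K_high = 2.988, 3.000

# percentage variation referred to the high-temperature value
variation_pct = (K_high - K_low) / K_high * 100
assert abs(variation_pct - 0.4) < 1e-6   # matches the 0.4 % quoted in the text
```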
2.1(a). The fact that \(K \neq f(T)\) leads to \(d \ln K / dT = 0\), and this, in turn, through equation \(d \ln K / dT = \Delta H^0 / RT^2\), to the realization that \(\Delta H^0 = 0\) in this temperature interval. This fact, combined with the assumption of ideal gas behavior for ortho-hydrogen and para-hydrogen — a reasonable assumption at the specified temperature interval and pressures near atmospheric — as well as with the fact that this reaction takes place with no change in total number of moles \((\Delta n = 0)\), leads, through equation \(\Delta H = \Delta E + \Delta(PV)\), properly modified for the case at hand as \(\Delta H^0 = \Delta E^0 + RT \Delta n\), to the result \(\Delta H^0 = \Delta E^0\). The already proven fact that in the case at hand we have that \(\Delta H^0 = 0\) allows us to conclude that the following also holds: \(\Delta E^0 = 0\).
2.1(b). From the data of Woolley et al. (1948, p. 387) we also learn that in the indicated temperature interval the constant pressure heat capacities of ortho-hydrogen and para-hydrogen are not only practically identical but also constant. From the ideal gas assumption introduced above it follows that their heat capacities at constant volume, related to the former through \(C_v = C_p - R\), are likewise practically identical and constant.

The reason behind the choice of 600 K as the initial temperature for reaction (1) will be explained below.
It has to be understood that the concatenation of processes shown in Figure 1 has been constructed to serve as an analytical tool in the development of some of the thermodynamic arguments to be presented, and in no way implies that the reaction and cooling processes — to be described below — take place sequentially as shown. Quite the contrary: the chemical reaction, described by the combination of processes I and II, and the cooling process, shown there as process III, must take place simultaneously if the constant entropy evolution of the reaction system is to be possible. Steps I and II of Figure 1 are described in what follows.
**Process I.** This process corresponds to the reaction taking the system from pure para-H$_2$ at 1 atmosphere and 600 K, to the indicated amounts of pure ortho-H$_2$ and para-H$_2$, each at 1 atmosphere and 600 K. This reaction is actually the $\xi$ fraction of the standard reaction converting 1 mole of pure para-H$_2$ into 1 mole of pure ortho-H$_2$, both at 1 atmosphere and 600 K, and as such conveys an entropy change of:
$$\Delta S_{\text{process I}} = \xi \Delta S^0$$
(2)
According to that discussed in section 2.1(a), the enthalpy and internal energy changes associated to this process will be written as follows
$$\Delta H_{\text{process I}} = \xi \Delta H^0 = 0$$
(3)
$$\Delta E_{\text{process I}} = \xi \Delta E^0 = 0$$
(4)
**Process II.** Here the pure isomers in the amount and conditions shown as the final state of process I are brought together to produce what would be the actual reaction mixture at the given degree of advancement. The entropy change associated to this process corresponds to the entropy of mixing of the indicated amounts of para-H$_2$ and ortho-H$_2$, and as such, given by the following expression (Castellan, 1974, p. 231)
$$\Delta S_{\text{process II}} = -R \left[ \xi \ln \xi + (1 - \xi) \ln (1 - \xi) \right]$$
(5)
Due to the fact that the mixture here being formed is an ideal mixture, the enthalpy change for this step amounts to zero (Castellan, 1974, p. 231). A parallel argument to that advanced in section 2.1(a) allows us to conclude that here the internal energy of mixing is also zero. Therefore
$$\Delta H_{\text{process II}} = \Delta E_{\text{process II}} = 0$$
(6)
By virtue of equations (3), (4), and (6) it can be concluded that the reaction under consideration takes place without any thermal interaction with its surroundings, i.e. that no heat at all is exchanged between them as a consequence of the occurrence of the chemical reaction, and consequently — as already mentioned — that this reaction is driven solely by entropic effects intrinsic to it. But if this is so, then the universe of this reaction — the universe of steps I and II in Figure 1 — is the reaction system itself. That this is so is the matter of the following argument, through which the assumption introduced in section 2.1(a) is to be tested.
2.3 Feasibility of the key assumption
From the data of Woolley et al. (1948, p. 387) for the para-H$_2$ = ortho-H$_2$ conversion, the value $\Delta S^0 = 2.2$ cal/degree-mole, corresponding to 600 K, can be taken as representative for the previously indicated temperature interval. This being so, the following expressions — produced via combination of equations (2) and (5) — can be written for the entropy change ($\Delta S_{I+II}$) of the indicated universe. The value of the ideal gas constant $R$ has been taken as 1.99 cal/degree-mole, with $\xi$, as previously stated, representing the number of moles of ortho-H$_2$.
$$\Delta S_{I+II} = \Delta S_{\text{process I}} + \Delta S_{\text{process II}} =$$
$$\xi \Delta S^0 - R \left[ \xi \ln \xi + (1 - \xi) \ln (1 - \xi) \right]$$
(7)
$$\Delta S_{I+II} = 2.2 \xi - 1.99 \left[ \xi \ln \xi + (1 - \xi) \ln (1 - \xi) \right]$$
(8)
A simple application of L’Hopital’s rule to equation (8) produces, as should be expected, a value of $\Delta S_{I+II} = 0$ for the limit $\xi \to 0$. The behavior of $\Delta S_{I+II}$ for larger values of $\xi$ can be ascertained by taking the first derivative of equation (8) as follows
$$\partial \Delta S_{I+II} / \partial \xi = 2.2 - 1.99 \ln \left[ \xi / (1 - \xi) \right]$$
(9)
When this equation is set equal to zero and the resulting expression solved for $\xi$, we identify an extremum for the entropy change of reaction (1) at $\xi = 0.75$. A simple application of the second derivative test confirms that this is a maximum — an entropy maximum — and as such it identifies the equilibrium condition for the reaction being considered. The equilibrium constant associated with the just determined equilibrium conversion to ortho-H$_2$ can be obtained by dividing it by the corresponding equilibrium conversion for para-H$_2$, as follows: $K = 0.75 / (1 - 0.75) = 3.0$. The value obtained agrees with that already quoted from Woolley et al.
Further confirmation of the feasibility of our key assumption can be obtained by calculating $K$ via the equation introduced in section 2(c) and the previously quoted values of $R$ and $\Delta S^0$, as follows: $K = \exp (\Delta S^0 / R) = \exp (2.2 / 1.99) = 3.0$. Again, the value produced this way agrees with that quoted from Woolley et al. A similar analysis, leading to similar results, can be performed by the interested reader for the ortho-H$_2$ → para-H$_2$ reaction.
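Both confirmations can be reproduced numerically. The following minimal sketch uses only the $R$ and $\Delta S^0$ values quoted above; everything else follows from equations (8) and (9).

```python
import math

R, dS0 = 1.99, 2.2   # cal/(mol K), values quoted in the text

def dS_I_II(x):
    """Equation (8): entropy change of the reaction universe at advancement x."""
    return dS0 * x - R * (x * math.log(x) + (1 - x) * math.log(1 - x))

# Setting equation (9) to zero gives x/(1-x) = exp(dS0/R), hence the extremum at
x_eq = math.exp(dS0 / R) / (1 + math.exp(dS0 / R))
assert abs(x_eq - 0.75) < 0.005          # the xi = 0.75 quoted in the text

# The extremum is a maximum: dS is larger at x_eq than slightly to either side
eps = 1e-3
assert dS_I_II(x_eq) > dS_I_II(x_eq - eps)
assert dS_I_II(x_eq) > dS_I_II(x_eq + eps)

# Both routes to K agree with the Woolley et al. value of about 3.0
K_from_xi = x_eq / (1 - x_eq)
K_from_S  = math.exp(dS0 / R)
assert abs(K_from_xi - K_from_S) < 1e-9
assert abs(K_from_xi - 3.0) < 0.05
```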
The previous analysis allows us to realize that the entropy change sustained by the universe of reaction (1), as given by equation (8) and represented by the concatenation of processes I and II of Figure 1, starts with a value of zero at $\xi = 0$ and increases monotonically until it reaches a maximum at an ortho-H$_2$ conversion of $\xi = 0.75$. This being so, it can only follow that any attempt to conduct this reaction along a constant entropy path will require a way to produce in the reaction system an entropy change of the same magnitude, but opposite sign, to the one quantified by equation (8). This effect will be brought about by coupling the unfolding of the chemical reaction — steps I and II — with the cooling of the reaction mixture, represented as process III in Figure 1. This cooling process will require the reaction mixture to be put in contact with a heat bath of low enough temperature to produce the desired effect. The particulars of this coupling are the matter of the following discussion.
3. The constant entropy path
In what follows, the concatenation of processes I, II and III, as graphically represented in Figure 1, will be designated as ‘the coupling’. Let us then start by agreeing that the indicated isomerization reaction is to take place at constant volume, and let us further define the reaction mixture as the system of interest ($\alpha$). In thermal contact with the system we find a constant temperature bath ($\beta$). System and bath, combined, define the universe of the coupling. Let us further assume that the temperature of the bath is lower than that of the system. If this is so, a cooling process will take place alongside the chemical reaction. From the perspective of the system, this cooling process is an entropy reducing process. Thus, while the unfolding of the reaction increases the entropy of the system, the unfolding of the cooling process decreases it. These considerations allow us to realize that if we could couple the reaction and cooling processes in a way such that at every moment along their evolution every entropy increase produced by the reaction is met with an entropy decrease of the same magnitude produced by the cooling process, then our system of interest will be evolving along a constant entropy path. It has to be here recognized that even if a decreased rate of reaction is expected as a consequence of the cooling process, given the thermodynamic characteristics of the reaction system — discussed in sections 2(a) through 2(c) — as long as the cooling process is restricted to the temperature interval 300K–600K, no effect whatsoever will be produced in its thermodynamics, as measured by its equilibrium conversion. An evolution under the considerations just advanced can be represented as follows:
$$dS_\alpha = dS_{I+II} + dS_{III} = 0$$
(10)
In the previous equation, $dS_\alpha$, $dS_{I+II}$ and $dS_{III}$, respectively represent the net entropy change of the system, the entropy change associated to the unfolding of the chemical reaction, and that experienced by the reaction system due to the cooling process. These entropy changes, along that of the heat bath ($dS_\beta$) allows us to write the following expression for the entropy change of the universe of the coupling ($dS_u$), i.e. the universe of processes I, II, and III, as follows
$$dS_u = dS_\alpha + dS_\beta$$
(11)
Upon substitution of equation (10) in (11) we learn that in the situation being considered, the entropy of the heat bath assumes the role of the entropy of the universe
\[ dS_u = dS_\beta \]
(12)
It was stated above that from the perspective of the system the cooling process was an entropy reducing process. From the perspective of the heat bath, however, this is an entropy increasing process. The reason is simple. The bath is the one receiving the heat lost by the system. It is precisely upon the absorption of this heat by the heat bath that compliance with the dictate of the second law is produced, as in this situation equation (12) becomes:
\[ dS_u = dS_\beta > 0 \]
(13)
Let us point out here that both the system and the bath are constant volume bodies incapable of any energy exchange in the form of work (we are assuming here that the only work interaction originally possible was of the \( PV \) kind). This consideration allows us, in turn, to write the following first law based expressions for the system and the heat bath:
\[ dE_\alpha = dQ_\alpha \]
(14)
\[ dE_\beta = dQ_\beta \]
(15)
The fact that any heat lost by the system is necessarily heat gained by the bath can be represented as follows:
\[ -dQ_\alpha = dQ_\beta \]
(16)
Combination of equations (14), (15), and (16) leads us to:
\[ -dE_\alpha = dQ_\beta = dE_\beta \]
(17)
The fact, shown in equation (15), that any heat exchanged by the heat bath is equal to the change of a function of state allows us to write the following expression for the entropy change of the heat bath in terms of the internal energy decrease of the system, with \( k \equiv 1/T_\beta \) a constant, the bath temperature being fixed:
\[ dS_\beta = \frac{-dE_\alpha}{T_\beta} = k\left(-dE_\alpha\right) > 0 \]
(18)
A comparison between equations (13) and (18) leads to the realization that under the conditions at hand the statement 'an increase in the entropy of the universe' becomes synonymous with the statement 'a decrease in the internal energy of the system'. Actually, the former is proportional to the latter, the proportionality constant being the inverse of the temperature of the bath. It is by reason of this proportionality that the spontaneity condition for a system evolving at constant entropy and volume can, alongside that expressed by equation (13), also be expressed as:
\[ \left(dE_\alpha\right)_{S,V} < 0 \]
(19)
The message conveyed by equation (19) can be extended by saying that the equilibrium condition will correspond with a minimum in the internal energy of the system.
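The chain from equation (13) to (19) can be illustrated numerically. The sketch below is illustrative only: it assumes \(C_v = (5/2)R\), the ideal-diatomic value for hydrogen (the text requires only that \(C_v\) be constant), and uses the constant entropy temperature regime \(T^*(\xi)\) derived in section 4.

```python
import math

R, dS0 = 1.99, 2.2          # cal/(mol K), values used in the text
Cv     = 2.5 * R            # cal/(mol K); ideal-diatomic value, an assumption here
T0, T_bath = 600.0, 300.0   # initial system temperature and bath temperature, K

def dS_rxn(x):              # equation (8): entropy created by the reaction
    return dS0 * x - R * (x * math.log(x) + (1 - x) * math.log(1 - x))

def T_star(x):              # temperature enforcing constant system entropy
    return T0 * math.exp(-dS_rxn(x) / Cv)

for x in (0.1, 0.4, 0.75):
    dE_system = Cv * (T_star(x) - T0)   # sensible heat only (eqs. (3), (4), (6))
    dS_bath   = -dE_system / T_bath     # equation (18)
    assert dE_system < 0                # equation (19): E decreases along the path
    assert dS_bath > 0                  # equation (13): dS_u = dS_beta > 0
```

At every degree of advancement, the internal energy lost by the system appears as a positive entropy gain of the bath, in the proportion \(1/T_\beta\) discussed above.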
4. **The thermodynamic analysis**
As previously stated, access to a constant entropy path for reaction (1) will be attempted by coupling processes I and II previously discussed, with a cooling of the reaction mixture, represented as process III in Figure 1. The role to be played by this process is described in what follows.
**Process III.** At any \( \xi \) in its advancement, the chemical reaction has an associated entropy increase quantified by equation (7). If a constant entropy path is to be accessed by this reaction, then at \( \xi \) the temperature of the reaction mixture has to have fallen to a value \( T^* \) such that the entropy reduction produced by this cooling, precisely cancels the entropy increase associated to the reaction itself. The fact that no thermal interaction between the system and the bath is associated to processes I and II allows us to realize that the only heat to be removed from the system in process III is sensible heat, and due to this the entropy change sustained by the reaction mixture upon its cooling can be written as follows
\[ \Delta S_{III} = C_v \ln \left[ T^*(\xi) / T \right] \]
(20)
Here \( C_v \) stands for the constant volume heat capacity of the reaction mixture, \( T \) for the initial reaction temperature, and \( T^*(\xi) \) (written like this to make explicit its dependence on the degree of advancement) for the temperature the reaction mixture has to attain at every \( \xi \) in order to assure that \( \Delta S_{III} \) will be equal in magnitude, but opposite in sign, to that associated with processes I and II, as given by equation (7), i.e.
\[ \Delta S_{III} = -\Delta S_{I+II} \]
(21)
Fulfillment of this condition will produce — as shown in equation (10) — a combined value of zero for these two entropy changes, i.e.
\[ \Delta S_c = \Delta S_{I+II} + \Delta S_{III} = \]
\[ \xi \Delta S^0 - R \left[ \xi \ln \xi + (1 - \xi) \ln (1 - \xi) \right] + C_v \ln \left[ T^*(\xi) / T \right] = 0 \]
(22)
It was through equation (22) that, via a trial and error procedure, a temperature of 600 K was selected as the initial temperature for reaction (1). The selection criterion used was that of assuring that the chemical reaction takes place within the 600 K–300 K temperature interval chosen for this study. The equations previously developed will be used in what follows to unveil the reaction mixture temperature and heat flow rate regimes required in order for equation (21) to be satisfied.
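The trial and error selection just described can be sketched numerically. The following sketch (an illustration only; the constants are those quoted in Section 5, and the function names are ours) exploits the fact that the ratio \( T^*(\xi)/T \) given by equation (24) is independent of the initial temperature, so a candidate \( T \) is acceptable whenever the lowest point of the cooling path stays above 300 K:

```python
import math

R, CV, DS0 = 1.99, 5.0, 2.2   # cal/(degree mole), values from Section 5
XI_EQ = 0.75                  # equilibrium advancement for K = 3

def ratio(xi):
    # T*(xi)/T from equation (24); note it is independent of T itself
    ds_12 = xi * DS0 - R * (xi * math.log(xi) + (1 - xi) * math.log(1 - xi))
    return math.exp(-ds_12 / CV)

# Lowest point of the cooling path, as a fraction of the initial temperature
min_ratio = min(ratio(i / 1000) for i in range(1, 751))

# Trial and error over candidate initial temperatures: accept T when the
# whole path from T down to T * min_ratio stays inside 600 K - 300 K
for t0 in (400.0, 500.0, 600.0):
    ok = t0 <= 600.0 and t0 * min_ratio >= 300.0
    print(f"T = {t0:5.1f} K -> lowest T* = {t0 * min_ratio:5.1f} K  "
          f"{'acceptable' if ok else 'outside interval'}")
```

With the Section 5 values the lowest admissible initial temperature turns out to be near 522 K, so 600 K comfortably keeps the whole path inside the chosen interval.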
### 4.1 The temperature decreasing regime
The substitution of equations (7) and (20) in (21) leads to
\[ C_v \ln \left[ T * (\xi) / T \right] = R \left[ \xi \ln \xi + (1 - \xi) \ln (1 - \xi) \right] - \xi \Delta S^0 \]
(23)
Solving equation (23) for \( T^*(\xi) \) produces the temperature decreasing regime the reaction mixture has to comply with in order for equation (21) to be satisfied and, consequently, in order to access a constant entropy path.
\[
T^*(\xi) = T \exp \left( -\Delta S_{I+II} / C_v \right)
= T \exp \left\{ \left( R \left[ \xi \ln \xi + (1-\xi) \ln (1-\xi) \right] - \xi \Delta S^0 \right) / C_v \right\}
\]
(24)
The expression contained in the first equality of equation (24) comes from a combination of equations (20) and (21).
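Equation (24) lends itself to direct numerical evaluation. The sketch below (constants taken from the numerical example of Section 5; the function names are illustrative) tabulates \( T^*(\xi) \) at a few degrees of advancement:

```python
import math

R, CV, DS0, T0 = 1.99, 5.0, 2.2, 600.0  # Section 5 values: cal/(degree mole), K

def ds_12(xi):
    # Entropy change of processes I+II at advancement xi (equation 7)
    return xi * DS0 - R * (xi * math.log(xi) + (1 - xi) * math.log(1 - xi))

def t_star(xi):
    # Equation (24): T* = T exp(-dS_{I+II}/Cv)
    return T0 * math.exp(-ds_12(xi) / CV)

# The mixture cools from 600 K toward ~345 K as the reaction advances
for xi in (0.1, 0.25, 0.5, 0.75):
    print(f"xi = {xi:4.2f}   T* = {t_star(xi):6.1f} K")
```

Because \( T^*(\xi)/T \) depends only on \( \Delta S_{I+II}/C_v \), the computed profile scales linearly with whatever initial temperature is chosen.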
### 4.2 The heat transfer regime
With \( T \) the initial reaction temperature and \( T^*(\xi) \) the temperature of the reaction mixture at \( \xi \), the amount of heat lost by the system in its transit from \( \xi = 0 \) to \( \xi \) can be quantified as follows
\[
Q_\alpha = \Delta E_\alpha = (1) C_v \left[ T^*(\xi) - T \right]
\]
(25)
In the previous equation the factor (1) has been introduced for unit consistency. It represents the constant total number of moles of reaction mixture. Related expressions will be subsequently written without this factor.
Substitution of equation (24) in (25) produces the expression quantifying the amount of heat to be removed from the reaction system as a function of \( \xi \)
\[
Q_\alpha = C_v T \left[ \exp \left\{ \left( R \left[ \xi \ln \xi + (1-\xi) \ln (1-\xi) \right] - \xi \Delta S^0 \right) / C_v \right\} - 1 \right]
\]
(26)
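As a consistency check, equations (25) and (26) can be evaluated side by side; they must agree, since (26) is obtained by substituting (24) into (25). A minimal sketch, with the Section 5 constants and illustrative function names:

```python
import math

R, CV, DS0, T0 = 1.99, 5.0, 2.2, 600.0  # Section 5 values: cal/(degree mole), K

def ds_12(xi):
    # Entropy change of processes I+II (equation 7)
    return xi * DS0 - R * (xi * math.log(xi) + (1 - xi) * math.log(1 - xi))

def q_from_26(xi):
    # Equation (26): heat lost between xi = 0 and xi, per mole of mixture
    return CV * T0 * (math.exp(-ds_12(xi) / CV) - 1.0)

def q_from_25(xi):
    # Equation (25): Q = Cv [T*(xi) - T], with T* from equation (24)
    t_star = T0 * math.exp(-ds_12(xi) / CV)
    return CV * (t_star - T0)

for xi in (0.25, 0.5, 0.75):
    assert abs(q_from_26(xi) - q_from_25(xi)) < 1e-9
    print(f"xi={xi:4.2f}  Q = {q_from_26(xi):8.1f} cal/mol")
```

Both routes give the same negative \( Q_\alpha \), confirming that heat leaves the system along the whole path.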
### 4.3 The heat flow rate
The heat flow rate that the reaction mixture has to experience in order to follow a constant entropy path comes via the first derivative of equation (26) with respect to \( \xi \)
\[
dQ_\alpha / d\xi = T \exp \left( -\Delta S_{I+II} / C_v \right) \left\{ R \left[ \ln \xi - \ln (1-\xi) \right] - \Delta S^0 \right\}
\]
(27)
If heat is removed from the reaction system at the rate mandated by equation (27), then the temperature decreasing regime embodied by equation (24) will follow, and the reaction system will be proceeding along a constant entropy path. In an experimental situation, equation (24) would provide a baseline against which the actual evolution of the system can be compared.
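The analytic rate of equation (27) can be verified against a numerical derivative of equation (26). The sketch below (Section 5 constants; all names are ours) performs a central-difference check:

```python
import math

R, CV, DS0, T0 = 1.99, 5.0, 2.2, 600.0  # values from Section 5

def ds_12(xi):
    # Entropy change of processes I+II (equation 7)
    return xi * DS0 - R * (xi * math.log(xi) + (1 - xi) * math.log(1 - xi))

def q(xi):
    # Heat removed up to advancement xi (equation 26)
    return CV * T0 * (math.exp(-ds_12(xi) / CV) - 1.0)

def dq_dxi(xi):
    # Analytic heat flow rate along the isentropic path (equation 27)
    return T0 * math.exp(-ds_12(xi) / CV) * (
        R * (math.log(xi) - math.log(1 - xi)) - DS0)

# Central-difference check: the analytic rate matches d(26)/d(xi)
xi, h = 0.4, 1e-6
numeric = (q(xi + h) - q(xi - h)) / (2 * h)
print(f"analytic {dq_dxi(xi):.3f}   numeric {numeric:.3f}   cal/mol")
```

Near \( \xi_{eq} = 0.75 \) the rate is nearly zero, since there \( R\left[\ln \xi - \ln(1-\xi)\right] \approx \Delta S^0 \).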
In what follows we will graphically display in Figure 2 the results of calculations carried out with some of the equations previously developed. In it \( \Delta S_\beta \) was calculated as follows:
\[
\Delta S_\beta = -\Delta E_\alpha / T_\beta
\]
(28)
### 5. A numerical example
The source of all the numerical data was the paper of Woolley et al. (1948, pp. 379-475). The system of units used in the said paper has been followed throughout. For the purpose of these calculations the magnitude of the equilibrium constant \([K = \xi_{eq} / (1 - \xi_{eq})]\) was taken to be 3.0 in the 300 K–600 K temperature interval. Accordingly, the equilibrium conversion of ortho-H\(_2\) amounts to \( \xi_{eq} = 0.75 \). The value of \( \Delta S^0 = 2.2 \) cal/degree-mole, corresponding to 600 K, was taken as representative for the indicated temperature interval. Likewise, an average value of \( C_v = 5.0 \) cal/degree-mole was selected. As previously stated, the ideal gas constant was taken as \( R = 1.99 \) cal/degree-mole.
Substitution of the appropriate values in equation (24) allowed us to calculate the final, and lowest, temperature of the reaction system along the cooling process. The value obtained was \( T^*(\xi_{eq}) = 345 \) K. This temperature, it should be realized, corresponds to the highest possible temperature for the heat bath, as then the system and heat bath would reach thermal equilibrium the moment the reaction reaches the condition of chemical equilibrium. For purposes of this illustration, this temperature was taken to be the constant temperature of the heat bath. As noted previously in the text, the initial temperature selected was \( T = 600 \) K.
For a given value of \( \xi \), \( \Delta S_{I+II} \) is calculated through equation (8). The substitution of this value, alongside those of \( T \) and \( C_v \), in the expression given by the first equality of equation (24) produces the \( T^* \) corresponding to the given \( \xi \). Substitution of this \( T^*(\xi) \) in equation (20) produces \( \Delta S_{III} \). The addition of \( \Delta S_{I+II} \) and \( \Delta S_{III} \) leads to \( \Delta S_c \) which, within the calculations' uncertainty, should be equal to zero for all \( \xi \). Finally, the values corresponding to \( \Delta E_\alpha \) and \( \Delta S_\beta \) at the given degree of advancement are calculated through equations (25) and (28).
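The step-by-step procedure just described can be scripted in a few lines. The sketch below (Section 5 constants; a bath temperature of 345 K as selected in the text; function names are ours) confirms that \( \Delta S_c \) vanishes at every \( \xi \) while the bath entropy grows:

```python
import math

R, CV, DS0 = 1.99, 5.0, 2.2   # cal/(degree mole), Section 5 values
T0, T_BATH = 600.0, 345.0     # K: initial temperature and bath temperature

def ds_12(xi):
    # Entropy change of processes I+II at advancement xi (equation 8)
    return xi * DS0 - R * (xi * math.log(xi) + (1 - xi) * math.log(1 - xi))

for xi in (0.25, 0.5, 0.75):
    t_star = T0 * math.exp(-ds_12(xi) / CV)   # equation (24)
    ds_3 = CV * math.log(t_star / T0)         # equation (20)
    ds_total = ds_12(xi) + ds_3               # equation (22): ~0 by design
    de = CV * (t_star - T0)                   # equation (25), per mole
    ds_bath = -de / T_BATH                    # equation (28)
    print(f"xi={xi:4.2f}  T*={t_star:6.1f} K  dS_c={ds_total:+.1e}  "
          f"dS_bath={ds_bath:5.2f} cal/K")
```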
It is evident from the figure that, under the restrictions of constant entropy and volume imposed on the evolution of the reaction system, the equilibrium condition corresponds with a maximum in the entropy of the universe (here under the guise of the entropy of the heat bath), as well as with a minimum in the internal energy of the system.

**Figure 2.** Graphical representation of equations (8), (20), (22), (24), (25), and (28) depicting the thermodynamic evolution of the isomerization reaction between para-H\(_2\) and ortho-H\(_2\), according to the data given in the text.
### 6. Final comment
The stated goal of this exercise was that of bringing to light the thermodynamic requirements associated with the constant entropy evolution of reaction (1). No attempt has been made to delve into the practical aspects of actually carrying out an experiment in this direction, nor into the thermodynamic conditions associated with the constant entropy evolution of the typical chemical reaction for which $K$ is temperature dependent. It is true that in this regard reaction (1) is exceptional. However, in concurrence with C. E. Hecht (1967), we will have to recognize that more often than not, unusual, special, and even paradoxical situations serve well in illustrating the general concept.
Hopefully, this paper will prompt future discussions in these areas.
Bibliography
Bevan Ott, J. and Boerio-Goates, J., *Chemical Thermodynamics: Principles and Applications*, London, UK: Academic Press, 2000.
Castellan, G. W., *Fisicoquímica*, México: Fondo Educativo Interamericano, SA, 1974.
Denbigh, K., *The Principles of Chemical Equilibrium*, London, UK: Cambridge University Press, 1968.
Emanuel, G., *Advanced Chemical Thermodynamics*, Washington, D.C., USA: AIAA Education Series, J. S. Przemieniecki, Series Editor, 1987.
Hecht, C. E., Negative Absolute Temperatures, *J. Chem. Ed.*, **44**, 124, 1967.
Lewis, G. N. and Randall, M., (Revision of Pitzer and Brewer), *Thermodynamics*, 2\textsuperscript{nd} edition, McGraw-Hill (International Student Edition), 1961.
Maron, S. H. and Prutton, C. F., *Principles of Physical Chemistry*, 4\textsuperscript{th} edition, New York, USA: The Macmillan Company (International Student Edition), 1965.
Richet, P., *The Physical Basis of Thermodynamics: with applications to chemistry*, New York, USA: Springer-Verlag, 2001.
Velasco, S. and Fernández Pineda, C., A simple example illustrating the application of thermodynamic extremum principles, *European Journal of Physics*, **23**, 501-511, 2002.
Woolley, H. W., Scott, R. B., and Brickwedde, F. G., Compilation of Thermal Properties of Hydrogen in its Various Isotopic and Ortho-Para Modifications, *Journal of Research of the National Bureau of Standards*, Research Paper RP1932, **41**, 379-475, 1948.
BEFORE THE HOUSE SCIENCE COMMITTEE
SUBCOMMITTEE ON OVERSIGHT AND SUBCOMMITTEE ON ENVIRONMENT
HEARING ON:
RENEWABLE FUEL STANDARD
A TEN YEAR REVIEW OF COSTS AND BENEFITS
NOVEMBER 3, 2015 TESTIMONY OF CHARLES DREVNA
INTRODUCTION
The Renewable Fuel Standard was based on incorrect assumptions about oil production and consumption, as well as the ability of Congress and the administration to mandate and create incentives for innovation and vast technological and economic leaps in biofuel production. The RFS was intended to create greater energy and economic security, but as Milton Friedman famously posited, "One of the great mistakes is to judge policies and programs by their intentions rather than their results."
The results of the RFS are a failure for America. We have greater energy security today—not because of vast improvements in cellulosic biofuels as envisioned in 2007—but because of much greater domestic oil production coupled with a leveling off of demand. It is time we look at the actual results of the RFS and act accordingly. As a result, it is time to end the RFS and let American fuel producers focus on delivering the best products to American motorists.
HOW WE GOT HERE
In the mid 2000's, U.S. oil consumption was increasing but U.S. oil production was decreasing. It seemed to many people that these trends would continue. The Renewable Fuel Standard in the Energy Independence and Security Act of 2007 was passed to reduce our dependence on foreign oil while providing development opportunities to rural America. To achieve this, the law mandated the use of billions of gallons of cellulosic ethanol under the assumption that the technology would soon be cost competitive, that Congress and the administration could correctly predict the future, and that Congress could mandate innovation. These assumptions were very, very wrong.
These two charts are from the Energy Information Administration's Monthly Energy Review for December 2007. 1 The point is clear—U.S. oil production was falling while consumption (essentially "Products Supplied") was increasing. There was no end in sight for these trends.
The RFS was seen as a way to increase domestic fuel production, and people thought that cellulosic and other exotic biofuels could be cost effective. Unfortunately, Congress and the administration believed the hype that cost-effective cellulosic ethanol was "just around the corner."
1 Energy Information Administration, Monthly Energy Review, December 2007, p. 42, http://www.eia.gov/totalenergy/data/monthly/archive/00350712.pdf
CELLULOSIC HYPE
In 2006, the Worldwatch Institute opined that cellulosic and other biofuels would compete in the "medium term" with oil: 2
The long-term potential of biofuels is in the use of non-food feedstock that include agricultural, municipal, and forestry wastes as well as fast-growing, cellulose-rich energy crops such as switchgrass. It is expected that the combination of cellulosic biomass resources and "next-generation" biofuel conversion technologies—including ethanol production using enzymes and synthetic diesel production via gasification/Fischer-Tropsch synthesis—will compete with conventional gasoline and diesel fuel without subsidies in the medium term.
In 2007, Bob Dinneen, the head of the Renewable Fuels Association, suggested that commercially viable cellulosic ethanol was only a few years away: "I don't think anybody knows if it's going to be 18 months, or two years or three years before you see the first commercially viable plant." 3
Also in 2007, the Department of Energy announced $385 million in federal funding for six cellulosic plants. The DOE stated, "When fully operational, the biorefineries are expected to produce more than 130 million gallons of cellulosic ethanol per year. This production will help further President Bush's goal of making cellulosic ethanol cost-competitive with gasoline by 2012." 4 (The reality is that in 2012, cellulosic producers only produced 20,069 gallons of cellulosic biofuel.)
After EISA passed in 2007, investor Vinod Khosla said that the goals were not ambitious enough. He stated, "We can do substantially better than what's in the energy bill." 5
Within a couple years of the passage of the amendments to the RFS, the renewable fuels industry was claiming that cellulosic had arrived. Bob Dinneen testified before Congress in May 2009, "It is important to understand that cellulosic ethanol and other advanced biofuels are no longer 'just around the corner' or 'just over the horizon' — they are here today." 6
Dinneen was not alone in the ethanol industry. An Issue Brief from Ethanol Across America in fall 2009 claimed, "we are fast approaching warp speed and meeting the cellulosic ethanol targets in the nation's Renewable Fuel Standard (RFS) appears to be reachable." 7
2 Report: Biofuels Poised to Displace Oil, Worldwatch Institute, June 2006, http://www.worldwatch.org/report-biofuels-poised-displace-oil
3 Bush's ambitious biofuels goals hinge on cellulosic ethanol, E85 - E&E News, January 24, 2007, http://www.eenews.net/greenwire/stories/50974/
4 https://web.archive.org/web/20070304091902/http://www.doe.gov/news/4827.htm
5 Congress places a big bet on cellulosic ethanol - E&E News, December 14, 2007, http://www.eenews.net/greenwire/stories/59894/
6 Bob Dinneen, Testimony before the House Agriculture Committee - May 21, 2009, http://ethanolrfa.3cdn.net/5dc24f732b2e86d45d_lym6bqjcx.pdf
7 Ethanol Across America Issue Brief, Fall 2009, http://www.cleanfuelsdc.org/pubs/documents/CellulosicEthanolIssueBrief11109.pdf
CELLULOSIC REALITY
The fuel future that Congress and President Bush envisioned in 2007 has not come to pass. U.S. oil production has dramatically increased and oil consumption has leveled off. Furthermore, the cellulosic ethanol revolution has not happened. The RFS requires the production of 3 billion gallons of cellulosic ethanol in 2015. So far this year, only 1.6 million gallons of cellulosic ethanol have been produced. 8 That is a mere 0.05 percent of the mandated volume.
The predictions made in the mid-2000s about oil production continuing to decline and oil consumption continuing to increase have also proven incorrect. The following chart from EIA shows what has happened with petroleum use (i.e., products supplied), domestic production, and imports. 9 None of these changes were foreseen by the architects of the RFS.
THE RFS WAS SUPPOSED TO BE ABOUT ENERGY SECURITY, SO WHY ARE WE IMPORTING ETHANOL FROM BRAZIL?
In 2007, Congress defined "advanced biofuel" in the RFS as biofuel other than ethanol derived from corn starch (i.e., corn kernels) which EPA deems to have 50 percent lower lifecycle greenhouse gas emissions relative to gasoline. Currently, sugarcane ethanol is the only mass-produced product which EPA has certified as meeting the definition of "advanced" biofuel. Sugarcane ethanol is also disproportionately used in the state of California for purposes of compliance with California's Low Carbon Fuel Standard. As a result, we have an absurd situation where the U.S. imports sugarcane ethanol from Brazil and exports corn ethanol or gasoline to Brazil, as these charts from the EIA show:
8 EPA, 2015 Renewable Fuel Standard data, Oct. 15, 2015, http://www2.epa.gov/fuels-registration-reporting-and-compliance-help/2015-renewable-fuel-standard-data. EPA also deems an additional 87 million gallons of fuel as cellulosic ethanol by counting some renewable compressed natural gas and renewable liquefied natural gas as cellulosic ethanol for purposes of the RFS.
9 EIA, Monthly Energy Review, http://www.eia.gov/totalenergy/data/monthly/pdf/sec3.pdf
As EIA explains, "U.S. obligated parties [i.e., U.S. refiners] prefer sugarcane ethanol over corn ethanol" because "sugarcane ethanol counts toward the RFS advanced requirement." 10 Brazilian ethanol users do not have a preference between corn ethanol and sugarcane ethanol.
10 http://www.eia.gov/biofuels/workshop/presentations/2013/pdf/presentation-06-032013.pdf
This situation is completely absurd. First, sugarcane ethanol is not technologically "advanced." Sugarcane has been used to make ethanol in Brazil since the late 1920s. 11 The only reason sugarcane is deemed to be "advanced" is because EPA believes it has 50 percent lower lifecycle greenhouse gas emissions than gasoline. The Renewable Fuels Association, however, does not agree with EPA's assessment of 50 percent lower lifecycle greenhouse gas emissions from sugarcane ethanol. 12
Second, while sugarcane ethanol may have lower lifecycle greenhouse gas emissions, any reductions are wiped out by what happens with sugarcane ethanol in the real world. The preference that EISA sets up for sugarcane ethanol means that not only is sugarcane ethanol imported to the U.S., increasing its lifecycle greenhouse gas emissions, but corn ethanol or gasoline is then exported from the United States to Brazil to replace the fuel that was sent to the United States, further increasing the true lifecycle greenhouse gas emissions of sugarcane ethanol. When EPA deems sugarcane ethanol an advanced biofuel, it should consider the fuel's true lifecycle greenhouse gas emissions: not only the emissions required to get the sugarcane ethanol to the U.S., but also those of whatever replaces that ethanol in Brazil. Swapping Brazilian sugarcane ethanol with U.S. corn ethanol or gasoline simply wastes the energy used in transportation that would not occur in the absence of a mandate.
EIA believes that this absurd trade in ethanol will continue for the next 30 years, with imported ethanol expected to play a much more important role than cellulosic ethanol.
11 http://web.archive.org/web/20080319112800/http://www.aondevamos.eng.br/boletins/edicao07.htm
MORE REALITIES OF CELLULOSIC BIOFUEL PRODUCTION
The following charts compare cellulosic ethanol production with the amounts mandated by the RFS. The first chart shows actual cellulosic ethanol production. 13 We were told in 2006 and 2007 that cellulosic just needed a little help and that it would be cost effective. But by 2015, we only had 1.6 million gallons of cellulosic ethanol produced.
Why has production of cellulosic and advanced ethanol lagged? The answer is cost. The closing price on October 29 for November gasoline on the New York Mercantile Exchange was $1.35 per gallon. The closing price for ethanol on that same date was $1.59 per gallon. The cost of producing cellulosic ethanol is estimated to be in the range of $6.50 per gallon. Since sugarcane ethanol from Brazil is the only mass-produced advanced ethanol available today, and Brazil is net short of energy, imports from Brazil have the added cost of not only transporting the sugarcane ethanol from Brazil, but of transporting corn ethanol or gasoline back to Brazil from the United States. The mandate is being filled by and large with the most economical alternative, and that is neither cellulosic nor advanced ethanol.
Because actual cellulosic production has lagged, EPA changed the definition of what constitutes cellulosic ethanol to include some renewable compressed natural gas and renewable liquefied natural gas. As a result, there are now millions of gallons of "cellulosic" biofuel being produced, even though this isn't what the drafters of the RFS had in mind.
13 http://www2.epa.gov/fuels-registration-reporting-and-compliance-help/2015-renewable-fuel-standard-data
Even with EPA's redefinition of "cellulosic" biofuel, production is far from the volumes mandated by the RFS. As of October, EPA reports that 88 million gallons of "cellulosic" biofuel have been produced so far this year. The RFS, however, calls for 3 billion gallons to be produced this year.
CONCLUSION
The RFS has not worked as planned for a number of reasons. First, the assumptions made about U.S. oil production and consumption were wrong. Many in Congress and the Bush administration did not consider that the U.S. could and would increase oil production. Second, oil consumption has leveled off as the economy has cooled since the mid-2000s and people are driving more fuel efficient cars. Third, Congress cannot mandate innovation. Too many in Congress and the Bush administration listened to trumped-up claims from the ethanol industry and people like Vinod Khosla who wanted public money to finance their products.
The RFS is based on incorrect assumptions. It is time we repeal it and let fuel producers concentrate on fulfilling the needs of American motorists instead of bureaucrats administering a fatally flawed program.
Biogeography of Lepidoptera on the California Channel Islands
Jerry A. Powell
Essig Museum of Entomology, University of California, Berkeley, CA 94720
Tel. (510) 642-3207; Fax (510) 642-7428
Abstract. The biota of the California Channel Islands was altered appreciably by feral ruminants and nonnative, weedy plants prior to any Lepidoptera collections. There are only a few records before 1900, and nearly all the inventory has taken place since 1927, a century after the introduction of goats and sheep. Most of the data originate from 1966–1991. About 750 species of Lepidoptera are recorded on the islands, about 550 of them on Santa Cruz Island and 370 on Santa Catalina. It is likely that the faunas of even the best surveyed islands (Santa Cruz, Santa Catalina) are no more than 70–75% known, those of San Miguel and Santa Rosa less than 50% known. Species numbers per island of butterflies and of Lepidoptera as a whole show correlations to island area but are more strongly predicted by numbers of vascular plants. The area/species relationships reflect habitat diversity rather than an equilibrium, with Lepidoptera likely underrepresented relative to the diversity that would be expected had there been no perturbation of the flora. Species or subspecies representing 26 species are recognized as endemic to the islands, about 3.3% of the fauna. These are primarily vicariant derivatives of mainland species; a few are relicts associated with endemic plants that had mainland distributions in the past. There are no examples recognized as vicariance speciation among islands. In addition to endemics, there are 5 general components that contribute to the Lepidoptera fauna: (1) widespread mainland species; (2) California Province endemics; (3) coastal strand elements; (4) desert affinities, species occurring mainly in interior southern California and Baja California that are represented primarily on the southern islands; and (5) northern species, relicts of past pluvial times that range on the mainland from San Luis Obispo to Marin County northward, represented mainly on the northern islands.
Keywords: moths; butterflies; species/area and insect/plant numbers relationships; endemism; relicts; vicariance.
Introduction
The 8 continental shelf islands situated in the southern California Bight (Fig. 1) support a diverse fauna of phytophagous insects, despite a history of severe impact by domesticated and feral mammals. There were only a few records of Lepidoptera (moths and butterflies) prior to the late 1920s, about a century after the introduction of goats and sheep onto the islands (Coblentz 1980), and the vast majority of collections have originated since 1966. I presented an analysis of the Lepidoptera based on collections through mid-1981 (Powell 1985), which was obviously preliminary because there had been inadequate sampling of the island moths, because collections were not thoroughly studied, and because the lepidopterous fauna of the mainland was not well documented in most families. While all of these disclaimers remain true, in the subsequent decade there have been additional collections from nearly all of the islands, and further taxonomic study of existing specimens, increasing our knowledge of the island faunas appreciably. Moreover, I have coordinated a comprehensive inventory at a coastal locality in Monterey County that provides a much better understanding of relationships of coastal mainland and island species, and a documented idea of the effort required to census a lepidopterous fauna in this region.
Geological history and origins of the fauna
The California Channel Islands are continental in origin, but their geologic history is complex and not completely documented. The Neogene history of the California borderland provinces has been reviewed by Crouch (1979), Vedder and Howell (1980), Luyendyk et al. (1980), Horvathis et al. (1986), and Luyendyk (1991); these reviews provide a scenario of continental origin and argue against a late Pleistocene land bridge for the northern islands. Instead, any mainland connections for lands now represented by the larger islands were likely independent for the northern group, Santa Catalina, and San Clemente, and not later than early Miocene. By mid Miocene (ca. 12–16 MyrBP), peak volcanic activity and the formation of distinct islands had occurred. During periods of sea-lowering, 17,000–18,000 yr ago, at maximum Wisconsin glaciation, the northern islands were joined into a single large island that extended eastward, possibly to within 6–10 km of the expanded mainland coast (Fig. 1) at its maximum extent (Vedder and Howell 1980). This event would have opened a wide gate for immigrating Lepidoptera.
Thus the phytophagous insect fauna probably is of various origins:
1. species inhabiting the lands when they were connected to the mainland, 20–30 MyrBP. Examples include the leaf miner fauna of trees and shrubs, at least on the northern islands. Oversea immigration presumably would have resulted in a sporadic representation of the mainland fauna, rather than its nearly intact membership (Powell and Wagner 1993). Fossil mines recognizable as modern genera of Lepidoptera are known in Miocene equivalents of modern oaks (Opler 1973).
2. species that immigrated via oversea flight or flotsam at any time during the late Miocene to recent, but especially during times of Pleistocene sea lowering when the islands were nearer the mainland; this component presumably accounts for most of the fauna of Santa Barbara and San Nicolas islands.
3. species introduced or encouraged by human activities, particularly those that feed on introduced plants (Powell 1981a, 1985), beginning several centuries ago with prehispanic natives.
**Vegetation changes and the impact of feral animals**
Most plant communities were altered appreciably by introduced animals and plants prior to records of Lepidoptera. Goats probably were released by traders in the early 1800s as a source of milk and food and as a means of avoiding duty payments (Coblentz 1980). There was extensive grazing of domesticated sheep by the 1850s (Curtis 1864 cited by Johnson 1980; Minnich 1980). Vegetation on Santa Cruz was said to be changed by 1875 (Mothner and Wheeler 1876, cited by Minnich 1980), and photographs show other evidence of vegetation stripping on Santa Catalina in the 1880s (Minnich 1980). On smaller San Miguel and San Nicolas islands, sheep were maintained in excess of capacity during periods of drought in the 1860s. The animals were forced to strip the foliage and bark and dig for roots, and all the trees and shrubs of those islands were said to be killed (Johnson 1980). All the islands have suffered from domestic and feral mammals, including goats, pigs, even buffalo and other game animals on Santa Catalina, and rabbits on Anacapa and Santa Barbara islands.
The impact of introduced weedy plants in such perturbed habitats is far greater than numbers of species indicate; the vegetation of the islands is dominated by nonnative species (Halvorson 1992). In the seasonal drought climate of coastal California, native plants cannot compete with introduced annual grasses, anise, and other weeds, and land may remain weedy for decades after release from grazing. The flora had been profoundly altered by the time of the botanical explorations conducted in the mid-1800s. For example, Raven (1963) described the historical changes on San Clemente: by 1840, this island was densely populated with goats (Farnham 1947). It has been owned continuously by the U.S. Government, but it was leased to a sheep company from 1877 to 1934. Botanists did not visit until 1885, and it was 1903 before the whole island had been explored botanically. By that time, many species, endemics among them, survived only on steep canyon walls inaccessible to grazing animals. These conditions continued for decades owing to feral goats; by 1972, when I first visited the island, the terrain resembled a moonscape. Mature native trees survived in steep canyons, but without understory or leaf litter. Except in the sand dunes, native herbs were reduced to isolated specimens on vertical canyon walls or in large cactus patches.
Ruminants were excluded from the south part of Santa Catalina Island in the early 1900s and the area gradually has returned to a chaparral community. Removal efforts have greatly reduced goats from the western part of the island in recent years (Laughrin et al., this volume). During the 1940s and 1950s, grazing animals were removed from San Miguel and San Nicolas islands by the U.S. Navy, which began eradication of the goats on San Clemente in the late 1970s. Protests by animal rights activists, who evidently felt the goat to be more important than native, island endemic plants, delayed the process, but removal of the goats and feral pigs was completed by 1991 (Keegan et al., this volume). Feral sheep were excluded by fencing from some parts of Santa Cruz Island, beginning in 1950, and removal from all but the east end was achieved by The Nature Conservancy by 1989 (Schuyler 1993). Cattle also have been removed except for a few head kept at the ranch headquarters. Rabbits have been exterminated from Anacapa and Santa Barbara islands in recent years.
**Inventory approaches**
Three main approaches were taken: (1) daytime searches for butterflies and diurnal moths, (2) nocturnal sampling by ultraviolet or mercury vapor lights, and (3) rearing collections of larvae. Most Lepidoptera larvae cannot be determined to species without associated adults, although larval mines and galls often are identifiable and the most easily discerned stage. Ideally, census would involve Lepidoptera specialists living at the locality for several years. This would enable sampling comprehensively through all seasons and through fluctuations in year-to-year abundance, and it would identify vagrant and migrant species that are not resident. Except for Santa Catalina,
there have been no resident lepidopterists on the islands, and visits have been sporadic in seasonal timing and in habitat coverage and inventory approaches (Powell 1985). Sampling at lights was limited to 1 or a few sites on each island, and larval collections generally have been neglected.
**Taxonomy**
We have attempted to confirm identifications of all older literature records by reexamination of specimens. This study would not have been possible without the assistance of numerous taxonomists (see acknowledgments). Nomenclature has been updated to conform with Hodges et al. (1983) or more recent publications. Meadows (1939) listed the Sphingidae and Arctiidae of Santa Catalina and prepared an unpublished list of some other macro moths, but there has been no comprehensive moth list published for the Channel Islands. Scattered descriptions in taxonomic works and other records in the literature have been reviewed by Miller and Menke (1981), Miller (1985b), and Powell (1985). Scott Miller and I developed a database of all species and their individual island occurrences. We expect to publish this data corpus in 1994.
There are, however, many species for which identifications are provisional or unknown. Species of several genera of Tineidae and Gracillariidae, all the Blastobasidae and most Coleophoridae, and several genera of Gelechiidae are insufficiently studied on the mainland to know if the island species are described and/or in some examples whether those of 2 islands are the same. They are included in counts of individual island species but are omitted from inter-island relationship calculations. As a result, the total number of species on the islands is unresolved.
**Data retrieval**
The butterfly records have been summarized (Miller 1985a). Collection records for all species are accumulated in 2 sources: (1) island-based files (partly in databases of the Essig Museum of Entomology, University of California, Berkeley); and (2) databases at the Bishop Museum, Honolulu. The latter incorporate all literature records we have seen (Miller and Menke 1981; Miller 1985b; and subsequent publications through 1993) and specimen collection records from LACM, SBMNH, and USNM, but lack CAS, EME, PMY and some USNM data and contain less than half of the species-island records. The files at Berkeley are based entirely on specimens examined, primarily from CAS, EME, LACM, SBMNH, SDNHM, USNM, and microlepidoptera at PMY (abbreviations in acknowledgments).
The largest numbers of specimens are deposited in EME, LACM, and PMY; less extensive collections are in CAS, SBMNH, SDNHM, and USNM. Data from the macro moths at Yale and from any Lepidoptera in CDFA, UCD, and UCR, which have had entomological survey trips to the islands that did not include lepidopterists, have not been captured.
**Composition of the Lepidoptera Fauna**
**Faunistic numbers**
There are 726–760 species of Lepidoptera recorded on the Channel Islands, ranging from 50 known on San Miguel to 543+ on Santa Cruz (Table 1). Of the total, 605+ have been recorded on the northern islands (ca 86%); about 44% are widespread, occurring on at least 1 island in each of the northern and southern groups, while 38% are restricted to the northern group, and 18% to the southern islands.
Resident species judged to be introduced by human activities and/or dependent upon introduced plants, together with suspected vagrants, make up about 10% of the fauna but vary from 9–31% on individual islands (Table 2). The proportion is highest on Santa Barbara and San Nicolas (29–31%) because they are small, and their fauna consists entirely of overwater colonists, richer in polyphagous, homodynamic, strong-flying dispersers, analogous to the Lepidoptera of Bermuda (Ferguson 1991). Of the 2 islands with the richest communities of native plants, Santa Cruz and Santa Catalina, the percent is higher on Catalina because it has a large urban settlement, more ornamental plants, and more traffic from the mainland. The pattern corresponds to the proportions of introduced plants: 37% on San Nicolas, 29% on Santa Barbara, 30% on Santa Catalina, and 18–24% on the other islands (Table 2) (Wallace 1985; Halverson 1992). The introduction of weeds has paved the way for establishment and has favored the abundance of vagile, “weedy” insects on all the islands.
**How well have the Lepidoptera been censused?**
We have summarized the history of entomological investigations on the Channel Islands (Miller and Menke 1981) and of Lepidoptera collections through 1981 (Powell 1985). During the past 12 yr there have been productive field surveys on all islands except San Nicolas (acknowledgments). These efforts increased the numbers of known species by 3–28% on the larger islands and by 33 to 100+% on the smaller islands.
During 1982–1993, we carried out an extensive inventory of Lepidoptera on the coastal mainland at the University of California Big Creek Reserve, Monterey County, approximately 270 km northwest of the northern island group. The reserve encompasses 16 km², extending from 0 to 900 m elevation, and includes coastal sage scrub, riparian, redwoods, mixed hardwoods, chamise chaparral, and live oak-madrone-ponderosa pine woodlands. Thus it is less than half the area of San Miguel but much larger than Anacapa and Santa Barbara islands, and it includes a greater elevational range than the largest islands. The floral richness is comparable to San Clemente and Santa Rosa. We sampled on 180 dates in all months, made more than 260 ultraviolet light samples, and processed 1,350 larval collections and their rearing (>90% complete), with only a few new records per sample during the past year (Fig. 2). This is a much more comprehensive sampling effort than has been made at any one of the islands. For example, Santa Cruz, which is 15 times the size of Big Creek Reserve, has been censused for Lepidoptera on about 120–140 dates, with about 100 UV-light samples and fewer than 200 larval collections. The results at Big Creek provide us with a realistic idea of the amount of effort required to achieve a species accumulation curve that approaches an asymptote (Fig. 2).
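The bookkeeping behind a species accumulation curve such as Fig. 2 is straightforward: each sampling date contributes a set of species, and the cumulative count of distinct species is tallied after each date. The following sketch is purely illustrative; the function name and sample data are invented, not drawn from the study.

```python
def accumulation_curve(samples):
    """Return the cumulative count of distinct species after each sample.

    Each sample is a set of species recorded on one collecting date.
    """
    seen = set()
    curve = []
    for sample in samples:
        seen.update(sample)       # add any species not yet recorded
        curve.append(len(seen))   # running total of distinct species
    return curve

# Invented example: four collecting dates with overlapping species lists
dates = [{"sp1", "sp2"}, {"sp2", "sp3"}, {"sp1"}, {"sp4", "sp5"}]
print(accumulation_curve(dates))
```

A census approaches completeness when this curve flattens, i.e., when successive samples add few or no new species, as happened at Big Creek in the final year.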
Three comparisons of the inventories of the islands and Big Creek can be made: (1) sampling effort (number of dates); (2) taxonomic proportions of the recorded fauna; and (3) numbers of Lepidoptera species related to floral richness.
**Sampling effort**
In collection dates, Santa Catalina has been the most often sampled, with resident collectors, D. Meadows in 1927–1934 and S. Bennett in 1980–1982. However, nearly all nocturnal collections, including those of visitors, have been at Avalon, Middle Ranch, or Toyon Bay.
---
**Table 1. Numbers of species recorded for each superfamily of Lepidoptera on the eight California Channel Islands. See Fig. 1 for island abbreviations.**
| Superfamily | Mig | Ros | Cru | Ana | Bar | Cat | Nic | Cle | Total |
|-------------------|-----|-----|-----|-----|-----|-----|-----|-----|-------|
| Eriocranioidea | | | | | | | | | 3 |
| Nepticuloidea | | 21 | | | | | | | 21 |
| Incurvarioidea    |     | 1   | 11  |     |     |     |     |     | 11    |
| Tineoidea | 1 | 5 | 54 | 5 | 4 | 23 | 7 | 10+ | 65–72 |
| Gelechioidea | 5 | 7 | 85+ | 20 | 13 | 47+ | 9 | 30 | 127–142 |
| Copromorphoidea | | | | | | | | | 2 |
| Yponomeutoidea | 1 | 1 | 9 | 1 | 1 | 4 | 1 | 4 | 11 |
| Sesioidea         | 1   | 3   | 1   | 1   | 1   | 4   | 4   | 2   | 5     |
| Cossoidea | | | | | | | | | 2 |
| Tortricoidea | 4 | 8 | 46 | 14 | 5 | 37 | 6 | 13 | 63–65 |
| Pyraloidea | 5 | 13 | 72 | 19 | 13 | 50 | 12 | 21 | 90–92 |
| Pterophoroidea | 4 | 6 | 12 | 7 | 3 | 13+ | 4 | 9 | 20–22 |
| Geometroidea | 5 | 20 | 67 | 17 | 6 | 50 | 6 | 16 | 86–88 |
| Bombycoidea       |     |     |     |     |     |     |     |     | 2     |
| Sphingoidea | 1 | 2 | 5 | 2 | 1 | 5 | 1 | 1 | 6 |
| Noctuoidea | 16 | 44 | 114 | 35 | 27 | 97 | 35 | 43 | 168–172 |
| Hesperioidea | 2 | 3 | 5 | 2 | 1 | 4 | 1 | 1 | 7 |
| Papilionoidea | 6 | 17 | 30 | 13 | 7 | 25 | 9 | 12 | 37 |
| **Totals** | 50 | 125 | 543+| 135+| 82 | 372+| 91 | 166+ | 726–760 |
---
Table 2. Geographic features, numbers, and proportions of nonnative and native species of vascular plants and Lepidoptera of the California Channel Islands. See Fig. 1 for island abbreviations.
| | Mig | Ros | Cru | Ana | Bar | Cat | Nic | Cle |
|------------------|-----|-----|-----|-----|-----|-----|-----|-----|
| Area (km²) | 37 | 217 | 249 | 2.9 | 2.6 | 194 | 58 | 145 |
| Max elev (m) | 254 | 485 | 744 | 284 | 194 | 644 | 277 | 601 |
| n plants | 221 | 450 | 604 | 204 | 101 | 592 | 180 | 342 |
| % nonnative | 23 | 18 | 23 | 20 | 29 | 30 | 37 | 24 |
| n butterfly | 8 | 20 | 35 | 15 | 8 | 29 | 10 | 13 |
| n Lepidoptera | 50 | 125 | 543 | 135 | 82 | 372 | 91 | 166 |
| % vagrant and nonnative | 20 | 15 | 9 | 19 | 29 | 16 | 31 | 19 |
| Lepidoptera spp/native plant sp | 0.29| 0.38| 1.14| 0.81| 1.39| 0.89| 0.80| 0.64|
Figure 2. Species accumulation curve compiled during inventory of Lepidoptera at Big Creek Reserve, Monterey Co., CA, with percent accumulation to 900 species (believed to be > 90% of the total fauna). Horizontal bars depict estimated numbers of sampling dates and their respective levels of recorded species for selected Channel Islands (abbreviations, see Fig. 1): minimum dates (larger collections) to maximum dates (including incidental collections of 5 or fewer species).
Larval collections have not been emphasized. Santa Cruz has received the most comprehensive sampling emphasis, but nocturnal collections are primarily from the U.C. Biological Station, with a few at 4 other sites, a sketchy census considering the size and diversity of habitats of Santa Cruz. Because they are so small (2.6 and 2.9 km²), Santa Barbara and Anacapa have had the most sampling per unit area, and their Lepidoptera may be the most completely known despite lack of emphasis on larvae.
Projections based on the rate of species accumulation at Big Creek and the numbers of sampling dates (Fig. 2) suggest that San Miguel and Santa Rosa are less than 50% sampled, San Nicolas 67%, San Clemente 75%, and Santa Cruz about 85% recorded. This may be a realistic estimate for the smaller islands, but the diversity of habitats on the larger islands dictates the need for greater sampling effort than at Big Creek.
Taxonomic composition
To compare the composition of the fauna with that of the mainland and to predict expected total species numbers, we can use 4 taxonomic "guilds." While none comprises a monophyletic taxon, each forms a distinguishable biological group:
1. Microlepidoptera: a paraphyletic assemblage of primitive moths and the more ancestral Ditrysians (upper 10 superfamilies in Table 1). Larvae of Microlepidoptera are nearly all endophagous (leaf miners, stem and root borers, gall inducers) or concealed feeders that make shelters of silk, such as leaf rolls; the vast majority are host plant specialists; about 10% are detritivores. They are tiny to small (FW length mostly 2–12 mm) and often maintain populations in small host plant patches; about 16% are diurnal moths.
2. Pyraloidea + Pterophoroidea: small to moderate-sized moths (FW mostly 8–18 mm); mostly endophagous or concealed feeders, including both specialists and generalists, some of which feed in fungi, dry flowers or seeds, cacti, succulents, or are scavengers in organic refuse. Almost all are nocturnal.
3. Macro moths (Geometroidea + Bombycoidea + Sphingoidea + Noctuoidea): moderate sized to large (FW mostly 12–45 mm) and almost all larvae feed exposed, although many at night, retreating to shelter by day; some are specialists, but many are relative generalists in host plant selection. Roughly 95% are nocturnal.
4. Butterflies (Hesperioidea + Papilionoidea): moderate sized to large (FW mostly 12–45 mm); larvae usually are specialists and feed exposed, remaining so by day, protected by crypsis. Butterflies are diurnal and use visual cues in their mating systems, with pheromones in close-range courtship, and hence many need larger home ranges than most moths, which use pheromone attraction, usually emitted by stationary females.
There is no comprehensive list of Lepidoptera of California, nor any of its 58 counties. There are partial lists for numerous localities, but only the inventory at the Big Creek Reserve, Monterey County, described above, is considered to be thorough. Table 3 summarizes occurrence of the major guilds on the islands and at Big Creek. Obviously, the representation of microlepidoptera is low and of butterflies high for most of the islands, indicating skewed emphasis in inventory.
The proportions of the known fauna represented by micro-, macro-moth, and butterfly superfamilies indicate that the 4 islands that our Berkeley group has surveyed are more comprehensively documented (Tables 1 and 3).
Table 3. "Guild" composition of the Lepidoptera fauna of the California Channel Islands and Big Creek Reserve, Monterey Co., California, expressed as percent of total species (given in parentheses). See Fig. 1 for island abbreviations.
| Locality | (Total) | Microlep | Pyral-Pter | Macromoth | Butterfly |
|----------|---------|----------|------------|-----------|-----------|
| Big Cr | (901) | 41 | 11 | 41 | 6.7 |
| Mig | (50) | 22 | 18 | 44 | 16 |
| Ros | (125) | 18 | 15 | 53 | 16 |
| Cru | (543) | 43 | 15 | 35 | 6.5 |
| Ana | (135) | 30 | 19 | 40 | 11 |
| Bar | (82) | 29 | 20 | 41 | 10 |
| Cat | (372) | 34 | 17 | 41 | 7.8 |
| Nic | (91) | 25 | 18 | 51 | 11 |
| Cle | (166) | 38 | 18 | 36 | 7.8 |
We emphasized microlepidoptera and larval collections in surveys of Santa Cruz, Santa Catalina, San Clemente, and San Nicolas on 6, 4, 4, and 1 visits of 30, 9, 14, and 3 days, respectively, with groups of 2–4 persons. As a result, Lepidoptera are better known on these islands; for example, there are 235+ microlepidoptera recorded for Santa Cruz (43% of the known fauna) and 127 for Santa Catalina (34%), but only 22 (18%) on the second largest island, Santa Rosa (Table 1). The relative proportions of these guilds show San Miguel and Santa Rosa to be the least censused and the 3 other large islands, Santa Cruz, Santa Catalina, and San Clemente, to be the most comprehensively surveyed (Table 3).
**Lepidoptera/floral richness**
The numbers of Lepidoptera per native plant species (Table 2) also indicate that San Miguel and Santa Rosa islands are poorly known relative to the other 6 islands. Numbers of native plants per unit area decrease approximately logarithmically with increasing area among local floras in California. This relationship exists among the Channel Islands, with numbers fewer than expected on Santa Barbara and San Nicolas and greater than expected on Anacapa (data from Wallace 1985). Data are insufficient to confirm whether the same is true of phytophagous insects, so it is unknown whether we should expect the number of Lepidoptera per plant to be comparable between localities of different sizes. There are about 3.1 Lepidoptera/plant species at Big Creek (2,600 ha), for example. If we assume that phytophagous Lepidoptera are somewhat more depauperate than their host plants (there are many examples of mainland Lepidoptera that seem to be lacking from their host plants on the islands; Powell and Wagner 1993), we might project a ratio of 2.0 species per plant on the islands. Santa Cruz and Santa Barbara, with 1.14 and 1.39 species per plant, have about 60–70% of the expected species, while other islands have only about 35–45%, and San Miguel and Santa Rosa 15–20%.
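The arithmetic behind these percentages can be made explicit. In the sketch below, the projected ratio of 2.0 Lepidoptera species per native plant is the text's assumption, and the per-island ratios are taken from Table 2; the function name is invented for the example.

```python
PROJECTED_RATIO = 2.0  # assumed Lepidoptera species per native plant species

def pct_of_expected(lepidoptera_per_plant, projected=PROJECTED_RATIO):
    """Percent of the projected fauna implied by an observed ratio."""
    return 100 * lepidoptera_per_plant / projected

# Observed ratios from Table 2
print(pct_of_expected(1.14))  # Santa Cruz
print(pct_of_expected(1.39))  # Santa Barbara
print(pct_of_expected(0.29))  # San Miguel
```

Santa Cruz and Santa Barbara thus fall in the ~60–70% range of the projection, while San Miguel falls near 15%, consistent with the figures above.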
**Projected fauna based on butterfly census**
Previous projections of the Lepidoptera faunas of San Clemente, Santa Catalina, and Santa Cruz (Powell 1985) were based on the relative proportions of butterfly species recorded and an assumption that all Lepidoptera are similarly depauperate. We now realize those estimates were overly optimistic because the representation of the major groups in the islands’ faunas differs disproportionately from that of the mainland. Leaf miners and other small microlepidoptera are better represented than larger Lepidoptera (Powell and Wagner 1993). The leaf-miner complex is 80–90% intact on host plants that occur on Santa Cruz, and about 70% of the coastal mainland species are known on Santa Cruz, while resident butterflies and larger moths are only about 50% as species rich as a comparable area of the mainland (Miller 1985a, Powell and Wagner 1993).
Based on our analysis (Powell and Wagner 1993), the fauna of Santa Cruz is expected to contain about 48% microlepidoptera, 12% pyraloid-pterophoroids, 35% macro moths, and 5% butterflies. If the butterflies are all recorded, a total of 700 species is projected, of which 77% are recorded. Santa Catalina likely has a similar composition, with 580 species projected (64% known). San Clemente may have had comparable faunal makeup but is severely depauperate (Table 1). If we assume that the 11 species of butterflies that are currently thought to be resident constitute the whole fauna, 220 species of Lepidoptera are projected (75% known). Anacapa and San Miguel should be richer than Santa Barbara and San Nicolas, having been connected to Santa Cruz prior to submergence of the connections. With 14 butterfly species recorded, Anacapa projects to 280 species, with only 48% recorded, which seems possible considering its flora (Table 2). However, vagile species such as larger butterflies may be transient rather than resident, owing to the islands’ proximity to Santa Cruz and the mainland. Santa Barbara and San Nicolas, having been submerged during the Pleistocene, evidently have faunas in which larger, vagile species are more prevalent. The censuses of Santa Rosa and San Miguel are insufficient to permit projections, but even if the butterflies have been completely inventoried, they may be only 30% recorded.
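The butterfly-based projection above amounts to dividing the resident butterfly count by the assumed share of butterflies in a complete fauna (~5%), then comparing the recorded total against that projection. A minimal sketch, assuming the ~5% figure and using the counts quoted in the text:

```python
BUTTERFLY_FRACTION = 0.05  # assumed share of butterflies in a complete fauna

def project_total(n_butterflies, n_recorded):
    """Project total fauna from the butterfly count; return (projected, % known)."""
    projected = n_butterflies / BUTTERFLY_FRACTION
    pct_known = 100 * n_recorded / projected
    return round(projected), round(pct_known)

print(project_total(35, 543))  # Santa Cruz: 35 butterflies, 543 recorded species
print(project_total(11, 166))  # San Clemente
print(project_total(14, 135))  # Anacapa
```

This reproduces the projections of roughly 700, 220, and 280 species quoted above; the method stands or falls with the assumed butterfly fraction.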
Santa Cruz possesses about 73% of the species known from the Channel Islands. If that proportion is reduced minimally with further survey and the above projection is reliable, a conservative estimate of about 1,000 species of Lepidoptera is projected for the islands.
**Area/Species Relationships**
Larger islands tend to have higher elevations and greater topographic relief than smaller ones and therefore more habitat and plant species. As a result, we expect animals, particularly phytophagous insects, to be more diverse on larger islands. For butterflies and the remarkably similar numbers of species in Orthoptera, there is a correlation in area-species relationships (Weissman and Rentz 1976; Powell 1985) (Table 2 and Fig. 3). The correlation coefficient and slope are lower for butterflies ($r = 0.61$, $z = 0.183$) than for grasshoppers and relatives ($r = 0.74$, $z = 0.286$); in both groups, species numbers are low for San Miguel, partly owing to its remoteness, and high for Anacapa, as noted. The numbers of butterfly species in Table 2 exclude island records believed to be nonbreeding, transient species, but those of unknown status are included, several of which may be older records of vagrants or extinct on particular islands now (Powell 1985).
For moths and Lepidoptera as a whole, there are too few records of microlepidoptera for San Miguel and Santa Rosa to allow meaningful comparison with the other islands (Tables 1 and 3). The log-log linear regression for all species on the other 6 islands shows a better correlation and steeper slope ($r = 0.715$, $z = 0.260$) than do butterfly data alone (Fig. 4). These slopes are less than those calculated for native vascular plants of the Channel Islands ($r = 0.89$, $z = 0.38$) (Raven 1967). For this assessment, I included species thought to be vagrants because such decisions must be subjective, and their numbers are negligible relative to island totals. Native species that depend upon introduced plants and nonnative species that have been introduced from other parts of the world and feed on weedy or native plants are included (Powell 1985, examples).
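The log-log regression underlying these statistics fits the familiar power law $S = cA^z$ by ordinary least squares on $\log_{10}$-transformed data. As a check, the slope and correlation can be recomputed from the areas and species totals in Table 2 for the 6 islands with adequate sampling (a sketch; only the table's own figures are used):

```python
import math

# Areas (km^2) and total Lepidoptera from Table 2: Cru, Ana, Bar, Cat, Nic, Cle
areas   = [249, 2.9, 2.6, 194, 58, 145]
species = [543, 135, 82, 372, 91, 166]

x = [math.log10(a) for a in areas]
y = [math.log10(s) for s in species]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
vx  = sum((xi - mx) ** 2 for xi in x)
vy  = sum((yi - my) ** 2 for yi in y)

z = cov / vx                   # regression slope of log S on log A
r = cov / math.sqrt(vx * vy)   # correlation coefficient
print(round(z, 3), round(r, 3))
```

Run on the Table 2 figures, this yields values very close to the reported $z = 0.260$ and $r = 0.715$.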
Maximum elevation and numbers of vascular plant species are correlated with area and are principal predictors of species richness in birds of the Channel Islands (Power 1976). Elevations of the 8 Channel Islands explain species numbers of Orthoptera better than area (Weissman and Rentz 1976). For Lepidoptera, the same is true for both plant numbers and elevation (Figs. 5, 6). I used numbers of plants given by Wallace (1985), including nonnatives because many Lepidoptera feed on them. I compared species numbers to maximum elevations, converted to metric from maps published by Menke (1968), which are based on USGS topographic sheets. Elevations given by Wallace, by Weissman and Rentz (cited by Power), and by Menke differ among the islands inconsistently, in some cases by 10–12%.
If the Channel Islands were of the same age, topographically and ecologically uniform, equidistant from the mainland, undisturbed by nonnative plants and animals, and thoroughly censused, the numbers of species per island might be at equilibrium between extinction and colonization (MacArthur 1957, MacArthur and Wilson 1963). Of course the islands share none of these qualities; the area/species relationships reflect habitat diversity and not a dynamic equilibrium between extinction and colonization. Rather, I believe the islands are undersaturated because there was suppression of the original numbers by
Table 4. Described and undescribed Lepidoptera that are considered to be endemic to the California Channel Islands. See Fig. 1 for island abbreviations.
Nepticuloidea
*Stigmella n. sp.* (*Lyonothamnus*) Davis (Cru, Cle)
Tineoidea
*Acrocercops insulariella* Opler, 1971 (Cru)
Gelechioidea
*Agonopterix toega* Hodges, 1974 (Cle)
*Holcocera phenacoci* Braun, 1927 (Cat)
*Chionodes n. sp.* Hodges ms (Cru, Ana, Bar, Cat)
*Coleotechnites n. sp.* (*Lyonothamnus*) Powell (Cru, Cle)
*Ephysteris n. sp.* Povolny ms (Cru)
*Scrobipalpula n. sp.* nr. *chiquitella* Povolny ms (Cle)
*Scrobipalpula n. sp.* (*Lycium*) Povolny ms (Cle)
*Vladimiria? n. sp.* Povolny ms (Cru, Cle)
Yponomeutoidea
*Ypsolopha lyonothamnae* (Powell), 1967 (Cru, Cle)
Tortricoidea
*Argyrotaenia franciscana insulana* Powell, 1964 (Mig, Ros, Cru, Ana, Nic)
*Argyrotaenia isolatissima* Powell, 1964 (Bar)
Pyraloidea
*Evergestis angustalis catalinae* Munroe, 1973 (Cat)
*Sosipatra proximanthophila* Neunzig, 1990 (Cru, Cat)
*Vitula insula* Neunzig, 1990 (Cru, Cat)
Geometroidea
*Pero catalina* Poole, 1987 (Cat)
*Pero n. sp.* nr. *gigantea* Grossbeck, Poole (Cle)
*Pterotaea crinigera* Rindge, 1970 (Cle)
Noctuoidea
*Arachnis picta insularis* Clarke, 1940 (Ana)
*Arachnis picta meadowsi* Comstock, 1942 (Cat)
*Lophocampa indistincta* (Barnes & McDunnough), 1910 (Ros?, Cru, Ana, Cat)
*Feralia meadowsi* Buckett, 1968 (Cru?, Cat)
*Zosteropoda clementei* Meadows, 1942 (Ros, Cru, Cle)
Hesperioidea
*Ochlodes sylvanoides santacruza* Scott, 1981 (Cru)
Papilionoidea
*Anthocharis cethura catalina* Meadows, 1937 (Cat)
*Strymon avalona* (Wright), 1905 (Cat)
*Euphydryas editha insularis* Emmel & Emmel, 1975 (Ros)
Figure 4. Log-log correlation between number of all recorded Lepidoptera and island area for 6 California Channel Islands (abbreviations, see Fig. 1). Data for San Miguel and Santa Rosa are indicated for reference purpose (circles) but are omitted from the regression because Lepidoptera sampling is insufficient for comparison.
Endemism
There have been at least 25 species or subspecies of Lepidoptera described from type localities on the Channel Islands, including several that are now considered to be synonyms or are known on the mainland (Powell 1985 review; Poole 1987; Neunzig 1990; Landry 1991). Several undescribed species are believed to be endemic, yielding a total of 28 endemic taxa, about 5.3% of the fauna (Table 4). Among these, 21 are treated as distinct from any mainland species, although the mainland relatives of several have not been adequately studied. The endemics represent 10 superfamilies; the apparent concentration in Gelechioidea probably is a measure of the preliminary state of knowledge of the mainland fauna. There are numerous additional undescribed species, primarily in Tineoidea and Gelechioidea, but the California fauna is too poorly known to identify mainland populations in these taxa.
Endemic taxa appear to be of 2 basic kinds: those that evolved by adaptation under island conditions, and relicts that were once widespread and now are restricted to the islands because of changing conditions on the mainland. Most are closely related to mainland sister groups. There is no direct evidence as to whether they were isolated originally by vicariant separation during geologic events or whether they reached individual islands by dispersal. Differentiated endemics on separate islands are recognized in *Arachnis picta* Packard and the *Argyrotaenia franciscana* (Walsingham) complex, but there are no sympatric sister taxa as are known in flightless crickets (*Cnemotettix*) (Weissman and Rentz 1976).
Presumed relicts include 3 species whose larvae feed on the endemic tree, *Lyonothamnus* (Rosaceae): an undescribed nepticulid, *Coleotechnites* n. sp., and *Ypsolopha* (*Trachoma*) *lyonothamnae*. There are fossil equivalents of *Lyonothamnus floribundus* in widespread mainland floras of the Miocene (Raven and Axelrod 1978). One of 2 Rosaceae-feeding species of the *Trachoma* group on the mainland is presumed to be the sister of *Y. lyonothamnae* (Powell 1967). *Coleotechnites* n. sp. also feeds on other Rosaceae on San Clemente: *Prunus ilicifolia*, a plant represented by a similar evergreen cherry in Miocene floras of the mainland, and *Heteromeles arbutifolia*. A sister *Coleotechnites* in the San Francisco Bay area feeds on *H. arbutifolia*.
The endemic Lepidoptera display interesting geographic patterns (Fig. 1). Most (60%) are restricted to 1 island, and all but San Miguel and San Nicolas have at least 1 endemic, with 6 on Santa Catalina and 5 on San Clemente, but none occurs on both of the large southern islands, reflecting their long history of isolation and more distant separation prior to mid-Miocene. Six occur on both Santa Cruz and Santa Catalina but not San Clemente (2 of them also on Anacapa and/or Santa Barbara), while conversely, 4, including the 3 *Lyonothamnus* associates, are known only from Santa Cruz or San Clemente but not Santa Catalina. The latter perhaps have been overlooked on Santa Catalina, but Miocene fossils allied to the typical subspecies of *L. floribundus* on Santa Catalina are coastal, west of the San Andreas fault, while those of the Santa Cruz-San Clemente subspecies, *L. f. asplenifolius*, occur to the north in the interior (Raven and Axelrod 1978).
Endemic differentiation also occurs at more subtle levels in many moths, such as in proportions of polymorphic phases in Arctiidae, size of individuals, larger or smaller, food plant preference, and seasonal phenology (Powell 1985, examples).
**Faunal Relationships**
Geographical distribution patterns of Lepidoptera of the California Channel Islands may be categorized into 5 types, in addition to insular endemism:
1. **Widespread.** Most Channel Island insects occur widely on the mainland, in California ranging from the southern California coastal area or the Peninsular Ranges to northern California and beyond. As noted, 44% of the species occur on 1 or more of both the southern and northern islands. Several others that are known only from 1 island, particularly Santa Cruz or Santa Catalina, are widespread on the mainland. An estimated 70–75% of the total fauna are widespread, including 10% that are nonnative, weedy, and/or vagrant species.
2. **Californian Province.** This region harbors many unique communities of plants and animals rich in endemic taxa (Miller 1951; Stebbins and Major 1965). It has been defined as the cismontane portion of southern California, including the Peninsular Ranges, and adjacent Baja California, extending northeastward to the Santa Lucia Mountains and Monterey area or the Mount Hamilton Range in the San Francisco Bay area (Miller 1951; Powell and Hogue 1979). Many of its Lepidoptera occur on the Channel Islands, including the 21 endemic species; I estimate that about 10% of the island fauna consists of species limited to this region. The tortricid, *Argyrotaenia niscana* (Kearfott), a specialist feeder on chamise (*Adenostoma fasciculatum*) (Rosaceae), is typical (Powell 1985, examples).
3. **Coastal Strand.** A small element of the insular fauna (ca 2%) consists of beach dune and coastal strand insects (Powell 1981b). Although this component consists partly of Californian Province species, others are more widely distributed along the Pacific Coast and even Gulf and East coasts. Species of microlepidoptera that feed on *Ambrosia chamissonis* exemplify the group. These species evidently have exceptional ability to immigrate and colonize, such as on isolated, small beaches. For example, only 14% of the species on San Nicolas, the most remote island, are oligophagous plant feeders, all of them on beach and coastal strand plants (Powell 1985).
4. **Desert.** As is true of the flora (Raven and Axelrod 1978), an appreciable number of Lepidoptera on the Channel Islands have affinities to the arid south or east, usually interior mountain communities. They often occur far from the adjacent coast; examples include *Lithariapteryx tuberculata* Comstock, on Santa Cruz (Powell 1991); *Noctueliopsis grandis* Munroe, a San Clemente crambid known on the mainland only from north-central Baja California; and the endemic geometrid, *Pterotaea crinigera*, on San Clemente and its low-desert sister species, *P. crickneri* (Sperry) (Rindge 1970). Such species may be very widespread in the arid Southwest, e.g., *Eupithecia longisoma* Landry (1991), which occurs on all 4 southern islands and ranges to western Texas and the Tres Marias Islands off the coast of Nayarit, Mexico. About 6–8% of the insular fauna shows this affinity, particularly in the southern islands (14% on San Clemente). Presumably these represent relicts of past arid climates, from the mid-Pliocene, which was the driest part of the Tertiary in the region, into the Xerothermic Period, ~8,000–3,000 yr ago (Raven and Axelrod 1978).
5. **Northern.** Raven (1967) estimated that at least 10% of the vascular plants of the Channel Islands have mainland distributions not adjacent to the islands, and the
great majority of those are northern species. Based on macro moths and selected taxa of pyraloids and microlepidoptera, I estimated that about 14% of the fauna of the northern island group and 6% of the southern show this northern affinity (Powell 1985). For instance, 5 species of Incurvariidae that occur on Santa Cruz are abundant in the central Coast Ranges and Sierra Nevada foothills but are absent or persist only in a few local, favorable sites in southern California, presumably relicts of past pluvial times (Powell 1985). This relationship is reflected in the higher proportion of shared species between Big Creek, Monterey Co., and Santa Cruz (29%) than between Big Creek and Santa Catalina (20%) (Table 5).
**Inter-island similarity**
Pair-wise comparisons of the species roster between islands show low percent similarity when shared species are compared to total species (Table 5), because so many species are known only from Santa Cruz or Santa Catalina. Comparison of shared species to the smaller fauna in each pair shows 77–79% resemblance between Santa Cruz and the adjacent islands that were joined during glacial times. By contrast, among the other larger islands, Santa Catalina and San Clemente share about the same proportion of their fauna with Santa Cruz, 66% and 69% respectively, as San Clemente does with Santa Catalina (67%).
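The two similarity measures used here differ only in the denominator: shared species divided by the combined species list, versus shared species divided by the smaller of the two faunas. A minimal sketch, with invented species sets standing in for real island rosters:

```python
def pct_of_total(a, b):
    """Shared species as a percent of the combined (pooled) species list."""
    return 100 * len(a & b) / len(a | b)

def pct_of_smaller(a, b):
    """Shared species as a percent of the smaller fauna in the pair."""
    return 100 * len(a & b) / min(len(a), len(b))

# Hypothetical rosters: 100 species vs. 80 species, 40 shared
island_x = {"sp%d" % i for i in range(100)}
island_y = {"sp%d" % i for i in range(60, 140)}

print(pct_of_total(island_x, island_y))
print(pct_of_smaller(island_x, island_y))
```

The second measure discounts the inflation caused by the many species known only from the intensively sampled large islands, which is why it gives the higher, more interpretable resemblance values quoted above.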
Santa Catalina and San Clemente are situated only 35 km apart and are more similar in size to each other than either is to Santa Cruz. However, they share only 27% of their total species inventory, which is attributable to several factors. Geological evidence reviewed above indicates much more distant origins for these 2 islands than today's geography suggests. Separate origins are suggested by marked differences in the flora (Raven 1963), in patterns of endemism in Lepidoptera (Fig. 1), and by the different relationships to Miocene fossil floras of the mainland, such as the *Lyonothamnus* example noted above. San Clemente is much more arid, and this is reflected in the vegetation (Minnich 1980). Santa Catalina has a much richer flora (1.6X the number of species; Wallace 1985), with a long roster of species and genera that are lacking from San Clemente (Raven 1963). Included are many perennials used by Lepidoptera as larval hosts, such as *Populus* and *Salix*, the Rosaceae *Cercocarpus*, *Holodiscus*, and *Rubus*, *Arctostaphylos*, *Brickellia*, *Solidago*, and species of many other genera that provide the framework of the Santa Catalina Lepidoptera community. Irrespective of similarity in size, San Clemente could not support the diversity and richness in phytophagous insects that its neighbor does. Finally, San Clemente suffered a more severe impact from ruminants, particularly goats, which fragmented many of the extant plant populations into tiny patches.
Butterflies show much higher percent congruence between islands than do the moths. This may be in part an artifact of sampling, since a butterfly census can be made nearly complete on an island far more easily. However, the insular butterflies tend to be species that are wide ranging as adults (i.e., *Papilio*, pierids, *Junonia*, *Vanessa*, and weedy Hesperiidae make up about 40% of the species); most are homodynamic (55%) and polyphagous and/or feed on weedy plants (58%) (Powell 1985). Taxa that are more niche-specific, often univoltine (e.g., *Speyeria* and other nymphalids, Lycaenidae), are rare on the islands. Homodynamic, polyphagous, and weedy species are proportionately less numerous among moths, especially Geometridae and microlepidoptera.
**Acknowledgments.** In addition to the collectors and taxonomists cited elsewhere (Powell 1985, Powell and Wagner 1993), I acknowledge the following for inventory efforts during recent years: S. Bennett (Santa Catalina 1981–1982), C. Drost (Anacapa, Santa Barbara 1985–1988), J. De Benedictis (San Clemente 1987), P. Supe (San Miguel 1987). This study has been made possible by the generosity of taxonomists; I again thank those previously credited, as well as Y.-H. Hu (Heliodinidae), J.-F. Landry (Coleophoridae, Scythrididae), D. Lafontaine and R. Robertson (Noctuidae), H. Neunzig (Pyralidae), D. Povolny (Gelechiidae). The species numbers regression calculations and figures were prepared by J. De Benedictis. Cooperation by authorities of the following institutions enabled use of collections in their care: California Academy of Sciences (CAS); Essig Museum of Entomology, University of California, Berkeley (EME); Los Angeles County Museum of Natural History (LACM); Peabody Museum, Yale University (PMY); Santa Barbara Museum of Natural History (SBMNH); San Diego Natural History Museum (SDNHM); National Museum of Natural History (USNM). Helpful comments on the manuscript were provided by Paul DaSilva and two anonymous reviewers.
**Literature Cited**
Coblentz, B. E. 1980. Effects of feral goats on the Santa Catalina Island ecosystem. In: The California Islands (edited by D. Power), Santa Barbara Museum of Natural History, Santa Barbara, California, pp. 167–170.
Crouse, J. K. 1979. Neogene tectonic evolution of the California continental borderland and western Transverse Ranges. Geological Society of America Bulletin Part 1, 90:338–345.
Farnham, T. J. 1947. Travels in California. Biobooks, Oakland, California.
Ferguson, O. C. 1991. Essay on the long-range dispersal and biogeography of Lepidoptera, with special reference to the Lepidoptera of Bermuda. In: Lepidoptera of Bermuda: Their Foodplants, Biogeography, and Means of Dispersal (edited by Ferguson et al.), Memoirs Entomological Society Canada 158, pp. 67–77.
Halvorson, W. L. 1992. Alien plants at Channel Islands National Park. In: Alien Plant Invasions in Native Ecosystems of Hawai'i: Management and Research (edited by C. P. Stone, C. W. Smith, and J. T. Tunison), University of Hawaii Cooperative National Park Resources Studies Unit, Honolulu, pp. 64–96.
Hodges, R. W., et al. (editors). 1983. Check List of the Lepidoptera of America North of Mexico. E. W. Classey Ltd. and Wedge Entomological Research Foundation, London, 284 pp.
Hornafius, J. S., B. P. Luyendyk, R. R. Terres, and M. J. Kammerling. 1986. Timing and extent of Neogene tectonic rotation in the western Transverse Ranges, California. Geological Society of America, Bulletin 97:1476–1487.
Johnson, D. L. 1978. The origin of island mammals and the Quaternary land bridge history of the northern Channel Islands, California. Quaternary Research 10:204–225.
Johnson, D. L. 1980. Episodic vegetation stripping, soil erosion, and landscape modification in prehistoric and historic time, San Miguel Island, California. In: The California Islands (edited by D. Power), Santa Barbara Museum Natural History, Santa Barbara, California, pp. 103–120.
Keegan, D. R., A. Cubberley, and C. S. Winchell. 1994. Feral Goats Eradicated on San Clemente Island, California. In: Fourth California Islands Symposium: Update on the Status of Resources (edited by W. L. Halvorson and G. J. Maender), Santa Barbara Museum of Natural History, Santa Barbara, California (this volume).
Landry, J.-F. 1991. Systematics of Nearctic Scythrididae (Lepidoptera: Gelechioidea): Phylogeny and classification of supraspecific taxa, with a review of described species. Memoirs Entomological Society Canada, 160. 341 pp.
Laughlin, L., M. Carroll, A. Bronfmland, and J. Carroll. 1994. Trends in Vegetation Changes with Removal of Feral Animal Grazing Pressures on Santa Catalina Island. In: Fourth California Islands Symposium: Update on the Status of Resources (edited by W. L. Halvorson and G. J. Maender), Santa Barbara Museum of Natural History, Santa Barbara, California (this volume).
Luyendyk, B. P. 1991. A model for Neogene crustal rotations, transtension, and transpression in southern California. Geological Society of America, Bulletin 103:1528–1538.
Luyendyk, B. P., M. J. Kammerling, and R. Terres. 1980. Geometric model for Neogene crustal rotations in southern California. Geological Society of America Bulletin Part 1, 91:211–217.
MacArthur, R. H., and E. O. Wilson. 1963. An equilibrium theory of insular biogeography. Evolution 17:373–380.
Meadows, D. 1939. An annotated list of the Lepidoptera of Santa Catalina Island, California. Part II. Sphingidae and Arctiidae. Bulletin Southern California Academy of Sciences 37:133–136.
Menke, A. S. 1985. Maps and place names of the California Channel Islands. In: Entomology of the California Channel Islands (edited by A. Menke and D. Miller), Santa Barbara Museum Natural History, Santa Barbara, California, pp. 171–178 + maps.
**Table 5.** Pairwise similarity in Lepidoptera species between Big Creek, Santa Cruz, Santa Catalina, and San Clemente islands, and between Santa Cruz and adjacent islands, expressed as total shared species (upper right matrix), percent shared of the total combined fauna (left of slash, lower left matrix), and percent shared of the smaller fauna in each pair (right of slash). See Fig. 1 for island abbreviations.
| | BigCr | Ros | Cru | Ana | Cat | Cle |
|-------|-------|-------|-------|-------|-------|-------|
| BigCr | — | — | 320 | — | 215 | 87 |
| Ros | — | — | 96 | — | — | — |
| Cru | 29/60 | 17/77 | — | 106 | 247 | 115 |
| Ana | — | — | 19/79 | — | — | — |
| Cat | 20/58 | — | 37/66 | — | — | 112 |
| Cle | 9/52 | — | 19/69 | — | 27/67 | — |
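The two similarity measures reported in Table 5 can be sketched as follows. This is an illustrative Python sketch, not code from the paper; the species counts used in the example are hypothetical assumptions chosen so that the result matches the form of the Santa Cruz–Anacapa cell (19/79).

```python
def faunal_similarity(shared: int, n_a: int, n_b: int) -> tuple[float, float]:
    """Return (percent shared of total combined fauna, percent shared of smaller fauna).

    The total combined fauna is the union of the two species lists:
    n_a + n_b - shared.
    """
    pct_total = 100.0 * shared / (n_a + n_b - shared)
    pct_smaller = 100.0 * shared / min(n_a, n_b)
    return pct_total, pct_smaller


# Hypothetical counts for two islands (assumed for illustration only):
pct_total, pct_smaller = faunal_similarity(shared=106, n_a=530, n_b=134)
print(round(pct_total), round(pct_smaller))  # 19 79
```

The asymmetry between the two measures explains the pattern in the text: when one fauna is much smaller (or much less completely surveyed), the shared-of-total percentage is depressed even though most of the smaller fauna is shared.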
Ecological Monitoring in Channel Islands National Park, California
Gary E. Davis (1), Kathryn R. Faulkner (2), and William L. Halvorson (3)
(1) National Biological Survey, Channel Islands Field Station, 1901 Spinnaker Drive, Ventura, CA 93001-4354; Tel. (805) 658-5707; Fax (805) 658-5799
(2) Channel Islands National Park, 1901 Spinnaker Drive, Ventura, CA 93001-4354; Tel. (805) 658-5707; Fax (805) 658-5709
(3) National Biological Survey, University of Arizona Cooperative Park Studies Unit, 125 Biological Sciences East, The University of Arizona, Tucson, AZ 85721; Tel. (602) 670-6885; Fax (602) 670-5701
Abstract. Natural resource managers need to understand the natural functioning of and threats to ecosystems under their management. They need a long-term monitoring program to gather information on ecosystem health, establish empirical limits of variation, diagnose abnormal conditions, and identify potential adaptive change. The approach used to design such a program at Channel Islands National Park, California, may be applied to other ecosystems worldwide. The design of the monitoring program began with a conceptual model of the park ecosystem. Indicator species from each ecosystem component were selected using a Delphi approach. Scientists identified parameters of population dynamics to measure, such as abundance, distribution, age structure, reproductive effort, and growth rate. Short-term design studies were conducted to develop monitoring protocols for pinnipeds, seabirds, rocky intertidal communities, kelp forest communities, terrestrial vertebrates, land birds, terrestrial vegetation, fishery harvest, visitors, weather, sand beach and coastal lagoon, and terrestrial invertebrates (indicated in priority order set by park staff). Monitoring information provides park and natural resource managers with useful products for planning, program evaluation, and critical issue identification. It also provides the scientific community with an ecosystem-wide framework of population information.
Keywords: Channel Islands National Park; natural resources monitoring; pinnipeds; seabirds; rocky intertidal; kelp forest; terrestrial vertebrates; land birds; terrestrial vegetation; fishery harvest; visitors; weather; sand beach; coastal lagoon; terrestrial invertebrates.
Introduction
How healthy are ecosystems in Channel Islands National Park? Without management intervention, are they capable of coping with altered water supplies, human consumption of “renewable” resources, accelerated invasions of nonnative species, physical impacts of visitors, and air pollution? How do we determine when to intervene in natural resource issues, and how far should we go in our remedial actions? Land managers need answers to questions like these to protect threatened ecosystems worldwide.
Ecosystems are changing in ways never before seen. Lack of historical and contemporary data makes it difficult to clearly define the nature and extent of these changes (Orians 1986). Unless we begin to gather empirical data on the health of ecosystems now, the changes may become irreversible and fatal. Alternatively, we may unnecessarily impose controls on human activities out of fear of the unknown. Politically, this kind of uncertainty tends to freeze action, for fear of over-reacting or of changing systems perceived as naturally static (Wurman 1990). Uncertainty about ecosystem dynamics ranges from concerns for global climate change to visitor disturbance of wildlife and trail erosion.
In this paper we present a conceptual approach to designing ecological monitoring programs to address these kinds of issues. We also describe a specific application of this concept to Channel Islands National Park and International Biosphere Reserve, California, to serve as a model for the U.S. National Park System and other protected natural areas.
Design of a Monitoring Program
An appropriately designed natural resources monitoring program can reduce uncertainty and address critical questions about system dynamics. What to monitor, and the appropriate level of accuracy, varies from area to area, but the basic reasons for monitoring are universal. They are to:
PROFESSOR: DR. LAURA B. FORKER
PREREQUISITE: POM 212
EMAIL: email@example.com
NO. OF COURSE CREDITS: 3 CREDITS
OFFICE: CCB 205
COURSE FORMAT: 100% ONLINE
OFFICE HOURS: M-TH 10:00 A.M.-12:00 P.M.; FACE-TO-FACE OFFICE HOURS BY APP'T.
DIS DEP'T. SECRETARY: JUMA MILLER, firstname.lastname@example.org, 508-999-8862
COURSE BLACKBOARD ADDRESS:
https://umassd.umassonline.net/webapps/blackboard/content/listContentEditable.jsp?content_id=_1266389_1&course_id=_19613_1
COURSE DESCRIPTION
Design, development, direction, and distribution methods used to deliver goods and services. Topics covered include operations strategy and the management of quality, inventory, supply, capacity and demand, and others. Conceptual, analytical, and quantitative techniques are taught to improve the efficiency and effectiveness of transformation processes in organizations.
COURSE OBJECTIVES
Upon successful completion of this course, the student will be able to:
* Describe those aspects of operations management of particular importance to goods-producing and service-producing organizations.
* Systematically approach problems in goods- and service-producing organizations using appropriate analytical and quantitative techniques to solve problems.
* Identify the value-adding steps of a process, recognize problems in that process, and take leadership for improvement.
* Interpret data in organizational publications including forecasts, inventory and quality metrics, and utilization for use in strategy formulation and development.
REQUIRED TEXT
Fitzsimmons, J.A., Fitzsimmons, M.A., and Bordoloi, S.K. (2014) Service Management: Operations, Strategy, Information Technology, 8th edition, Irwin/McGraw Hill. ISBN: 978-0-07-802407-8.
* Available for purchase or to rent on Amazon.com at:
```
https://www.amazon.com/gp/offer-listing/0077841204/ref=dp_olp_all_mbc?ie=UTF8&condition=all
```
Prices as of 4/1/19: $47.36 – $171.35 (used & new); some sellers offer free shipping but most charge shipping as an additional fee. (All prices quoted here don't include shipping.)
https://www.amazon.com/Management-Software-Mcgraw-hill-Operations-Decision/dp/0077841204/ref=sr_1_fkmr1_1?keywords=Fitzsimmons%2C+J.A.%2C+Fitzsimmons%2C+M.A.%2C+Bordoloi%2C+S.K.%2C+%282014%29+Service+Management%3A+Operations%2C+Strategy%2C+Information+Technology%2C+8th+edition&qid=1552073196&s=gateway&sr=8-1-fkmr1
This is Amazon's text rental address. Amazon's rental price is $67.53 (as of 4/1/19).
* E-book format of text available to rent for 60 days at a substantial discount at:
https://www.textbooks.com/Service-Management-Operations-Strategy-Information-Technology---Text-Only-8th-Edition/9780078024078/James-A-Fitzsimmons.php
Ebook (rent): $35.28 (rental period expires after 60 days); used texts sell at $67.11.
* Available for purchase (new or used) or to rent from Barnes & Noble at:
https://www.barnesandnoble.com/w/mp-service-management-with-service-model-software-access-card-james-fitzsimmons/1114276011?ean=9780077841201&pcta=u&st=PLA&sid=BNB_New+Core+Shopping+Top+Margin+EANs&sourceId=PLAGoNA&dpid=tdtve346c&2sid=Google_c&gclid=EAIaIQobChMI9ZW9xvqn3wIVjlqGCh2yngbbEAQYBSABEgJWVvD_BwE
Prices as of 4/1/19: $95.19 (used); $173.08 (new); Marketplace prices: $47.49 – $385.67
You do not need the access codes, CDs/DVDs, workbooks, and other supplemental items.
ELECTRONIC COMMUNICATION
* I don't send acknowledgement emails when students submit assignments. If you don't hear from me after submitting your work, consider it a good thing; you may still want to recheck your email to make sure any necessary attachment(s) were included. I will email you within 24 hours after due dates about any missing work.
* I will post information about discussion groups, tests, quizzes, any changes in due dates, and other course matter on the "Announcements" board in the POM 345 myCourses site.
GRADING SCALE
[Table of letter grades and their grade point equivalents not reproduced.]
GRADING BREAKDOWN
Course grades will be determined by a weighted average using the quiz/test/discussion grade point equivalents and the percentages for these as the weights.
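The weighted-average calculation described above can be sketched as follows. This is an illustrative example only: the component names, weights, and grade point equivalents below are hypothetical assumptions, not the actual breakdown for this course.

```python
# Hypothetical grading components (weights must sum to 1.0):
components = {
    "quizzes":    {"weight": 0.30, "points": 3.3},  # e.g., a B+ equivalent
    "tests":      {"weight": 0.50, "points": 3.0},  # e.g., a B equivalent
    "discussion": {"weight": 0.20, "points": 4.0},  # e.g., an A equivalent
}

# Course grade = sum of (weight x grade point equivalent) over all components.
course_gpa = sum(c["weight"] * c["points"] for c in components.values())
print(round(course_gpa, 2))  # 3.29
```

The final letter grade would then be read off the grading scale from this weighted grade point value.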
Note: No make-up tests or quizzes will be given. This includes taking the same test or quiz as the rest of the class at a later date. Also, no partial credit will be given for multiple choice and true/false questions.
Online decorum and courtesy to your classmates and to your professor are order qualifiers for a high grade in online participation. Online participation will be graded at the end of the course term. There will be no periodic participation grades posted. You must actually participate for any dimension of quality to be assessed.
If the professor determines that any assignment, including answers to homework questions posted on the online discussion boards, has been plagiarized from any source, the student will receive a failing grade (i.e., F). There will be no opportunity for "extra credit" or alternative assignments to replace or make up for the failing grade.
ONLINE WEEKLY SCHEDULE

```
Day 1 - Monday
Day 2 - Tuesday
Day 3 - Wednesday
Day 4 - Thursday
Day 5 - Friday
Day 6 - Saturday
Day 7 - Sunday
```

Electronic weeks begin on Monday and end on Sunday. You have until midnight the day a discussion assignment is due to post your answers to the questions on your group's discussion board. Once solutions to discussion questions have been posted, no late answers will be accepted regardless of circumstances. Answers to questions/problems you are not required to post or turn in will also be posted on the POM 345 myCourses site.
STUDENT ACADEMIC INTEGRITY POLICY
Students are responsible for the content and integrity of the academic work they submit. Each student's submitted assignments, responses to discussion questions, and tests/quizzes must be that student's own work. Actions constituting misconduct include but are not limited to:
* Submitting the work of another person as your own. This includes other students' work, internal company documents or memos, advertising literature, any published information whether it appears in printed or electronic form (this includes material obtained from web sites, including government and company web sites, Wikipedia, YouTube, and other Internet-based sources), and any unpublished information (authored or anonymous);
* Submitting answers to homework/discussion questions and/or quizzes/tests/exams, word-for-word or reworded, that may have been posted in previous semesters of this course, as your own work. The UMass Dartmouth Student Academic Integrity Policy defines plagiarism as: "Plagiarism is the representation of the words or ideas of another as one's own in any academic exercise." This includes homework/discussion questions and quizzes/tests/exams;
* Misrepresenting your own work to an instructor; or
* Collaborating with other students during an exam, test, or quiz;
For further details, consult the UMass Dartmouth Student Academic Integrity Policy online at: http://www.umassd.edu/studentaffairs/studenthandbookintroduction/studentconductpolicies/ academicintegritypolicy/
INCOMPLETE POLICY
According to the university catalog, an incomplete may be given only in exceptional circumstances at the instructor's discretion. The student must be passing at the time of the request. If the work is not completed within one year of the recording of the incomplete grade, the grade will become an F(I). The incomplete policy for this course is that at least 80% of the course must be already completed and an exceptional circumstance (i.e. medical issue) must exist. If you feel you require an incomplete for an exceptional reason, you should email me and state your reasons for the incomplete in writing. I will then decide on a course of action.
ACADEMIC SUPPORT SERVICES
The STEM Learning Lab offers tutoring services for accounting, finance, and business courses (as well as engineering, math & science). (https://www.umassd.edu/arc/stem-learning-lab/services/)
The Multiliteracy & Communication Center (MCC) is a free tutoring service available to all members of the UMassD community. The MCC provides consultations for the following:
* brainstorming, development of arguments, organization, and clarity for written essays
* document and Web writing/design
* research and reading strategies
* professional preparation, including resumes and statements of purpose
All online tutoring sessions are conducted synchronously, meaning a student meets with his/her tutor in "real time" in a manner similar to a face-to-face session. Currently, online tutoring is only available in the MCC (LARTS, 221; Mon–Fri, 10 a.m.-3 p.m.) and the Library Learning Commons (Room 135; Sun–Thurs, 6 p.m.-9 p.m.). To make an online appointment, follow these steps:
* If you've never made an appointment before, you'll need to first create a new account at: https://umassd.mywconline.net/register.php
* Once you've registered, you can log in to the schedule at: https://umassd.mywconline.net/
* Tutors available for online appointments have "face-to-face or online" listed below their name. Please click on a white (available) appointment space with one of these tutors.
* Once you click on a white space, an appointment pop-up window will open. Make sure that you use the drop down in the "Meet Online?" section to select "Yes - Schedule an Online Appointment." Fill out the rest of the form with your assignment information.
* Five minutes prior to the start of your scheduled appointment, log back in to the scheduler and click on your appointment. The window will pop up again with a link: "Start or Join Online Consultation." Click this link to meet with your tutor.
* You'll be able to share your assignment with your tutor, but it is helpful if you can have your assignment ready in Word doc (.doc or .docx) format. Please note too that the system removes formatting, so, if you have questions about formatting, you may need to also share your assignment with your tutor via email. Tutors can chat with you via text or video, though if you wish to talk with a tutor using the video feature, please make sure that you are accessing the appointment via a strong internet connection.
Technical assistance can be found at the Help Desk in the Learning Commons, Claire Carney Library, 1st floor. Hours are M-TH 10 a.m.-9 p.m., F 10 a.m.-5 p.m. Their expertise includes:
* Network connections and registration
* Virus and spyware removals
* Software installations and downloads
* Access to UMass systems and services
Contact information: 508-999-8040; https://www.umassd.edu/cits/servicecenters/students/
myCourses help: email@example.com or 508-999-8505 (M-F 9 a.m.-5 p.m.)
24-hour myCourses help: https://embanet.frontlinesvc.com/app/home/p/2146
24-hour technical assistance (off-hours, weekends, and holidays): http://umd.echelp.org
Support information for other UMass Dartmouth technologies:
http://www.umassd.edu/extension/technicalresources/
UNIVERSITY SUPPORT SERVICES AND POLICIES
The Center for Access and Success (CAS) can provide accommodations for students with documented disabilities to assist them in their learning. When you bring proper documentation to the Center, located in Pine Dale Hall Room 7136, you can obtain the necessary paperwork to provide your instructor. (CAS phone number: 508-999-8711; http://www.umassd.edu/dss/)
Grade appeals: Information about what can be appealed, who to file a grade appeal with, and what the grade appeal process entails can be found at: https://www.umassd.edu/acadvising/grades/
The purpose of a university is to disseminate information, as well as to explore a universe of ideas, to encourage diverse perspectives and robust expression, and to foster the development of critical and analytical thinking skills. In many classes, including this one, students and faculty examine and analyze challenging and controversial topics.
If a topic covered in this class triggers post-traumatic stress or other emotional distress, please discuss the matter with the professor. You may also seek out resources from the Counseling Center, http://www.umassd.edu/counselling/, or the Victim Advocate in the Center for Women, Gender and Sexuality, http://www.umassd.edu/sexualviolence/. In an emergency, contact the Department of Public Safety at 508-999-9191, 24 hrs/day.
UMass Dartmouth, following national guidelines from the Office of Civil Rights, requires that faculty follow UMass Dartmouth policy as a "mandated reporter" of any disclosure of sexual harassment, abuse, and/or violence shared with the faculty member in person and/or via email. These disclosures include but are not limited to reports of sexual assault, relational abuse, relational/domestic violence, and stalking. While faculty are often able to help students locate appropriate channels of assistance on campus, disclosure by the student to the faculty member requires that the faculty member inform the University's Title IX Coordinator in the Office of Diversity, Equity and Inclusion at 508-999-8008 to help ensure that the student's safety and welfare are being addressed, even if the student requests that the disclosure not be shared. You can obtain confidential counseling support and assistance at: http://www.umassd.edu/sexualviolence/
WITHDRAWAL DEADLINES
A grade of "W" does not affect a student's GPA but may impact "satisfactory academic progress" requirements for financial aid (www.umassd.edu/financialaid/maintainingaid/).
SCHEDULE

WEEK | TOPIC | ASSIGNMENTS
[Weekly schedule table not reproduced.]
Followers Forming Followers Harbor Mid-City, 2012 - 2013
Key concept: The Gospel
Bible study: Selected verses
Memory verses: Romans 3:22-24
Lesson 1: The Gospel
Objectives
Understand the meaning of the gospel
Explore the relationship between God and humanity
Bible Study
"The gospel" is an expression often used by Christians. Are you familiar with what they mean when they use it? Or, if you do understand what it means, could you sit down and share "the gospel" over lunch with someone? In order to address these questions, this lesson will explore the meaning of this central element of the Christian faith. Briefly stated, the gospel is a message of good news and hope, and it encompasses five major topics: God, Mankind and Sin, Christ, Repentance and Faith, and the Holy Spirit.
1. God
God tells us in His word—the Bible—that He is the Creator of the heavens and earth and all that is within them, including you and me, and that He made us in His own image so that we could declare the glory of His works (Genesis 1:1,27; Psalm 19:1). Thus, it is critical that we recognize that we are created beings who are absolutely dependent upon God, the Creator of all, and that He made us according to His plan that He established long ago. In His love, God created us in His image to know Him, honor Him, and above all else, to glorify and enjoy Him forever.
Is Creator
Is a Merciful Father
God is a loving Father who loves to show mercy and does not want to punish us. Can you remember when you were a child and your dad told you, "Son, this is going to hurt me more than it will hurt you." I am beginning to understand that my dad was not just saying that to make me feel better. While the spanking hurt me physically, my dad's heart ached when he had to discipline me. He wanted to be merciful and he did not want to punish me, but he also knew that I would never understand that there are consequences to my actions if he did not actually discipline me.
Harbor Mid-‐City Followers Forming Followers p. 2
In the same way, God says that He takes "no pleasure in the death of the wicked, [but he desires] that they turn from their ways and live" (Ezekiel 33:11).
Is a Just Judge
The Father is also a just Judge, and therefore, He must punish those who violate His law (Exodus 34:6-7). Imagine that your mother was one of the victims in the World Trade Center attacks. When the criminals who planned the attack, who killed your mother and over 6,000 others, appeared before the judge, justice would demand a punishment. If the judge knew that the defendant committed the crime, then he would be acting unjustly by failing to punish the defendant. To satisfy justice, a punishment must be administered. The predicament that this guilty defendant faces leads us into our next topic: humans and their sin.
2. Mankind and Sin
According to His plan, God gave us the freedom to be obedient to Him or disobedient. Eve ate from the one tree that God explicitly said was off limits and then Adam did as well. They were disobedient and sinned against God, and their sinful nature has been passed down to each one of us for thousands of years (Genesis 3:6-7). In fact, God tells us that not one of us is righteous, that "all have sinned and fallen short of the glory of God" (Romans 3:23).
Humans are sinners
What is sin?
Sin is a word that has become incredibly charged in our culture. Most people that I talk to who are skeptical about Christianity have some major hang-ups with this term. It is a term that you're going to see quite a lot in the Bible, so we need to take a few minutes and look at what it means.
Here is how I would define it: sin is building your life on something other than God. God gave us a list of commandments, the Ten Commandments, and the very first one says this: "You shall have no other gods before me" (Exodus 20:3). Sin is taking anything (including good things like work, relationships, influence, etc.) and building your life around it rather than God.
Or, let's put it another way. We often think of sin as doing bad things, when in reality it is taking good things and making them bad things by making them your ultimate thing. According to Exodus 20:3, we should only have one ultimate thing: God Himself.
Let me illustrate from the classic film, Chariots of Fire. This film is the story of two runners, Harold Abrahams and Eric Liddell. Harold Abrahams is a Jewish young man who is trying to make it in high society in Britain. To win the gold medal in the 1924 Olympics will mean that he has made it, that he has arrived into the top tier of British society, that he'll be somebody. So he runs for fame and fortune. In many ways we can sympathize with his plight, because as a Jewish young man in Britain in the 1920s, he suffered extreme prejudice. There was a glass ceiling that would only let him rise so high, unless he could win the gold medal.
Shortly before the race, Abrahams looks at his trainer and, with trepidation, says, "And now in one hour's time, I will be out there again. I will raise my eyes and look down that corridor; four feet wide, with ten lonely seconds to justify my existence. But will I?" Ten lonely seconds to justify his existence. Abrahams gets his sense of self, of who he is, from winning. He has made a good thing (winning a gold medal) into a bad thing by making it his ultimate thing.
The other runner, Eric Liddell, is a Scottish missionary who runs because God made him fast, and when he runs he feels God's pleasure. He doesn't run for fame. In fact, his life isn't about his fame, but about God's fame. He is every bit as talented and competitive as Abrahams. He loves to run and is passionate about it, but winning isn't ultimate for him; God is ultimate.
So in this movie, when the event that Liddell has trained for his whole life, the 100 meters, is scheduled for a Sunday, he pulls out. He won't run on Sundays because he is being obedient to God to set that day apart as sacred for rest and worship. Do you see how loosely he holds this event? The 100-meter dash has a gripping control on one runner, Abrahams, and absolutely no control over the other, Liddell, because one has made God his ultimate and the other has not.
This is what sin does to you. It has you in its grip because you are so focused on your god—the thing you must have, the thing you have built your life around. Therefore, man is in a terrible predicament. The Bible tells us that all of us have sinned; we have built our lives around something other than God. And, as we saw earlier, because God is just, he must punish us. In Romans 6:23, we see that God's punishment for us is death: "For the wages of sin is death, but the gift of God is eternal life in Christ Jesus our Lord."
3. Christ What He did
Jesus lived the life we should have lived, and died the death we should have died. God sent His only begotten Son, Jesus Christ, to this world. In doing so, God, through his Son, took on humanity (John 1:1,14) and lived the life that we should have lived: "For just as through the disobedience of the one man the many were made sinners, so also through the obedience of the one man the many will be made righteous" (Romans 5:19).
In fact, the Bible tells us that Jesus was tempted in every possible way, which means He faced every temptation we have ever faced. Yet, the crucial distinction is that He was without sin: "For we do not have a high priest who is unable to sympathize with our weaknesses, but we have one who has been tempted in every way, just as we are" (Hebrews 4:15). He never gave in to temptation.
Not only did Christ live the life we should have lived, but He also died the death that we should have died. As we discussed earlier, all of mankind since Adam and Eve has been found guilty before God and deserving a punishment of death, and Christ fulfilled this punishment for us. In his death, Jesus experienced far more than physical suffering and pain, but He received the full punishment that God had withheld from mankind since Adam and Eve's original sin.
Jesus gave God His perfect life record and accepted man's horrible life record. Perfection stepped into the place of imperfection, and in doing so, God magnified His justice by delivering the rightful punishment, one that had been delayed for thousands of years, to a perfect man, the God-man called Jesus Christ, the only one capable of paying the full penalty for sinful man (Mark 10:45; 1 Peter 3:18).
But this is where the story really gets good. After Christ's death on the cross, God raised Him from the dead three days later (1 Corinthians 15:3-6). This resurrection was evidence that Jesus' sacrifice was an acceptable payment for man's debt that he owed as a result of his sinful record and sinful heart (1 John 2:2), fully satisfying God's justice (Romans 3:25-26). Additionally, Jesus' resurrection represented His victory over all man's enemies, including death and the power of sin (Romans 6:5-7). And after Christ's resurrection, God exalted Him to heaven, seated Jesus at His right hand, and thereby declared Him Lord and Savior of the world (Hebrews 12:2).
Yet, the Good News goes beyond what Jesus has done. It also includes what He now promises to all those who receive Him as their Lord and Savior.
What He promises
New standing: God has promised that we may have a new standing before Him (2 Corinthians 5:21; Colossians 1:22). As Judge, God promises to release us from the penalty of sin, if we believe in Christ, and consider us His children who are forever adopted into His family (Galatians 4:4-7). He gives our bad, sin-stained record to Christ, and He gives us Christ's perfect record of righteousness. As Father, God promises to accept and love those who believe in Jesus just as He does His own Son, for He lives within us (1 John 3:1).
What I am describing has been referred to as the Great Exchange. Christ accepted all of our sin and in return gave us His righteousness. We exchanged our sinfulness for Christ's righteousness, so that when God looks at us, He sees His Son who lives within us, making us righteous individuals in Christ.
New Power: We also have new freedom and power to live as sons and daughters of God. Jesus gives us His power to live the way He did. This power comes in the form of the Holy Spirit, which we will discuss in more detail in next week's lesson. But how does this happen? How can these gospel promises be fulfilled in a person's life?
4. Repentance and Faith
What is repentance?
Repentance involves a two-dimensional directional change in a person's life. First, repentance requires that a person turn away from their sin—a horizontal change. For example, imagine that you intended to drive to Mexico. When you were in San Diego, you became confused and accidentally began driving north instead of south. When you realized this fact, the best thing for you to do would be to turn your car around and head in the opposite direction.
In much the same way, repentance requires you to turn from your sin and head in the other direction, but here is where the analogy breaks down. Repentance does not simply tell you to turn in the opposite horizontal direction; it also requires you to turn to Jesus (Mark 1:15). Thus, repentance requires a vertical directional change as well. If you do not turn to Jesus, you will be fighting a losing battle, trying to defeat the power of sin in your own strength. Jesus wants us to repent, turning from our sin and turning to Him in faith, relying upon His strength to live a life that is free from sin's dominion.
What is faith?
Everyone has faith. For instance, you might say, "I have faith that my car will make this 900-mile journey." This may be a well-reasoned faith (you have taken it to a mechanic, had it serviced, and all indicators seem positive) or it may be a "blind leap of faith" (you know nothing about cars, haven't serviced it, but just think it will make it). Nevertheless, you put your faith in your car if you choose to get in it, even though, at the end of the day, you can't prove that the car will make it until it actually does. This holds true with other things in which we put our faith as well. We have faith that our job will provide for our family, that our spouse will love us, that immunizations will help our children—but we don't know for certain until these things actually happen.
Saving faith is a decision of the will to put your trust IN Jesus
Saving faith is a decision of the will to put your trust in Jesus as your Savior and Lord. Think of John 3:16: "whoever believes in Him shall not perish but have eternal life" (also Romans 3:22). So, the amount of your faith is unimportant. Quantity is irrelevant; quality, however, is extremely relevant, and quality is determined by what your faith is in.
It may be tempting to try to trust in our works. Many people think they can build a bridge that will help them cross the chasm that separates them from God, and this bridge is called the Good Works Bridge. They try desperately to serve the poor, be kind to others, and perform a host of various good works. But the chasm separating God and man is far too wide for any Good Works Bridge to span the distance.
And, what is even worse, even if a Good Works Bridge could reach across the chasm, God's holiness would destroy us once we crossed the bridge and entered His presence. Sinful beings cannot be in the presence of a Holy God. Therefore, to solve the problem of the chasm separating God and man, God laid down His own bridge in the form of the cross, and as we walk across this bridge we are made holy through the blood of Jesus. Thus, saving faith is repenting, turning from sin and self-trust, and trusting in Jesus Christ and His promises.
Faith isn't in conflict with reason and intellect
Often it is said, "Just have faith," and what is meant is that you need to stop thinking and just believe. But Jesus didn't ask Thomas in John 20:24-29 to turn off his brain. Instead, he encouraged him to think hard. "Thomas, put your hands in my side here and feel the wound. Think, Thomas. Remember this wound that I suffered. It really is me." Jesus says believe or have faith because of the evidence that is before you. Think.
Lesson 1: The Gospel (Romans 3:22-24)
Yet faith isn't mere intellectual belief
You must understand that saving faith is not merely an intellectual belief. The Bible acknowledges that even demons believe that there is one true God, but they are not saved by this belief (James 2:19). But what is the difference between saving faith and belief?
Let me illustrate. The Great Blondin is often thought of as the greatest tightrope walker of all time. He was the first man to cross Niagara Falls on a tightrope. No one believed intellectually that it was possible because of the sheer force of the wind and mist coming off of the falls. Everyone thought it was a death mission. But, nevertheless, he made it, and he caused people to believe, intellectually, that he could do it.
But like any daredevil, he had to keep pushing it and making the stunt a little more unbelievable. During his subsequent performances, he crossed the falls on a bicycle, on stilts, and at night. He swung by one arm, turned somersaults, and stood on his head on a chair. Once he pushed a stove in a wheelbarrow and cooked an omelet. On one occasion, he crossed blindfolded in a heavy sack made of blankets.
But his greatest feat came when he asked for a volunteer to get on his back. Everyone in the crowd had an intellectual belief in Blondin, but they were not willing to trust their life to him. Except one man. Harry Colcord, his manager, volunteered. He climbed on his back and they made it across. That is the difference between mere intellectual belief and putting your faith in someone in a Biblical sense—you are entrusting your life to them. What you are doing is making a decision of the will based on the evidence.
When you repent of your sin and make a decision of the will to entrust your life to Jesus, you're then embarking on the journey of following Him. The beginning of this journey is characterized by a new birth in Christ Jesus, which we will discuss next week.
Memory Verses
This righteousness from God comes through faith in Jesus Christ to all who believe. There is no difference, for all have sinned and fall short of the glory of God, and are justified freely by his grace through the redemption that came by Christ Jesus. (Romans 3:22-24)
Individual Study and Group Discussion
Opening Questions
Based upon the reading, what did you find most helpful? Was anything confusing? What was most challenging?
Study and Discussion Questions
1. Describe an experience you have had of God as Creator, Merciful Father or Just Judge.
2. "Sin is building your life on something other than God."
a. What examples can you give of other things upon which we build our lives?
b. Do you have a story to share of someone you know who has chosen to make God the ultimate thing in his/her life?
3. In Romans 5:19, explain "the disobedience of the one man" and "the obedience of the one man."
4. How does Romans 3:25-26 explain God's justice?
5. What do 2 Corinthians 5:21 and Colossians 1:22 say about God's promise of our new standing in Him?
Jonathan Hassid (2015)
China’s Responsiveness to Internet Opinion: A Double-Edged Sword, in: Journal of Current Chinese Affairs, 44, 2, 39–68.
URN: http://nbn-resolving.org/urn/resolver.pl?urn:nbn:de:gbv:18-4-8483
ISSN: 1868-4874 (online), ISSN: 1868-1026 (print)
The online version of this article and the other articles can be found at:
<www.CurrentChineseAffairs.org>
Published by
GIGA German Institute of Global and Area Studies, Institute of Asian Studies and Hamburg University Press.
The Journal of Current Chinese Affairs is an Open Access publication.
It may be read, copied and distributed free of charge according to the conditions of the Creative Commons Attribution-No Derivative Works 3.0 License.
China’s Responsiveness to Internet Opinion: A Double-Edged Sword
Jonathan HASSID
Abstract: Despite its authoritarian bent, the Chinese government quickly and actively moves to respond to public pressure over misdeeds revealed and discussed on the internet. Netizens have reacted with dismay to news about natural and man-made disasters, official corruption, abuse of the legal system and other prominent issues. Yet in spite of the sensitivity of such topics and the persistence of China’s censorship apparatus, Beijing usually acts to quickly address these problems rather than sweeping them under the rug. This paper discusses the implications of China’s responsiveness to online opinion. While the advantages of a responsive government are clear, there are also potential dangers lurking in Beijing’s quickness to be swayed by online mass opinion. First, online opinion makers are demographically skewed toward the relative “winners” in China’s economic reforms, a process that creates short-term stability but potentially ensures that in the long run the concerns of less fortunate citizens are ignored. And, second, the increasing power of internet commentary risks warping the slow, fitful – but genuine – progress that China has made in recent years toward reforming its political and legal systems.
Manuscript received 16 October 2014; accepted 10 April 2015
Keywords: China, Chinese media, microblogging, public opinion, Chinese politics, new media, weibo
Dr. Jonathan Hassid is an assistant professor in the Department of Political Science at Iowa State University. Prior to 2015, he was a postdoctoral research fellow at the China Research Centre within the University of Technology, Sydney. He received his Ph.D. in political science from the University of California, Berkeley, in 2010 and works mainly on the politics of the Chinese news media. His publications include articles in the Journal of Communication, China Quarterly, Comparative Political Studies, Third World Quarterly, Asian Survey and elsewhere.
E-mail: <email@example.com>
Introduction
In 2014 China ranked 175th (out of 180) in international press freedom (Reporters Without Borders 2014), boasted the world’s most sophisticated internet censorship apparatus (MacKinnon 2009) and had more journalists in prison than any other country on Earth (Reporters Without Borders 2013). Yet these facts mask the surprising reality that the Chinese Communist Party (CCP) responds quickly to public opinion, especially when expressed online. When Chinese netizens uncover and publicize official abuse of power (Wines 2010), corruption (People’s Abstracts 2004) or fatal negligence (China Daily 2011a), authorities often react quickly and decisively to resolve the exposed problems. The result is a system that strongly discourages political discussion and criticism but is highly responsive to incidents that evade censorship and capture public attention. Commentators, reporters and scholars have seen this responsiveness as a hopeful sign of political change (Wang et al. 2009; Noesselt 2013) and as a way to preserve internal stability, but, as I argue below, there are hidden dangers in authorities’ consistent bending to popular outrage.
Below, this article\(^1\) is divided into three parts. After a brief background section on the Chinese media and internet public opinion, the CCP’s surprising responsiveness to online demands is demonstrated by case studies and a quantitative analysis of international press stories. Together, these data show how, when and why the Chinese party-state reacts to internet pressure. With reference to a 2013 survey of Chinese microbloggers, the paper’s third section discusses the implications of this state responsiveness, and shows how it might undermine official efforts to build a responsible and (reasonably) effective judiciary. Ultimately, the party-state’s actions might build and reinforce a new dictatorship – not of the proletariat, but of the commentariat.
---
\(^1\) This research was generously funded by a faculty Research Development Grant at the University of Technology, Sydney. It has benefitted from the help and comments of Jon Sullivan, Wanning Sun, the participants in a panel at the Association for Asian Studies annual meeting in 2015 and several anonymous reviewers. One section is adapted from previous work done with Jennifer N. Brass, but any errors are my own.
The Chinese Media, Briefly
China closely censors its domestic media. A system of interlocking government and CCP departments, coordinated by the party’s secretive Central Propaganda Department (CPD, 中宣部, Zhongxuanbu), together ensure that most commercial media companies – including newspapers, television and radio broadcasters, book publishers, filmmakers and others – hew tightly to party-state demands (Brady 2008). News outlets are especially tightly controlled; to found a newspaper requires an official party-state sponsor, registered capital of at least 300,000 CNY (48,000 USD), a detailed feasibility study, work permits, “certificates of qualification of the editorial and publishing personnel”, various application forms in quintuplicate, and a great deal more (official regulations as translated by Chang, Wan and Qu 2006). Even a successful application, once approved by the State Administration of Press, Publication, Radio, Film and Television (SAPPRFT, formerly known as the SARFT), does not end the hassle. Once in business,
the publication of periodicals shall continue to be guided by the principles of Marxism-Leninism, Mao Zedong Thought, Deng Xiaoping Theory, and the “Three Represents”, [and] adhere to the orientation and guiding role of publishing and the mass media (Regulations for the Administration of Periodical Publication 2005, Ch. 1, Article 3, translated by Chang, Wan and Qu 2006: 429).
And if a newspaper “does not reach the prescribed” – but undefined – standard, the agency “shall revoke [its] Periodical Publication Permit” (Regulations for the Administration of Periodical Publication 2005, Ch. 3, Article 47).
The day-to-day uncertainty about where the censorship axe might next fall is even more constraining than these formal procedural requirements. Unlike Glavlit, the Soviet Union’s huge censorship apparatus, China’s CPD does not pre-screen content before publication. Instead, CPD officials punish transgressive media outlets and writers after publication, often without stating the reason for punishment, and sometimes acting days, weeks or even months after the violation. The “regime of uncertainty” created by this post hoc censorship system means that journalists and editors are often unsure about the limits of the permissible, which encourages them to be quite conservative when approaching topics with even a hint of sensitivity (Hassid 2008). After all, it pays to be careful in the world’s most prolific jailer of journalists.
A similar system of *post hoc* censorship prevails online, made possible by the world’s largest and most sophisticated internet-monitoring system. This “Great Firewall”, part of which is known in China as the “Golden Shield Project”, relies on filtering keywords, blocking IP addresses and blacklisting websites, a system backed by thousands of internet police who monitor domestic sites and discussion boards (MacKinnon 2009). The end result is a system that allows most Chinese citizens little access to information that the party-state considers suspect. And as China remains (as of 2013) the country most likely to jail netizens for online political expression, the consequences for disobedience can be severe (Reporters Without Borders 2013).
Although it is important not to downplay the role that censorship plays on the Chinese internet, in general there is much more space available for discussion of potentially sensitive political and social topics than exists in the traditional media (Yang 2009). Moreover, while many parts of this censorship apparatus are run from Beijing – especially those that completely block access to unwanted domain names – this system is, for the most part, quite decentralized. Most day-to-day decisions about deleting individual posts rest with content providers and hosting services themselves, rather than being directed from on high. For bloggers, the end result is wide variation in the aggressiveness of hosting services in censoring sensitive topics, a variation that can present opportunities to canny users (MacKinnon 2009).
**China’s Surprisingly Responsive Government**
Despite this censorship, there is no shortage of sophisticated users willing to brave the potential perils of challenging censorship authorities. Journalists, public intellectuals, writers, lawyers and ordinary citizens can have a substantial impact online, shaping the discussion of even sensitive issues in surprising ways. Although the traditional media, including the enduringly robust Chinese newspaper industry, maintain a substantial hold on shaping the agenda of Chinese internet discussion, news is increasingly broken online. Even if it is not yet quite true that, as Susan Shirk writes, “because of its speed, the internet is the first place news appears [as] it sets the agenda for other media” (Shirk 2011: 2), certainly the internet is becoming more important every day in exposing problems and shaping public policy.
But note that this paper’s discussion of government “responsiveness” in China refers to state actions to quickly punish (exposed) culpable parties, as opposed to the more general willingness to respond to public opinion described by other scholars such as Shambaugh (2008) and He and Warren (2011). Certainly there are other mechanisms of official accountability in China, such as the letters and visits system (信访制度, xinfang zhidu), the mass press, and even the foreign media, but these are outside this paper’s scope. In other words, the responsiveness I discuss here refers only to Chinese official willingness to respond to scandals quickly and decisively, usually by punishing exposed wrongdoers. Other scholars have proposed similar definitions, often depending on how state elites quell public anger (e.g. Besley and Burgess 2001; Thompson 2000). It is important not to mistake responsiveness for accountability; responsiveness refers to official response to citizens, while accountability refers to routinized citizen response to official action. Even if China is quite responsive to online public pressure in particular circumstances, this responsiveness does not imply that officials are very accountable to their local constituents.
It should also be emphasized that in general the central state has set an anti-corruption agenda and written a script for netizens to follow. For years, central officials have emphasized their desire to fight corruption in the party, and Xi Jinping has made pursuing corrupt officials a centrepiece of his administration. When statements against corruption emanate from Beijing, they create space for netizens and media figures to go after the local problems that central officials have condemned (and are often unaware of). Taking the state at its word – even when different officials or different layers of the state disagree – can be a powerful force encouraging citizen activism (O’Brien 1996). In other words, although the examples in this paper demonstrate the power of public opinion to move a reluctant state, the central state itself has set the agenda in this area and provided encouragement and cover for ordinary citizens to take it at its word. We might therefore see party-state responsiveness to uncovered scandal not as the result of a wayward citizenry but instead as a result of the desire of some top officials to get public support in policy fights with their colleagues at various levels of the sprawling bureaucracy. Others have similarly argued that the party-state allows some public criticism to serve as a “fire alarm” and help top officials uncover lower-level corruption (Lorentzen 2014). Turning to citizens to overcome perceived party problems is not new in China – Mao Zedong famously urged the people to attack the party in 1966 – but it does suggest that the state responsiveness to uncovered scandals is only partially “forced” by citizen pressure. But whether encouraged by top officials or not, China’s state responsiveness to citizen pressure over uncovered scandals is still noteworthy.
One of the most famous (and oft-cited) demonstrations of the power of Chinese public opinion to sway national policy came in the wake of the 2003 Sun Zhigang incident. Sun, a college-educated worker from China’s interior who had moved to the southern city of Guangzhou, was arrested by police in March of that year for not carrying his local residence permit. Sent to a detention facility for internal migrants, within 24 hours Sun was dead, beaten to death by guards and inmates at the facility (Hand 2006). This case is particularly illustrative of the connections between media online and off. An enterprising reporter at the feisty Nanfang Dushibao (南方都市报, Southern Metropolis Daily) first discovered the death through the internet postings of Sun’s anguished family members. This discovery led the paper to boldly publish a story on Sun’s death, which in turn created uproar online, leading to other articles in the mainstream press. Within weeks, Beijing had scrapped the entire system of internal detention facilities, amounting to a huge victory for the concentrated power of public opinion. Crucially, however, the online community was mobilized behind someone seen as one of them – a white collar, college-educated professional. It is unlikely that Sun’s death would have provoked any reaction if he were a more typical migrant, a point I return to later in the paper.
Although the aftermath of the Sun Zhigang case is sometimes seen as a high-water mark for CCP responsiveness to public opinion, many subsequent cases have demonstrated that when enough netizens get sufficiently angry, authorities react quickly to assuage their demands. When Niuniu, the daughter of a prominent official in the southern city of Shenzhen, released a 2004 film called Seven-Hour Time Difference (时差七小时, Shicha Qi Xiaoshi), government connections ensured that the film was made mandatory viewing in all middle schools in the huge metropolis – forcing families to pay a fee of 20 CNY (3 USD) per student (*People’s Abstracts* 2004). While this seems like a minuscule amount of money, pirated films generally sell for less than half this amount, and Shenzhen’s population of over 10 million ensures a large number of potential student viewers and a correspondingly large profit. After an online uproar encouraged further newspaper investigation into the scandal, it emerged that much of the film’s 7.69 million CNY (1.2 million USD) in financing had mysteriously come from Li Yizhen, Niuniu’s father. Li, a public servant, most likely had an official income of only a few hundred dollars a month (Fish 2012). And once wrongdoing was exposed, the backlash became fierce. The renowned *Zhongguo Qingnianbao* (中国青年报, *China Youth Daily*) thundered:
Regardless of what happens, this papering over [of the scandal] must be exposed. If in fearing to infuriate everyone further, [the perpetrators] adopt an ostrich posture [pretend the problem does not exist], eventually they will be given even more severe punishments (quoted in *People’s Abstracts* 2004).
Eventually the film-screening plan was dropped, and the presumably corrupt Li Yizhen was removed from office (*Baidu Baike* 2012).
Public pressure, activated by an outraged traditional media, was also key in forcing an end to the 2007 brick kiln scandal that raged in central China. The Dickensian crime centred around the discovery that hundreds – perhaps as many as 1,000 – children had been kidnapped and forced to work as slaves in illegal brick kilns across Shanxi and Henan provinces. The kilns had apparently operated for years in collusion with local CCP officials, who reportedly took a share in the profits in return for providing political protection. As the research director of a Hong Kong-based labour NGO put it,
It’s inconceivable that slave labour and gross physical abuse on the scale it’s been reported could possibly have gone on without full knowledge of local officials (Ni 2007).
The issue finally received national coverage only after hundreds of distraught fathers who had already “spent all their money and risked their lives to go deep into the mountains looking for their children” posted an online petition that came to the attention of local TV reporters (Zhu 2007). The report attracted immediate attention in newspapers and on the internet; as a result, hundreds of slaves were freed, several death sentences were handed down, and 95 local CCP officials were demoted, expelled from the party or removed from office (Ni 2007). Although many of the sentences were decried as too lenient, they still represent an unusual victory of public pressure over entrenched local power holders.
A more recent example involved Li Qiming, a 22-year-old who after a night of heavy drinking struck two students with his car on the campus of Hebei University, killing one and injuring the other. When campus police attempted to apprehend Li, he reportedly shouted “Go ahead, sue me if you dare! My dad is [local deputy police chief] Li Gang!” (China Daily 2011b) The case generated such intense interest that “My dad is Li Gang!” (我爸是李刚, Wo ba shi Li Gang) quickly became a cynical online catchphrase for those looking to avoid responsibility for problems they had caused (BBC News 2011). After a “massive outcry both online and offline” (China Daily 2011b), even an attempted payoff to the victims’ families, a tearful apology on national television (Liu 2010) and the best efforts of his “well-connected” father were not enough to keep Li out of prison (BBC News 2011).
Perhaps most emblematic of the growing power of public opinion – especially on the internet – in China is the aftermath of the July 2011 Wenzhou train crash. The crash, which killed 40 people and injured hundreds, was the first involving China’s brand-new and highly vaunted high-speed rail system. Despite both a CPD internal order that reporters “do not question, do not elaborate” on the disaster (Osnos 2012) and the hasty burial of the wrecked train cars by the powerful Railway Ministry, within days netizens “posted an astounding 26 million messages on the tragedy, including some that have forced embarrassed officials” to more thoroughly investigate (Wines and LaFraniere 2011). Ultimately, public demands for accountability led to the dismissal of the railway officials and a slowdown in the break-neck pace of (often shoddy) railway construction (Osnos 2012: 52).
Admittedly, these are unusual examples. Most official malfeasance probably goes undetected, and even cases uncovered by party-state investigators rarely result in punishments for offenders (Wedeman 2004). But while punishment for official miscreants is rare, punishment for officials caught in the public eye is swift and merciless. When netizens uncover corruption or publicize a case initially reported in the traditional media, they put pressure on the Chinese party-state to quickly punish the guilty and assuage public anger. These cases reflect, I argue, the typical official response to publicly uncovered corruption. While the case selection is not random, I have aimed to pick cases from all walks of life and to choose both major and minor incidents. In my view, the end result is a selection of reasonably typical cases.
Note the limitations of this argument: I am not arguing that corruption is always uncovered, nor do I claim that uncovered corruption is always punished appropriately. But when malfeasance comes to the public eye, authorities move quickly to punish allegedly guilty parties. Although the CCP might be unable or unwilling to curb systemic corruption, it is certainly capable of responding swiftly and decisively to public pressure. Under the right circumstances, therefore, China has a highly responsive government.
**China’s Responsiveness from a Comparative Perspective**
Many China scholars have maintained a certain insularity that prevents examination of similar phenomena in other places around the globe. Recent work by scholars such as Sarah Oates on the Russian media (2013) and a special issue of the *Journal of Communication* (62, 2, 2012) on the Arab Spring should have relevance for scholars looking to place the Chinese media into an international context. A 2012 follow-up to Hallin and Mancini’s influential 2004 book *Comparing Media Systems* has expanded beyond a Western context, with a chapter by Zhao Yuezhi looking at China’s media from a comparative perspective. Such work is, however, still relatively rare.
Aiming in part to address this lacuna, below is a brief comparison of the responsiveness of the Chinese and Kenyan governments to public pressure. Note that this section is based on previous research conducted with co-author Jennifer N. Brass and published elsewhere (Hassid and Brass 2014). Although Kenya and China are quite different, the use of such distinct outliers allows us to “inductively identify variables and hypotheses that have been left out of existing theories” (Bennett 2004: 38) using the crucial-case method (Gerring 2001). In theory, Kenya, a democratic country with a free press, should be more likely to change policy in response to public demands than China, a one-party dictatorship. Because Kenya’s regular election cycles allow ordinary Kenyans to remove their leaders for non-performance, conventional wisdom dictates that Kenyan politicians should react quickly when the public demands they solve a particular issue. Chinese politicians, not facing the pressures of a ballot box, should, by contrast, be free to govern as they see fit, without reference to the wants or needs of even the angriest mass public.
To measure the responsiveness of the Chinese and Kenyan regimes to public pressure, we searched the EBSCO newspaper database for English-language articles that contained the country name (China or Kenya) plus either “scandal”, “graft” or “corruption” from 2000 to 2010. After discarding articles that were not relevant, 258 articles on China and 248 on Kenya remained. In neither case were domestic newspaper articles used, in order to preclude the influence of local censorship and media control, especially germane in the Chinese case. Although the international press is only likely to report on the largest, most prominent scandals, its coverage is still likely to better represent Chinese and Kenyan conditions than the muzzled local press.
This is particularly true because those scandals that do receive domestic press scrutiny have generally already been handled. Often the first that citizens hear of official corruption cases is an announcement by official outlets *People’s Daily* or the *Xinhua News Agency*. There are exceptions, but a reliance on the Chinese domestic press would erroneously imply that all corruption that comes to official attention is punished harshly. Though data are necessarily sketchy, one scholar has found that “provincial supervisory bureaus turned only 6 per cent of those found guilty of disciplinary infractions over to the legal system” and that “of those subject to administrative action, over half (53 per cent) received minor sanctions” (Wedeman 2004). Relying on domestic Chinese media coverage would risk biasing the data. International press coverage, by contrast, is not hampered by these restrictions and is not likely to vary substantially across international borders. Admittedly, this method is not perfect; international press coverage is dominated by a few outlets in the US and UK, and media outlets’ systematic use of wire service reports concentrates this coverage further. Ultimately, however, international press coverage seems a reasonable proxy for how scandals are handled around the world.
Next, computer content analysis (CCA) software called Yoshikoder allowed a comparison of the words in the articles with a “dictionary” of pre-defined keywords sorted into categories (see Sullivan and Lowe 2010, for an earlier example). To determine whether the government responded to scandals, categories that suggest a judicial response were chosen: words related to prison, punishment and the judiciary (see the appendix for a full list). Analysing words in these categories allows us to look at how frequently uncovered scandals result in sanctions for those involved. These results are presented in Table 1. The table also presents similar results from a comparison of baseline, non-scandal-related articles to ensure a legitimate point of comparison. The point of this comparison is to elicit from the press how many scandals and other forms of questionable behaviour are eventually acted upon in both countries.
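The dictionary-based counting at the heart of this CCA approach can be sketched in a few lines of Python. The category names below follow the paper, but the keyword sets and the sample sentence are invented placeholders, not the actual dictionary from the appendix:

```python
from collections import Counter
import re

# Toy stand-in for the pre-defined dictionary; the real keyword
# lists appear in the paper's appendix.
DICTIONARY = {
    "judiciary": {"court", "judge", "trial", "verdict"},
    "prison": {"prison", "jail", "imprisonment", "detention"},
    "punishment": {"punish", "punished", "fine", "sentenced"},
}

def category_counts(article_text):
    """Count occurrences of each category's keywords in one article."""
    words = re.findall(r"[a-z']+", article_text.lower())
    counts = Counter({category: 0 for category in DICTIONARY})
    for word in words:
        for category, keywords in DICTIONARY.items():
            if word in keywords:
                counts[category] += 1
    return counts

example = "The judge said the court would see the official punished with prison."
print(category_counts(example))
```

Per-article counts like these, averaged over each country's corpus of scandal (and baseline) articles, yield the category means reported in Table 1.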
Table 1: Content Analysis Results for Articles on Scandals in China and Kenya, with Baseline (T-Test)
| Word category | Country | Mean number of words per scandal article | Mean number of words per baseline (non-scandal) article | Mean difference (China-Kenya, scandal articles) | Mean difference (China-Kenya, baseline) |
|---------------|---------|------------------------------------------|---------------------------------------------------------|-------------------------------------------------|----------------------------------------|
| Judiciary | Kenya | 1.73 | 1.45 | -0.05 | -1.15*** |
| | China | 1.68 | 0.30 | | |
| Prison | Kenya | 0.62 | 0.60 | 1.18*** | -0.06 |
| | China | 1.80 | 0.54 | | |
| Punishment | Kenya | 0.35 | 0.21 | 0.23* | -0.1* |
| | China | 0.58 | 0.11 | | |
| Article word count | Kenya | 978.34 | 786.61 | 18.58 | -161.24*** |
| | China | 959.76 | 625.37 | | |
Source: Hassid and Brass 2014.
Notes: *p<.05, ***p<.001.
Using a simple t-test, two categories of words show statistically significant differences between press coverage of Chinese and Kenyan
scandals: words related to prison and to punishment. These results suggest that when the media reveals Kenyan scandals, those involved are less likely to go to prison or otherwise be punished than their Chinese counterparts. Indeed, articles on Chinese scandals are nearly three times as likely as those about Kenya to mention imprisonment and almost twice as likely to mention other punishments, suggesting a very real difference in outcomes between the two countries. This difference is especially pronounced compared to the baseline, non-scandal articles, which discuss the judiciary and punishment more in Kenya than China.
Although the absolute difference seems small – around one extra “punishment”-related word per four articles about Chinese compared to Kenyan scandals – this does not imperil substantive analysis of the results. In this case, nearly twice as many newspaper articles mention punishment in China than in Kenya. Content analysis, especially of newspaper articles averaging only approximately 1,000 words, often produces such seemingly small differences (Popping 2000). Indeed, because the newspaper corpus reflects dozens of articles on each individual scandal at all stages from discovery to resolution, wildly divergent results between the two countries would be unexpected. Here, the results suggest a meaningfully higher number of reports of punishment and imprisonment in revealed Chinese scandals compared to Kenyan ones. A dictatorship, in other words, can indeed be more responsive to public pressure than a democracy, under the right conditions.
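The two-sample comparison behind these results can be illustrated with a minimal Welch's t statistic computed from scratch. The per-article "prison" word counts below are invented toy numbers for demonstration, not the study's data:

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    standard_error = (var_a / len(sample_a) + var_b / len(sample_b)) ** 0.5
    return (mean_a - mean_b) / standard_error

# Hypothetical per-article "prison" keyword counts, NOT the real corpus:
china_counts = [2, 1, 3, 2, 1, 2]
kenya_counts = [0, 1, 0, 1, 1, 0]
print(round(welch_t(china_counts, kenya_counts), 2))
```

With the real corpora one would of course use a statistics package (e.g. `scipy.stats.ttest_ind` with `equal_var=False`) to obtain p-values against the appropriate t distribution as well.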
To ensure that these results were not just a reflection of an unusual country pairing, Jennifer N. Brass and I expanded this analysis to every country with a 2010 Freedom House score (N=162). Using the same methods described above, we created a unique dataset of 17,160 articles for these 162 countries. When controlling for GDP per capita, population and other variables, no combination of Freedom House’s Civil Liberties or Political Rights variables proved statistically significantly related to punishments meted out for scandals. Some might suggest that a comparison with other culturally similar countries might be more revealing, but Singapore (classed as “partly free” in 2010) and Taiwan (classed as “free”) both have similar levels of state scandal response to China. Indeed, China scores higher on responsiveness than Singapore in every category tested, and Taiwan is
only mildly more responsive than either, except in the “punishment” category – where it statistically ties China.
Another suggestion might be to compare countries that have similarly authoritarian regimes and are about as rich as China. To run this analysis, I compared China to Jordan, Egypt and Belarus. China, Egypt and Jordan all score similarly on the Freedom House rankings as “not free”, and with the exception of Egypt – which is a bit poorer – all these countries have similar per capita GDPs in the five to seven thousand USD per year range. Using an ANOVA with Bonferroni correction, none of these countries show meaningfully different levels of reported punishment for scandals, except for the comparisons between China and Egypt and between China and Jordan. Here, China scores as more willing to punish exposed wrongdoers than either Egypt or Jordan (at the p<0.1 threshold of statistical significance, with full results available from the author on request). These results demonstrate that state response to scandals does not correlate with regime type, levels of democratization or national wealth. In other words, even though it is clearly authoritarian, China shows surprising nimbleness in appeasing public anger over revealed scandals.
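The multiple-comparison logic behind the Bonferroni correction can be sketched as follows. The raw p-values here are purely illustrative placeholders, chosen only so that the China–Jordan and China–Egypt pairs clear the corrected threshold, as in the analysis above:

```python
from itertools import combinations

countries = ["China", "Jordan", "Egypt", "Belarus"]
pairs = list(combinations(countries, 2))  # 6 pairwise comparisons

alpha = 0.10                    # the p < 0.1 threshold used above
corrected = alpha / len(pairs)  # Bonferroni: divide alpha by number of tests

# Illustrative raw p-values, NOT results from the actual analysis:
raw_p = {
    ("China", "Jordan"): 0.012,
    ("China", "Egypt"): 0.014,
    ("China", "Belarus"): 0.41,
    ("Jordan", "Egypt"): 0.55,
    ("Jordan", "Belarus"): 0.68,
    ("Egypt", "Belarus"): 0.62,
}

for pair in pairs:
    verdict = "significant" if raw_p[pair] < corrected else "not significant"
    print(pair, verdict)
```

Dividing the significance threshold by the number of pairwise tests guards against the inflated false-positive rate that running six separate comparisons would otherwise produce.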
**Why the CCP Responds to Public Pressure**
Given these results and the dozens of cases of media and internet pressure forcing policy or personnel changes, the real mystery is not whether powerful Chinese officials respond to mass demands but why they do so. Much of the party-state’s responsiveness is seemingly based in a fear of the public’s response to official inaction (Distelhorst 2012). The cover story of a 2009 issue of the news magazine *Zhongguo Baodao* (中国报道, *China Report*) captures this sense of official worry, with a headline proclaiming “Netizens are three feet above our heads” and wondering whether “the internet brings forth popular will and the popular voice, or whether it brings hidden dangers”. The accompanying picture, with officials in imperial-style court dress engaged in an apparently worried discussion over a computer, reinforces the point (Wang et al. 2009).
For Chinese officialdom, it seems the “hidden dangers” are often more apparent than the benefits of bringing forth “popular will”. Liu Chang (2012), for example, cites a survey in which 88 per cent of regular netizens think that the internet is overall a “good thing, proving social progress” – at the same time as 70 per cent of public officials have “internet terror” (网络恐惧, wangluo kongju). Another Chinese study on the rise of the internet, and of China’s Twitter-like microblogging services in particular, showcases similar official worry. Writing from the viewpoint of Chinese officials, Kan Daoyuan (2010) finds that because microblogs sap the CCP’s ability to direct and control public opinion, these services act as a potent threat to “social stability”. If information is allowed to flow unchecked, Kan argues, then “rumours” will be much more likely to lead to “mass events” and other forms of social chaos (Kan 2010: 15). These and other studies suggest a culture in which Chinese officials, especially those at lower levels of government, are fearful of China’s internet public opinion, which can serve as an “alarm system” for pointing out problems to higher-ups (Lorentzen 2014). Responsiveness, then, does not happen for its own sake, but is seen by many officials as a means to preserve stability and prevent problems from getting out of hand.
Note that I am not arguing that China is particularly effective at combating corruption or very pro-active in pursuing cases of official malfeasance. Most official corruption in China surely goes unpunished, and evidence is strong that corruption is systemic even at the highest levels of the party-state (Barboza 2012). But when such cases are uncovered and appear before the public eye, authorities generally act very quickly to punish those targeted by popular pressure.
**Implications of China’s Responsive Government**
From one perspective, the CCP’s sprightly response to public opinion – especially online – is a boon to many of China’s citizens. As in any country, China faces a host of social problems that power holders are either unwilling or unable to tackle. The powerful nexus of an increasingly aggressive media (within limits) and mobilized public opinion has forced reluctant officials to confront problems ranging from official corruption to choking pollution, poisonous food, an inadequate legal system, worker exploitation and other social ills. The result, when coupled with other practices like allowing citizens to sue the state, “an increasing use of People’s Congresses to discuss policy”, along with “the acceptance of some kinds of autonomous civil society organizations” – admittedly in a regime with “no apparent
interest in regime-level democratization” – has led He Baogang and M. Warren (2011: 269) and others to see China as an emerging example of “deliberative authoritarianism” (He 2006; He and Warren 2011; Jiang 2010).
This optimistic perspective sees the CCP’s increasing engagement with citizens as both helping to solve festering social problems and increasing overall regime effectiveness. He and Warren (2011: 280) write that “deliberation may simply function more effectively to maintain order, generate information and produce legitimate decisions” than a commandist approach. For the CCP, of course, the key word is “legitimate”. “Within a context in which ideological sources are fading while development-oriented policies create winners and losers”, they write, “deliberative processes”, including internet discussion, “can generate legitimacy” – legitimacy which might help the CCP stay in power (He and Warren 2011: 282, emphasis in original). Although acknowledging that the regime is responding to public pressure for its own selfish reasons, this perspective argues that most Chinese citizens are still better off living in a country that takes public demands seriously.
For the victims of the Wenzhou train crash, for the relatives of those hurt by Li Qiming, for the family of Sun Zhigang, the CCP’s increasing responsiveness has been an unalloyed blessing. And indeed, it is hard to object to the punishment of corrupt officials, the opening of government records, and other small signs that the regime is willing to look beyond coercion as the solution to all social problems.
**Chinese Government Responsiveness: A Double-Edged Sword**
But a hidden trap might lurk in the party’s increasing willingness to bend to public pressure. First, and most importantly, the “commentariat” – those who read newspapers and internet discussion topics, stay up to date on public affairs and comment on microblogging sites – is not coterminous with China’s citizenry. For one thing, although China had an estimated 632 million netizens in July 2014, this impressive group still represents less than 47 per cent of China’s population (CNNIC 2014). The major barriers preventing the remaining 800 million people from entering the online fray are either
technical (no knowledge of computer use or a fear that they are “too old”) or financial (internet use fees or lack of a computer/web-capable mobile phone). Only 11.6 per cent of those who do not use the internet claim to be uninterested in doing so, meaning that most of the non-netizens are probably kept informed by TV, radio and newspapers (CNNIC 2012). And even the traditional media do a relatively poor job of providing coverage in Western China and other less developed parts of the country (Stockmann 2013). Although these non-internet users may well be able to keep up to date on national affairs, they have virtually no way to participate in public discussion.
And those who do participate online are hardly representative of the general Chinese population, being younger, more urban, better educated, more male, and richer than average. For example, internet users are estimated to be 55.6 per cent male (compared with 51 per cent of China’s population), with an average age of 19.9 (compared to an estimated 37.9 for China as a whole). Urban residents, comprising less than half of the national population, make up 71.8 per cent of China’s netizens. And in a country where only 8.9 per cent of the population has some university education (including those who do not finish), the fact that 10.7 per cent of netizens have completed at least an undergraduate degree is telling. Meanwhile, more than 70 per cent of netizens earn at least the 2009 national average wage of about 1,400 CNY/month (224 USD), even though nearly 30 per cent of them are current students who likely have very low incomes (Netizen data from CNNIC 2012, 2014. Data on China’s average age estimated from National Bureau of Statistics of China 2010. Sex composition and educational attainment data from National Bureau of Statistics of China 2012).
Microblogging, called *weibo* (微博) in Chinese, seems to be a particularly influential medium for influencing government action. The Wenzhou train crash, discussed above, was first broken on *weibo*, as were dozens of other influential cases of citizen-led activism in recent years (Michelle and Uking 2011). But *weibo* users are more demographically skewed toward the social and economic elites than even other netizens, according to a stratified random survey (N=705) of *Sina Weibo* users conducted in August 2013. *Sina Weibo* is the largest of China’s *weibo* services and serves as a stand-in for all microblogging in China. The survey was administered by a commercial survey firm, oversampling active users – those who post at least seven times a
week. Potential participants were randomly selected and contacted directly through *Sina Weibo* itself, with an overall response rate of 11 per cent (7.7 per cent complete and valid). This is low by the standards of traditional offline surveys but broadly in line with online research in other countries (Kaplowitz, Hadlock, and Levine 2004).
The elite bias of Chinese microbloggers is especially apparent in their geographic concentration, with fully 47 per cent of all *Sina Weibo* users concentrated in Beijing, Shanghai and Guangdong Province – the three richest areas of China. Inland areas are hardly represented at all, and this situation has changed little since 2011, when earlier research found even greater geographic concentration (Hassid 2011). Figure 1 provides a visual representation of where *weibo* users live in China; note especially how Western China is almost entirely bereft of microbloggers despite climbing internet penetration rates in the region. Surveyed *weibo* users are also far richer and more professionally oriented than ordinary Chinese citizens, with an average monthly income of 6,050 CNY compared to 3,000 CNY for ordinary netizens and a mere 1,400 CNY for the average Chinese citizen. The high income of surveyed *weibo* users is hard to overstate; less than 10 per cent of the sample had incomes below 2,500 CNY/month, an amount already more than 175 per cent of the national average. *Weibo* users are also far more professionally oriented than even China’s (already elite) netizens, with more than 50 per cent in “professional” jobs, compared to 20 per cent of netizens and a far smaller percentage of ordinary citizens (CNNIC 2012, 2014; National Bureau of Statistics of China 2010, 2012).
The fact that China’s netizens represent a relatively elite slice of the national population is not itself troubling, but it does suggest that internet users and commentators have been relative economic and social “winners”. As such, the issues the commentariat brings to government attention are likely to be biased against those who need the most help. In a 2004 example, workers in one Chongqing factory decided that going on strike was the only way to prevent the sale of their employer to a lowball bidder. These savvy factory hands, knowing that mobilizing the media and public opinion was perhaps their only route to success, organized a journalists’ seminar the day before the planned strike. Despite the seminar and preparation of a written press release, however, “there was no mainstream media response and little internet mobilization on behalf of workers”. Media scholar
Zhao Yuezhi asserts that the lack of public response was due to bias among journalists, who create the same “superficial, manipulated and one-sided research and analysis that have contributed to a policy-formation process detrimental to the interests of workers” (Zhao 2008: 311). Moreover, Zhao argues, the news media “are the main channels of propaganda for government officials and factory managers, and they play a major role in amplifying neoliberal reform ideas” (Zhao 2008: 311). Internet commentators are often just as biased against poorer workers.
Figure 1: Geographic Location of Surveyed Sina Weibo Users (N=705)
Source: Author survey (August 2013).
If Sun Zhigang, the graphic designer beaten to death in police custody in 2003, were a more typical migrant worker, it is unlikely that his case would have garnered any attention at all from the internet or mainstream media. By official figures, over one million Chinese citizens, mostly poor migrant workers, were detained each year in the early 2000s, with “abysmal living conditions, beatings, sexual abuse and deaths” being commonplace (Hand 2006: 120–121). Yet none of the earlier deaths attracted the same kind of media and internet attention as Sun’s, and his death was seen as potentially threatening to the very sort of people who were likely to be online (especially in 2003). Despite the fact that *Nanfang Dushibao* asked, “In the state apparatus of a great country, who is not a nobody? […] Who is not an ordinary citizen?” (Hand 2006: 122), if Sun were a “nobody” rather than a white-collar university graduate, his death would most likely have passed unnoticed.
A further worry is that CCP responsiveness to public pressure will undermine recent attempts to build a more powerful and independent Chinese judiciary – albeit one within circumscribed limits. Since the reform era began, the CCP has made fitful progress in improving the quality of the Chinese legal system. A major push began in the aftermath of the Fifteenth Party Congress, when in 1999 the Supreme People’s Court (SPC) issued a blueprint for legal reform, calling for a “fair, open, highly effective, honest and well-functioning” judiciary (Gechlick 2005: 98), apparently for the first time since the 1949 founding of the People’s Republic (Zhang 2003: 71). Among other reforms, the SPC has required since 2002 that “new judges [be] required to be university graduates and to pass the difficult national bar exam (which has a pass rate of about ten per cent)”, a reform which has resulted in increasing the number of university-qualified judges from 12 per cent in 1995 to more than 50 per cent just ten years later (Liebman and Wu 2007: 267).
This is not to say that China has created a Western-style independent judiciary. Major political cases are still decided in consultation with CCP functionaries, and their pre-ordained verdicts are rarely in doubt. The current president of the SPC has reaffirmed party supremacy over the court system, noting that judicial power “is a significant way for the party and the people under its leadership to administer state and social affairs” (Hou and Keith 2012: 63, quoting SPC President Wang Shengjun). Nonetheless, for most day-to-day cases, judges have increasing latitude in adjudicating according to the merits of a case. Judges are even able to rule on potentially sensitive environmental cases with a degree of judicial professionalism, though this autonomy often depends on the local situation (Stern 2010). As
Zhu Suli, the former head of the Peking University Law School, puts it,
the party’s influence is “ubiquitous at every level and in every aspect of contemporary Chinese society”, but [...] its influence on the judiciary is “general and diffuse” (Hou and Keith 2012: 63, quoting Zhu Suli).
The result is a system that more Chinese and even foreign companies see as increasingly non-partisan and fair, especially in regard to commercial cases (Peerenboom 2010). Recent years have seen a partial “turn against the law”, where judges have been encouraged to mediate rather than litigate, and in the context of which a “suspicion of lawyers has risen” (Minzner 2011: 936). Eva Pils has similarly seen an “increasing number of repressive strikes against human rights lawyers, petitioners” and others as the CCP partially backtracks from its earlier legal reforms (Pils 2009: 141). The CCP’s reduced emphasis on law in recent years, however, still allows far more judicial autonomy and professionalism than in the early years of the reform era, and as Minzner notes, “There is still some (albeit reduced) room for progressive institutional reform in China under the ‘rule of law’ rubric” (Minzner 2011: 937).
The party-state’s susceptibility to public pressure, however, can sometimes undermine progress toward a more professional judiciary. While the power of the internet can promote justice – as in the case of She Xianglin, freed by internet pressure after being wrongly convicted of murdering his wife (Liebman and Wu 2007: 275) – it can also easily distort China’s fragile judicial gains. Writing about the Maoist era, Sumei Hou and Ronald Keith write that “undue subscription to due process was easily conceived as throwing water on the masses who demanded justice”, but a similar dynamic persists today when public demands for accountability become overwhelming (Hou and Keith 2012: 67).
One of the most ominous cases involves the trial (and retrial) of admitted Shenyang mob boss Liu Yong. In 2003, Liu was convicted by the Liaoning court system of “a range of crimes, including organizing a criminal syndicate, bribery and illegal possession of firearms” and sentenced to death. After two appeals, however, the Liaoning High People’s Court vacated the execution order and sentenced him to lifetime imprisonment. “One reason for the reduction”, legal scholars Benjamin Liebman and Tim Wu write, “was the fact that
Liu’s confession had been obtained through torture” (Liebman and Wu 2007: 283). After *Bund Pictorial*, a Shanghai news magazine, questioned the commutation of Liu’s sentence, “Web discussion forums filled with angry commentary, denouncing Liu’s ‘lenient’ treatment” (Liebman and Wu 2007: 283). Goaded by public pressure, the SPC quickly invoked a never-before-used rule and sentenced Liu to death (*People’s Daily* 2003). The sentence was carried out the same day (Liebman and Wu 2007: 283).
And the Liu Yong case is not unique. During the 2002 trial of Zhang Jinzhou, an official at a state-owned construction company on trial for economic crimes, the media repeatedly called Zhang a “criminal” before his conviction, and “at least one newspaper ran a headline stating that ‘execution will be too light a punishment’” (Liebman 2005: 72). A similar story in 1997 ended with the court’s conclusion that if the defendant were not killed, “it would not be enough to assuage popular rage” (Liebman 2005: 71). Needless to say, the defendants in both cases were quickly executed.
This and other cases demonstrate the potential danger of CCP responsiveness. As Susan Shirk writes,
The elite’s extreme nervousness about potential protests makes them highly responsive when the media report on a problem […]. Once the media publicize an issue and the issue becomes common knowledge, then the government does not dare ignore it (Shirk 2011: 17).
If a case becomes enough of a *cause célèbre*, party authorities are apparently willing to ignore established rules and procedures and instead turn to rough and ready judgement to appease popular anger.
**Conclusion**
Despite its authoritarian bent, the Chinese party-state is surprisingly responsive to public demands when the clamour for change becomes loud enough – especially when the internet is involved. A typical pattern involves a newspaper reporter finding out about a potential scandal on the internet, either by chance or because netizens increasingly funnel story tips to journalists online. After publication in a newspaper, the story attracts much greater attention online, prompting further stories in the mainstream press and even more internet commentary. Eventually the pressure reaches a tipping point, forcing
Chinese officials to act to avoid social instability. In the short to medium term, such responsiveness keeps social tensions from building too high, as on most issues the CCP reacts decisively to assuage public anger before the people can take to the streets (Hassid 2012).
This responsiveness can have salutary effects, improving the quality of governance, preserving stability and helping central authorities learn about local problems that would otherwise be hidden from Beijing’s view. And Beijing seems serious about uncovering local problems. In May 2008, for example, the party-state initiated regulations (the “Open Government Regulations” or 政府信息公开条例, Zhengfu Xinxi Gongkai Tiaoli) forcing local authorities to release more government information in an effort to improve transparency across the country. Although few local governments had met even the basic legal requirements years later (Lorentzen, Landry, and Yasuda 2010; Distelhorst 2014), the effort demonstrated that there is at least some support in the CCP for increasing the flow of information and, presumably, bettering the quality of governance. After all, if Beijing can learn about problems early, scandals – and subsequent public pressure on the CCP – can be prevented.
But this responsiveness also presents hidden dangers. First, the online commentariat is not synonymous with China’s population as a whole. Having a distinct bias toward urban, rich, well-educated males, the online community may well advocate for issues that help them, the relative “winners”, at the expense of other segments of society. This bias is especially prevalent among China’s microbloggers, who represent an online “super-elite” with an overwhelming professional orientation and more than four times the monthly income of the average Chinese citizen. Such opinion makers are generally far more interested in their own concerns than the plight of the rural (and urban) underclass. For example, although the death of Sun Zhigang was tragic, countless other migrant worker deaths in custody before his had failed to garner much public attention. It is the fact that he was a member of the university-educated elite, rather than his death in particular, that helped spawn the massive public outcry. Given netizens’ bias toward those already relatively well off, CCP responsiveness to public opinion may exacerbate, rather than reduce, China’s growing social inequality and promote short-term, urban-oriented solutions at the cost of long-term stability. If the attention of senior officials to local problems is limited, any increased attention to the
problems of elite urban netizens might come at the expense of rural residents – residents who already protest more than 100,000 times a year (China Labor Bulletin 2009).
And second, there is a danger that the CCP may undermine its own nascent efforts to build an effective, competent judiciary. Legitimacy is, in part, derived from procedural fairness, and if officials are seen to bow to mob justice, people’s trust in the system may suffer in the long run (Tyler and Fagan 2008; Sullivan 2013). Although China has made some progress toward establishing a competent, neutral judiciary, these gains are still fragile. For a system already suffering from what Thomas Friedman calls a “huge trust deficit”, the end result might be dire indeed (Friedman 2012).
Although this paper has sketched out the CCP’s surprising responsiveness to public pressure and examined some of the positive and negative ramifications of this trend, future research is needed in a number of areas. For one, it is still unknown why authorities decide to allow discussion on some sensitive topics while ruthlessly censoring others. Direct criticism of high-level leaders is clearly not allowed, and recent research has indicated that the CCP is most vigilant about controlling potential organizational threats (King, Pan, and Roberts 2013). Beyond these broad parameters, however, the mechanisms of general state response to potentially sensitive topics are quite murky. On a related note, it is unclear why some (potential) scandals capture public attention while others disappear without a trace. Perhaps there is some common element to those scandals that capture public attention? And finally, future research should examine how the changing demographics of China’s internet users might affect the dynamics outlined above. As the average netizen becomes more similar to the average Chinese citizen, it is possible that in time the system will become more responsive to all, rather than just a lucky few. Until that happens, party-state responsiveness to an unaccountable online elite might slowly increase China’s potential for instability, especially if attention to the concerns of rich, coastal internet users redirects official attention from the increasingly troubled plight of rural residents.
References
Baidu Baike (2012), 李意珍 (Li Yizhen), online: <http://baike.baidu.com/view/765197.htm?fromTaglist> (16 October 2012).
Barboza, David (2012), Billions in Hidden Riches for Family of Chinese Leader, in: *The New York Times*, 25 October, online: <www.nytimes.com/2012/10/26/business/global/family-of-wen-jiabao-holds-a-hidden-fortune-in-china.html> (15 May 2015).
*BBC News* (2011), China Hit-and-Run Driver Sentenced to Six Years in Jail, 30 January, online: <www.bbc.co.uk/news/world-asia-pacific-12317756> (15 May 2015).
Bennett, Andrew (2004), Case Study Methods: Design, Use, and Comparative Advantages, in: Detlef F. Sprinz and Yael Wolinsky-Nahmias (eds), *Models, Numbers, and Cases: Methods for Studying International Relations*, Ann Arbor, MI: University of Michigan Press, 27–64.
Besley, Tim, and Robin Burgess (2001), Political Agency, Government Responsiveness and the Role of the Media, in: *European Economic Review*, 45, 4–6, 629–640.
Brady, Anne-Marie (2008), *Marketing Dictatorship: Propaganda and Thought Work in Contemporary China*, Lanham: Rowman & Littlefield.
Chang, Jessie T. H., Isabelle I. H. Wan, and Philip Qu (eds) (2006), *China’s Media and Entertainment Law* (Vol. 2), Hong Kong: TransAsia.
*China Daily* (2011a), Hasty Burial of Wreckage Sparks Suspicion, 27 July, online: <www.chinadaily.com.cn/cndy/2011-07/26/content_12980184.htm> (15 May 2015).
*China Daily* (2011b), Suspect in Deadly Campus Crash Set to Face Court, 25 January, online: <www.chinadaily.com.cn/china/2011-01/25/content_11909345.htm> (15 May 2015).
China Internet Network Information Center (2014), 第 34 次中国互联网络发展状况统计报告 (*Di 34 ci Zhongguo Hulianwangluo Fazhan Zhuangkuang Tongji Baodao, The 34th Statistical Survey Report on the Internet Development in China*), Beijing: CNNIC, online: <www.Cnnic.cn/hlwfzyj/hlxwzbg/hlwtjbg/201407/P020140721507223212132.pdf> (15 May 2015).
China Internet Network Information Center (2012), 中国互联网络发展状况统计报告 (*Zhongguo Hulianwangluo Fazhan Zhuangkuang Tongji Baodao, Statistical Survey Report on the Internet Development in China*), Beijing: CNNIC.
*China Labor Bulletin* (2009), Going It Alone: The Workers’ Movement in China 2007–2008 Research Reports, 1–57.
CNNIC see China Internet Network Information Center
Distelhorst, Gregory (2014), *The Power of Empty Promises: Quasi-democratic Institutions and Activism in China*, online: <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2491744> (15 May 2015).
Distelhorst, Gregory (2012), *Publicity-Driven Official Accountability in China: Qualitative and Experimental Evidence*, online: <http://web.mit.edu/polisci/people/gradstudents/papers/Distelhorst_PDA_0927.pdf> (15 May 2015).
Fish, Isaac Stone (2012), Trickle-Down Economics for Chinese Officials, in: *Foreign Policy*, 5 January, online: <http://blog.foreignpolicy.com/posts/2012/01/05/china_should_beef_up_officials_salaries> (15 May 2015).
Friedman, Thomas L. (2012), In China We (Don’t) Trust, in: *The New York Times*, 11 September, online: <www.nytimes.com/2012/09/12/opinion/friedman-in-china-we-dont-trust.html> (15 May 2015).
Gechlick, Mei Ying (2005), Judicial Reform in China: Lessons from Shanghai, in: *Columbia Journal of Asian Law*, 19, 1, 97–137.
Gerring, John (2001), *Social Science Methodology: A Criterial Framework*, Cambridge: Cambridge University Press.
Hallin, Daniel C., and Paolo Mancini (eds) (2012), *Comparing Media Systems Beyond the Western World*, Cambridge, New York: Cambridge University Press.
Hallin, Daniel C., and Paolo Mancini (2004), *Comparing Media Systems: Three Models of Media and Politics*, Cambridge, New York: Cambridge University Press.
Hand, Keith J. (2006), Using Law for a Righteous Purpose: The Sun Zhigang Incident and Evolving Forms of Citizen Action in the People’s Republic of China, in: *Columbia Journal of Transnational Law*, 45, 1, 114–195.
Hassid, Jonathan (2012), Safety Valve or Pressure Cooker? Blogs in Chinese Political Life, in: *Journal of Communication*, 62, 2, 212–230, doi: 10.1111/j.1460-2466.2012.01634.x.
Hassid, Jonathan (2011), *On (the) Line: How Internet Public Opinion Shapes China’s Media and Public Policy*, unpublished work.
Hassid, Jonathan (2008), Controlling the Chinese Media: An Uncertain Business, in: *Asian Survey*, 48, 3, 414–430, doi: 10.1525/as.2008.48.3.414.
Hassid, Jonathan, and Jennifer N. Brass (2014), Scandals, Media and Good Governance in China and Kenya, in: *Journal of Asian and African Studies*, (online before print).
He, Baogang (2006), Participatory and Deliberative Institutions in China, in: E. J. Leib and Baogang He (eds), *The Search for Deliberative Democracy in China*, New York: Palgrave Macmillan, Ch. 9.
He, Baogang, and Mark E. Warren (2011), Authoritarian Deliberation: The Deliberative Turn in Chinese Political Development, in: *Perspectives on Politics*, 9, 2, 269–289.
Hou, Shumei, and Ronald C. Keith (2012), A New Prospect for Transparent Court Judgement in China?, in: *China Information*, 26, 1, 61–86.
Jiang, Min (2010), Authoritarian Deliberation on Chinese Internet, in: *Electronic Journal of Communication*, 20, 3–4, online: <www.cios.org/EJCPUBLIC/020/2/020344.html> (15 May 2015).
Kan, Daoyuan (阚道远) (2010), 微博兴起视野下的思想政治工作 (Weibo Xingqi Shiyexia de Sixiang Zhengzhi Gongzuo, The Rise of Weibo in the Eyes of Ideological and Political Work), in: 思想政治工作研究 (*Sixiang Zhengzhi Gongzuo Yanjiu, Studies in Ideological and Political Work*), 4, 14–16.
Kaplowitz, Michael D., Timothy D. Hadlock, and Ralph Levine (2004), A Comparison of Web and Mail Survey Response Rates, in: *Public Opinion Quarterly*, 68, 1, 94–101.
King, Gary, Jennifer Pan, and Margaret Roberts (2013), How Censorship in China Allows Government Criticism but Silences Collective Expression, in: *American Political Science Review*, 107, 2, 326–343.
Liebman, Benjamin L. (2005), Watchdog or Demagogue? The Media in the Chinese Legal System, in: *Columbia Law Review*, 105, 1, 1–157.
Liebman, Benjamin L., and Tim Wu (2007), China’s Network Justice, in: *Chicago Journal of International Law*, 8, 1, 257–322.
Liu, Chang (刘畅) (2012), 微博问政、治理转型与“零碎社会工程” (Weibo Wenzheng, Zhili Zhuanxing Yu “Lingsui Shehui Gongcheng”, Participation in State Affairs by Microblog, Social Transformation through Repairs and Piecemeal Construction), in: 南京社会科学 (*Nanjing Shehui Kexue, Social Sciences in Nanjing*), 4, 110–116.
Liu, Chang (刘畅) (2010), Father and Son Apology for Hit-and-Run Seen as “Show”, in: *Global Times*, 25 October, online: <http://china.globaltimes.cn/society/2010-10/585212.html> (10 January 2012).
Lorentzen, Peter L. (2014), China’s Strategic Censorship, in: *American Journal of Political Science*, 58, 2, 402–414.
Lorentzen, Peter L., Pierre F. Landry, and John K. Yasuda (2010), *Transparent Authoritarianism?: An Analysis of Political and Economic Barriers to Greater Government Transparency in China*, paper presented at the APSA Annual Conference, Washington, DC.
MacKinnon, Rebecca (2009), China’s Censorship 2.0: How Companies Censor Bloggers, in: *First Monday [Online]*, 15, 2.
Michelle and Uking (2011), Special: Micro Blog’s Macro Impact, in: *China Daily*, 2 March, online: <www.chinadaily.com.cn/china/2011-03/02/content_12099500.htm> (29 September 2014).
Minzner, Carl F. (2011), China’s Turn Against the Law, in: *American Journal of Comparative Law*, 59, 935–984.
National Bureau of Statistics of China (国家统计局, Guojia Tongjiju) (2012), *Communiqué of the National Bureau of Statistics of the People’s Republic of China on Major Figures of the 2010 Population Census (No. 1)*, Beijing: NBS.
National Bureau of Statistics of China (国家统计局, Guojia Tongjiju) (2010), *中国统计年鉴 (Zhongguo Tongji Nianjian, China Statistical Yearbook)*, Beijing: 中国统计出版社 (Zhongguo tongji chubanshe, China Statistics Press).
Ni, Ching-Ching (2007), China Slavery Verdicts Anger Victims’ Families, in: *Los Angeles Times*, 19 July, online: <http://articles.latimes.com/2007/jul/19/world/fg-china19> (15 May 2015).
Noesselt, Nele (2013), Microblogs and the Adaptation of the Chinese Party-State’s Governance Strategy, in: *Governance: An International Journal of Policy and Administration*, 27, 3, 449–468.
Oates, Sarah (2013), *Revolution Stalled: The Political Limits of the Internet in the Post-Soviet Sphere*, Oxford: Oxford University Press.
O’Brien, Kevin J. (1996), Rightful Resistance, in: *World Politics*, 49, 1, 31–55.
Osnos, Evan (2012), Boss Rail, in: *The New Yorker*, 22 October, 44–53.
Peerenboom, Randall (2010), Judicial Independence in China, in: Randall Peerenboom (ed.), *Judicial Independence in China: Lessons for Global Rule of Law Promotion*, Cambridge: Cambridge University Press, 1–22.
People’s Abstracts (人民文摘, Renmin Wenzhai) (2004), 透视电影《时差七小时》事件 (Toushi Dianying “Shicha Qi Xiaoshi” Shijian, Gaining Perspective on the “Seven Hour Time Zone” Movie Incident), 12.
*People’s Daily* (2003), Shenyang Gang Leader Liu Yong Gets Death Penalty, 22 December, online: <http://english.peopledaily.com.cn/200312/22/eng20031222_130952.shtml> (15 May 2015).
Pils, Eva (2009), The Dislocation of the Chinese Human Rights Movement, in: Stacy Mosher and Patrick Poon (eds), *A Sword and a Shield: China’s Human Rights Lawyers*, Hong Kong: China Human Rights Lawyers Concern Group, 141–159.
Popping, Roel (2000), *Computer-Assisted Text Analysis*, London and Thousand Oaks, CA: Sage Publications.
Reporters Without Borders (2014), Worldwide Press Freedom Index, online: <http://rsf.org/index2014/en-index2014.php> (15 May 2015).
Reporters Without Borders (2013), China, online: <http://en.rsf.org/report-china,57.html> (15 May 2015).
Shambaugh, David (2008), *China’s Communist Party: Atrophy and Adaptation*, Washington, DC: Woodrow Wilson Center Press.
Shirk, Susan L. (2011), Changing Media, Changing China, in: Susan L. Shirk (ed.), *Changing Media, Changing China*, New York: Oxford University Press, 1–37.
Stern, Rachel E. (2010), On the Frontlines: Making Decisions in Chinese Civil Environmental Lawsuits, in: *Law and Policy*, 32, 1, 79–102.
Stockmann, Daniela (2013), *Media Commercialization and Authoritarian Rule in China*, New York: Cambridge University Press.
Sullivan, Jonathan (2013), China’s Weibo: Is Faster Different?, in: *New Media & Society*, 16, 1, 24–37.
Sullivan, Jonathan, and Will Lowe (2010), Chen Shui-Bian: On Independence, in: *China Quarterly*, 203, 619–638.
Thompson, John B. (2000), *Political Scandal: Power and Visibility in the Media Age*, Cambridge: Polity Press.
Tyler, Tom R., and Jeffrey Fagan (2008), Legitimacy and Cooperation: Why Do People Help Police Fight Crime in Their Communities?, in: *Ohio State Journal of Criminal Law*, 6, 231–275.
Wang, Xinling (王新玲), He Jing (何晶), Dong Yan (董彦), and He Liu (何流) (2009), 头上三尺有网民 (Toushang Sanchi You Wangmin, The Netizens are Three Feet over Our Heads), in: 中国报道 (*Zhongguo Baodao, China Report*), 62, 4, 42–57.
Wedeman, Andrew (2004), The Intensification of Corruption in China, in: *China Quarterly*, 180, 895–921.
Wines, Michael (2010), China’s Censors Misfire in Abuse-of-Power Case, in: *The New York Times*, 17 November, online: <www.nytimes.com/2010/11/18/world/asia/18li.html> (15 May 2015).
Wines, Michael, and Sharon LaFraniere (2011), In Baring Facts of Train Crash, Blogs Erode China Censorship, in: *The New York Times*, 28 July, online: <www.nytimes.com/2011/07/29/world/asia/29china.html?_r=1&scp=1&sq=weibo&st=cse> (15 May 2015).
Yang, Guobin (2009), *The Power of the Internet in China: Citizen Activism Online*, New York: Columbia University Press.
Zhang, Qianfan (2003), The People’s Court in Transition: The Prospects of the Chinese Judicial Reform, in: *Journal of Contemporary China*, 12, 34, 69–101.
Zhao, Yuezhi (2008), *Communication in China: Political Economy, Power, and Conflict*, Lanham, MD: Rowman & Littlefield.
Zhu, Zhe (2007), More Than 460 Rescued from Brick Kiln Slavery, in: *China Daily*, 15 June, online: <www.chinadaily.com.cn/china/2007-06/15/content_894802.htm> (15 May 2015).
Appendix
The prison category counted the following words and stems:
“hard labour”, “imprison*”, “incarcerate*”, “jail*”, “prison*”, “reeducat*” and “sentenced”. Punishment measured: “demoted”, “execute”, “executed”, “fined”, “fired”, “punish*” and “stripped”. Judiciary measured: “appeal”, “appellate”, “attorney(s)”, “barrister(s)”, “charged”, “court(s)”, “defendant(s)”, “indict*”, “judge(s)”, “lawyer(s)”, “magistrate(s)”, “plaintiff(s)”, “procurator*”, “prosecut*”, “solicitor(s)”, “trial(s)”, “tribunal(s)”.
The asterisk indicates a “wild-card” search that matches any word beginning with the given stem. For example, a search for “jail*” would return results that included the terms “jail”, “jailing”, “jailer”, “jailed” and “jails”, while a search for “procurator*” would include “procurator”, “procurators” and “procuratorate”.
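The wild-card matching described above can be sketched with a short regular-expression helper (the function name and the sample sentence are illustrative, not from the study itself):

```python
import re

def stem_pattern(term: str) -> re.Pattern:
    """Turn a search term into a regex; a trailing '*' matches any word
    beginning with the given stem, as in the appendix's wild-card searches."""
    if term.endswith("*"):
        return re.compile(r"\b" + re.escape(term[:-1]) + r"\w*", re.IGNORECASE)
    return re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)

text = "The court jailed the jailer; the procuratorate appealed."
print(stem_pattern("jail*").findall(text))        # ['jailed', 'jailer']
print(stem_pattern("procurator*").findall(text))  # ['procuratorate']
```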
Contents
Stability Maintenance and Chinese Media
Introduction
- Jonathan HASSID and Wanning SUN
Stability Maintenance and Chinese Media: Beyond Political Communication?
Research Articles
- Wanning SUN
From Poisonous Weeds to Endangered Species: *Shenghuo* TV, Media Ecology and Stability Maintenance
- Jonathan HASSID
China’s Responsiveness to Internet Opinion: A Double-Edged Sword
- Ashley ESAREY
Winning Hearts and Minds? Cadres as Microbloggers in China
Analysis
- HAN Rongbin
Manufacturing Consent in Cyberspace: China’s “Fifty-Cent Army”
Research Article
- Orhan H. YAZAR
Regulation with Chinese Characteristics: Deciphering Banking Regulation in China
Contributors
Abstract:
One of the most complex issues in the construction industry is the selection of an appropriate contractor. This summary thesis investigates the concepts of contractor pre-qualification (PQ) requirements. It has three major parts. The first part deals with the nature of PQ and the necessity and benefits of conducting PQ prior to bidding. PQ methodology and how to apply rating strategies are presented in the second part. Finally, mathematical modeling techniques with sample calculations are presented in the third part. The main purpose of this mathematical/statistical analysis is to eliminate or minimize subjectivity in the selection of qualified contractors.
INTRODUCTION
A successful construction program can occur only if it is performed by a combination of capable and knowledgeable people. The goal in construction, from the owner’s point of view, is to obtain appropriate facilities representing an effective and efficient expenditure of his money. A qualified contractor will minimize problems and complete the project according to the owner’s expectations. However, if the contractor is not qualified by experience, skill, integrity, and responsibility, or does not have the financial means to deliver a completed project, the result will be disappointing.
Contractor pre-qualification means screening construction contractors according to a predetermined set of criteria in order to determine their competence or ability to participate in the project bid.
PROBLEM STATEMENT:
In the case of public projects, the contract is normally awarded to the lowest responsible bidder in a competitive bidding delivery system. However, a major problem may arise in the public sector during the competitive bidding phase: determining the responsibility of the contractor and his ability to perform the owner’s project. Relying solely on the lowest price is therefore not a warranted approach. Besides that, the public owner often bases his decision on subjective judgment, which does not follow a sequential, structured approach to determine a short list of qualified contractors.
Responsible bidder refers to more than the capacity, skill, reliability, and integrity of the bidder. The awarding authorities should verify that the bidder:
1. Has adequate financial resources, experience, personnel resources, and equipment to perform the task.
2. Has the ability to comply with the required performance and time schedule.
The responsible contractor may be required to vouch for the responsibility of his subcontractors as well as his material suppliers. It should be realized that using a pre-qualification questionnaire alone does not amount to a pre-qualification strategy, because it is only a means of gathering the information needed for evaluation. Ensuring that the contractor’s characteristics and capabilities match the requirements of the project under consideration is the significant step.
PRE-QUALIFICATION BENEFITS TO PUBLIC PROJECTS:
The contractor benefits from the assurance that he will be on a reasonably even basis with his competition. Moreover, both the owner and the A/E benefit by eliminating the problems of selecting an unqualified contractor. Other advantages are:
- Assuring that the low prime bidder and his major subcontractors will be competent to handle the task without becoming overburdened.
- Eliminating the contractors who have limited financial resources or experience.
- Controlling the number of bidders, so that only the qualified remain.
- Protecting contractors from being awarded a project that they are incapable of performing.
- Speeding the process of evaluating bids and awarding the contract.
- Shifting the process from subjectivity to objectivity by bringing structure to the pre-qualification process.
Chapter 2
DESCRIPTION OF THE CONTRACTOR PRE-QUALIFICATION PROCESS
TENDERERS PRE-QUALIFICATION:
This procedure consists of three main stages: pre-qualification of tenderers, obtaining tenders, and opening and evaluating them. The pre-qualification stage covers the steps from preparation of the enquiry documents through the invitation to contractors to pre-qualify (see Figure 1).
| Employer / Engineer | Contractor |
|---------------------|------------|
| Place PQ advertisement in press, etc., as appropriate, stating: employer & engineer; outline of project (scope, location, etc.); enquiry issue & tender submission dates; instructions for applying for PQ; submission date for contractors; PQ data required. | Invited to pre-qualify; requests the PQ documents. |
| Issue PQ instructions & questionnaire requesting from each company / joint venture: organization and structure; experience in the same type of work; resources (managerial, technical, labour, etc.); financial statement. | Completes and submits the PQ documents. |
| Acknowledge receipt. Analyze PQ data: company / joint-venture structure; experience & resources; financial & general stability. | Awaits analysis of the PQ data and notification of the list of selected tenderers. |
| Select companies / joint ventures for inclusion in the list of tenderers and notify all applicants of the selected list. | Acknowledges receipt and confirms intention to submit a valid tender. |

Figure (1): PQ of Tenderers
The first step is the invitation to contractors through an advertisement telling them where they can obtain the pre-qualification questionnaire. A typical questionnaire includes the following sections:
- Introduction: a brief description of the project.
- Organization: classification and company’s organizational chart.
- Financial resources: financial capability of the contractor.
- Physical resources: the contractor’s manpower, equipment, etc.
- Experience: contractor experience on similar projects.
When the pre-qualification questionnaires are returned to the owner, data evaluation begins in order to eliminate contractors who do not meet the minimum requirements. After contractors are short-listed, a notification is sent to each, asking them to collect the project documents and bid. A general pre-qualification decision-making process is presented in Figure 2.
**ELEMENTS OF PRE-QUALIFICATION:**
This includes three major elements:
1. **LETTER TO CONTRACTORS (invitation):**
This letter is sent to each contractor asking him to pre-qualify. A typical letter may include the name of owner, a brief description of the project, and the source of pre-qualification documents.
2. **PRE-QUALIFICATION FORM:**
This consists of three parts:
a. **Information For The Contractor:**
i. **Objective and Scope of Work:** which includes construction sketches and project description in addition to the scope of work.
ii. **General Information:**
1. **Degree of Eligibility:**
This refers to the contractor’s capacity to be assigned one or more construction portions of a contract.
2. **Formation of Partnership or Joint Venture:**
A license proving the validity of the partnership or joint venture shall be submitted to the owner upon request.
3. **Bonding Capacity:**
Certificates of the bonding company must be attached, signifying its willingness to issue bid or performance bonds to the contractor. In addition, the names of the banks with which the contractor conducts business must be provided.
4. **Official Language:**
English is always preferable unless otherwise stated.
5. **Supply Materials:**
The owner may ask the contractor to procure materials from certain sources desired by him.
6. **Questionnaire Submission:**
The owner will specify a certain time, date, and location by which the contractor should submit his pre-qualification questionnaire.
7. **Beginning and duration of construction:**
The owner will insert in the pre-qualification form the recommended date to begin the project and its duration.
b. **Pre-Qualification Questionnaire:**
i. **Identification of the Contractor:**
Such as the name of the firm, home address, Fax, and phone. Moreover, it tells whether the firm is an individual, partnership, corporation, or joint venture.
ii. **Contractor Performance:**
A list of current construction contracts performed with details. Sometimes recommendations from the owners of previous projects are required.
iii. **Contractor’s Equipment:**
The amount, type, and condition of the contractor’s equipment are important.
iv. **Construction Ability:**
The ability of the contractor to complete the project should be thoroughly investigated.
v. **Completion Ability:**
The ability to meet reasonable completion dates successfully should be considered.
vi. **Client Relationship:**
The ability to work compatibly with the owner’s staff, and the contractor’s cooperativeness in the field, are important.
c. **Certification and Waivers:**
At the end of the pre-qualification form, each contractor will be asked to sign and declare the truth of all information. In addition, the owner may ask the contractor to write a waiver of claim and confidentiality.
3. **CONTRACTOR RATING STRATEGIES:**
All of these strategies examine and evaluate the data that arrive at the owner’s office from the candidate contractors.
a. **Dimensional Weighting:**
This process is based on criteria reflecting the characteristics and priorities of the owner. Once the criteria are established, contractors can be rated with respect to them. A contractor’s score is calculated as a weighted sum of ratings over all the criteria. The rank order of the scores can then be used for contractor selection (see Table 1). From these values, a cut line can be set, rejecting all contractors below it. Subjective judgment may still be used to make the final decision.
b. Two-step Pre-qualification:
In Step 1, contractors are qualified or disqualified based on how well they satisfy a number of preliminary screening dimensions; a contractor must meet these criteria to be eligible to proceed to the second step. The second step applies the dimensional weighting strategy, using more specific criteria to determine the competitiveness of the contractor as described above. The two-step approach allows rapid elimination of unwanted contractors.
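A minimal sketch of the two-step strategy in code (the contractor data, screening dimensions, and weights are all invented for illustration):

```python
# Hypothetical contractor data: pass/fail screening fields plus 1-5 ratings.
contractors = {
    "A": {"licensed": True,  "bonded": True,  "experience": 4, "finance": 5},
    "B": {"licensed": True,  "bonded": False, "experience": 5, "finance": 4},
    "C": {"licensed": True,  "bonded": True,  "experience": 3, "finance": 3},
}

# Step 1: preliminary screening -- a contractor must satisfy every
# screening dimension to proceed to the second step.
screened = {name: d for name, d in contractors.items()
            if d["licensed"] and d["bonded"]}

# Step 2: dimensional weighting on the survivors (weighted sum of ratings).
weights = {"experience": 10.0, "finance": 1.7}
scores = {name: sum(w * d[c] for c, w in weights.items())
          for name, d in screened.items()}

ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['A', 'C'] -- B was eliminated in step 1 before any scoring
```

Note how the cheap pass/fail pass removes contractor B before the weighted scoring ever runs, which is the point of the two-step design.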
CONTRACTOR DATA SOURCES:
These can be divided into two kinds: internal data and external data. Internal data describe the contractor’s performance on past projects done for the owner. They are much more reliable than any other source of data. The decision maker may find them in monthly progress reports and through discussion with the owner’s personnel who were in contact with the contractor. On the other hand, external data are gathered through:
- The questionnaire filled by the contractor.
- Some additional data source such as the banks, subcontractors, and suppliers the contractor deals with.
- Site visits to the projects currently being completed by the contractor.
Chapter 3
BASIC TECHNIQUE TO PRE-QUALIFY CONSTRUCTION CONTRACTOR:
This technique is divided into two processes: the paired-comparison criteria-weighting process and the matrix analysis process.
Paired comparison criteria weighting: The criteria differ between projects and owners’ needs, so they must be assigned different weight values according to their impact on the project. This strategy is called “paired comparison” (see Figure 3). The process is as follows:
I. List all criteria that are considered important.
II. Determine how important each criterion is to the owner and the project. The importance of one criterion over another can be major (3 points), medium (2), minor (1), or none (0).
III. Sum the total raw score of each criterion.
IV. Adjust the raw scores to a scale of 1 (low) to 10 (high).
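Steps I–IV above can be sketched as follows (the criteria and the point awards for each pair are invented for illustration):

```python
criteria = ["experience", "equipment", "finance", "reputation"]

# Step II: for each pair, the preferred criterion receives 3 (major),
# 2 (medium), 1 (minor) or 0 (none) points over the other.
preferences = {
    ("experience", "equipment"):  3,
    ("experience", "finance"):    2,
    ("experience", "reputation"): 3,
    ("equipment",  "finance"):    1,
    ("reputation", "equipment"):  1,
    ("reputation", "finance"):    2,
}

# Step III: sum the total raw score of each criterion.
raw = {c: 0 for c in criteria}
for (preferred, _), points in preferences.items():
    raw[preferred] += points

# Step IV: adjust the raw scores linearly to a 1 (low) - 10 (high) scale.
lo, hi = min(raw.values()), max(raw.values())
weights = {c: 1 + 9 * (raw[c] - lo) / (hi - lo) for c in criteria}
print(weights)  # the top criterion scales to 10.0, the bottom one to 1.0
```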
The evaluation matrix:
This is indicated in Figure 4 and can be expressed as follows:
I. Rate each contractor against each criterion. The scoring system used in the evaluation matrix assigns 1 (poor) to 5 (excellent).
II. Multiply each rating by the weight of the corresponding criterion.
III. Sum the total score of each contractor and rank the contractors; those with the highest total points are chosen to submit proposals.
| Contractor Name: | Grade: |
|------------------|--------|
| Address: | |

| CRITERIA | Weight | 5 Exc. | 4 V.G. | 3 Good | 2 Fair | 1 Poor | Score |
|-----------------------|--------|--------|--------|--------|--------|--------|-------|
| A Experience | 10 | | √ | | | | 40 |
| B Equipment | 3.3 | √ | | | | | 16.5 |
| C Financial Resources | 1.7 | | | √ | | | 5.3 |
| D Reputation | 3.3 | | | | √ | | 6.6 |
| **TOTAL SCORE** | | | | | | | **68.4** |

Figure (4): The Evaluation Matrix
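The arithmetic behind Figure 4 can be reproduced directly (the ratings are inferred from the listed per-criterion scores; note that the Financial Resources entry, 5.3, is slightly above 1.7 × 3 = 5.1, presumably a rounding artifact in the source):

```python
# (criterion, weight, rating on the 1-5 scale)
matrix = [
    ("Experience",          10.0, 4),   # 10.0 x 4 = 40.0
    ("Equipment",            3.3, 5),   #  3.3 x 5 = 16.5
    ("Financial Resources",  1.7, 3),   #  1.7 x 3 =  5.1 (the table lists 5.3)
    ("Reputation",           3.3, 2),   #  3.3 x 2 =  6.6
]
total = sum(weight * rating for _, weight, rating in matrix)
print(round(total, 1))  # 68.2, versus the 68.4 shown in Figure 4
```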
This technique cannot handle a large number of criteria, and the evaluator may become trapped when determining the preferences among the criteria.
Both quantitative and qualitative data analysis are crucial. The purpose of the quantitative analysis is to reveal which questionnaire items have a major influence and which have a minor influence on the contractor pre-qualification process. The purpose of the qualitative analysis is to test whether the means of the questionnaire items provided by the participants are statistically different at an assigned level of significance.
Figure (3): Determining Weights for Evaluation
MODEL PHILOSOPHY:
This model utilizes a dimensional weighting approach based on multiple-criterion decision-making. Each decision factor (criterion) used for the evaluation, and its weight, are determined based on the preferences of the decision maker.
The following assumptions are associated with this approach:
1. The impact of each criterion can be quantified on a numerical scale from 1 (unsatisfactory) to 10 (excellent).
2. The numerical value can reasonably be obtained from the pre-qualification (PQ) questionnaire prepared by the decision maker and filled in by the contractor.
3. Decision parameters can be added or deleted independently, without affecting the model’s other parameters.
In order to develop the PQ model, two types of parameters need to be determined: the Composite Decision Factor (CDF) and the Decision Factor (DF). A CDF represents a single construct made up of interrelated DFs (see Figure 5).
Once the CDFs and their associated DFs are determined, the decision maker will give each a weight according to its influence on the PQ process.
THE CALCULATION OF MODEL PARAMETERS:
Calculation of Decision Weights:
This determines to what extent each decision factor and sub-factor impacts the PQ decision process, using a scale from zero (no impact) to four (very high impact). The responses for each CDF are translated into weights according to the following steps:
a. Calculate the mean impact (DFMI) for each DF included in each CDF.
b. Calculate the weight of each DF from the equation:
\[ w_{ij} = \frac{DFMI_{ij}}{\sum_{j=1}^{m_i} DFMI_{ij}} \] ---(Eq. 1)
Where: \( w_{ij} \) = the weight of the \( DF_j \) associated with the \( CDF_i \).
\( DFMI_{ij} \) = the mean impact of the \( DF_j \) associated with \( CDF_i \).
c. Calculate the mean impact for the CDF from equation:
\[ CDFMI_i = \frac{\sum_{j=1}^{m_i} DFMI_{ij}}{m_i} \] ---(Eq. 2)
Where: \( CDFMI_i \) = mean value of CDF.
\( m_i \) = the number of DFs in the CDF.
d. Calculate the weight of the CDF using the equation:
\[ w_i = \frac{CDFMI_i}{\sum_{i=1}^{n} CDFMI_i} \] ---(Eq. 3)
e. Find the aggregate score of the candidate contractor K using the following equation:
\[ AWS_K = \sum_{i=1}^{n} w_i \left( \sum_{j=1}^{m_i} w_{ij} R_{ijK} \right) \] ---(Eq. 4)
Where: \( AWS_K \): aggregate weighted rating for the contractor K.
\( n \): number of CDFs; \( m_i \): number of DFs in \( CDF_i \).
\( R_{ijK} \): score of the \( DF_j \) in the \( CDF_i \) for contractor K, on a scale of 1 (unsatisfactory) to 10 (excellent) for the specific project. The approach to calculating this value is described in the following paragraphs.
The CDFs are evaluated in levels, with one CDF at each level. At each level, three possible decisions exist:
1. To disqualify the contractor for this CDF and terminate the process (if \( R_{ijk} = 0 \)).
2. To disqualify the contractor for this DF and continue for the next DF (\( R_{ijk} = 0 \)).
3. To qualify the contractor (\( R_{ijk} = x \) where \( 1 \le x \le 10 \)).
Note that the difference between rules 1 and 2 is that in rule 1 the decision factor (called a critical decision factor) is considered so important that if the contractor fails it, the whole process terminates and a new contractor is evaluated. In rule 2, if the contractor fails the DF, only the score for that DF is set to zero and the system moves on to the next DF; the DF concerned is not considered critical.
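The weight calculations (Eqs. 1–3), the aggregate score (Eq. 4), and the critical-DF termination rule can be combined in one sketch. The data layout, the function name, and the choice of critical CDF are assumptions; the input values are taken from the Contractor A sample calculation later in this chapter:

```python
def aggregate_score(cdfs, ratings, critical=()):
    """cdfs: {cdf: {df: mean impact DFMI}}; ratings: {(cdf, df): R, 0-10}.
    Returns AWS_K, or None when a DF in a critical CDF scores zero (rule 1)."""
    cdfmi = {c: sum(dfs.values()) / len(dfs) for c, dfs in cdfs.items()}  # Eq. 2
    total_cdfmi = sum(cdfmi.values())
    aws = 0.0
    for c, dfs in cdfs.items():
        w_i = cdfmi[c] / total_cdfmi                                      # Eq. 3
        inner = 0.0
        for d, mi in dfs.items():
            w_ij = mi / sum(dfs.values())                                 # Eq. 1
            r = ratings[(c, d)]
            if r == 0 and c in critical:
                return None          # rule 1: disqualify and terminate
            inner += w_ij * r        # rule 2: a zero rating adds nothing
        aws += w_i * inner                                                # Eq. 4
    return aws

# Contractor A from the sample calculation in this chapter:
cdfs = {"Financial":  {"Banking": 2, "Bonding": 4, "Statement": 3},
        "Experience": {"Success": 3, "Size": 2, "Similar": 2, "Types": 1},
        "Workload":   {"Current": 3}}
ratings = {("Financial", "Banking"): 3, ("Financial", "Bonding"): 4,
           ("Financial", "Statement"): 8, ("Experience", "Success"): 4,
           ("Experience", "Size"): 3, ("Experience", "Similar"): 6,
           ("Experience", "Types"): 5, ("Workload", "Current"): 7}
print(round(aggregate_score(cdfs, ratings), 3))  # 5.635
```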
**MODEL ADVANTAGES:**
- Simple and understandable.
- Provides a systematic approach for evaluating all candidates.
- The calculated aggregate weighted score for each contractor provides a basis for comparison.
**DATA COLLECTION:**
For this type of data analysis, questionnaires distributed among the contractors are the basis of this PQ model technique. The author of this thesis selected three main contractors in Bahrain. Questionnaires consisting of 16 CDFs and 37 DFs were distributed among them to conduct this study. A sample calculation for this technique is shown in the following Sample Data Calculation.
SAMPLE DATA CALCULATION:
For this sample calculation, three CDFs with eight (8) DFs were used to conduct the PQ study for three different contractors. To evaluate \( R_{ijk} \), the score for each DF, the questionnaire should address these criteria during information gathering, and the evaluation should be consistent with the scoring of the questionnaire results.
**Contractor A:**
| CDF(1) | DF | DFMI | Wij | CDFMI | Wi | Rijk | Wij*Rijk | AWSK1 |
|--------|---------------------|------|-------|-------|-------|------|----------|-------|
| | Banking Arrangement | 2 | 0.222 | | | 3 | 0.667 | |
| | Bonding Capacity | 4 | 0.444 | | | 4 | 1.778 | |
| | Financial Statement | 3 | 0.333 | | | 8 | 2.667 | |
| | Total | 9 | | 3 | 0.375 | | | 5.111 |

| CDF(2) | DF | DFMI | Wij | CDFMI | Wi | Rijk | Wij*Rijk | AWSK2 |
|--------|-----------------------------------|------|-------|-------|-------|------|----------|-------|
| | Success of Completed Projects | 3 | 0.375 | | | 4 | 1.500 | |
| | Size of Completed Projects | 2 | 0.250 | | | 3 | 0.750 | |
| | No. of Similar Completed Projects | 2 | 0.250 | | | 6 | 1.500 | |
| | Types of Completed Projects | 1 | 0.125 | | | 5 | 0.625 | |
| | Total | 8 | | 2 | 0.250 | | | 4.375 |

| CDF(3) | DF | DFMI | Wij | CDFMI | Wi | Rijk | Wij*Rijk | AWSK3 |
|--------|------------------|------|-----|-------|-------|------|----------|-------|
| | Current Workload | 3 | 1 | 3 | 0.375 | 7 | 7.000 | 7.000 |

$AWSK(A)\ \text{Total} = W_1(5.111) + W_2(4.375) + W_3(7.000) = 0.375(5.111) + 0.250(4.375) + 0.375(7.000) = 5.635$
**Contractor B:**
| CDF(1) | DF | DFMI | Wij | CDFMI | Wi | Rijk | Wij*Rijk | AWSK1 |
|--------|---------------------|------|-------|-------|-------|------|----------|-------|
| | Banking Arrangement | 2 | 0.222 | | | 5 | 1.111 | |
| | Bonding Capacity | 4 | 0.444 | | | 3 | 1.333 | |
| | Financial Statement | 3 | 0.333 | | | 7 | 2.333 | |
| | Total | 9 | | 3 | 0.375 | | | 4.778 |

| CDF(2) | DF | DFMI | Wij | CDFMI | Wi | Rijk | Wij*Rijk | AWSK2 |
|--------|-----------------------------------|------|-------|-------|-------|------|----------|-------|
| | Success of Completed Projects | 3 | 0.375 | | | 3 | 1.125 | |
| | Size of Completed Projects | 2 | 0.250 | | | 5 | 1.250 | |
| | No. of Similar Completed Projects | 2 | 0.250 | | | 4 | 1.000 | |
| | Types of Completed Projects | 1 | 0.125 | | | 8 | 1.000 | |
| | Total | 8 | | 2 | 0.250 | | | 4.375 |

| CDF(3) | DF | DFMI | Wij | CDFMI | Wi | Rijk | Wij*Rijk | AWSK3 |
|--------|------------------|------|-----|-------|-------|------|----------|-------|
| | Current Workload | 3 | 1 | 3 | 0.375 | 6 | 6.000 | 6.000 |

$AWSK(B)\ \text{Total} = W_1(4.778) + W_2(4.375) + W_3(6.000) = 0.375(4.778) + 0.250(4.375) + 0.375(6.000) = 5.135$
From this sample calculation, the three contractors can be ranked based on their earned $AWSK$ scores. In this case, Contractor A has the highest score, followed by Contractor C and then Contractor B.
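The AWSK scoring used in the sample calculation can be sketched in a few lines of Python. This is a minimal illustration rather than the thesis author's implementation; it assumes CDF measures CDFMi of 3, 2 and 3 for the three CDFs (giving Wi = 0.375, 0.250 and 0.375, consistent with the reported totals) and normalizes Wij = DFMi / ΣDFMi within each CDF.

```python
# Minimal sketch of the AWSK scoring scheme (hypothetical helper, not from the thesis).
# Each contractor is described by a list of CDFs; each CDF carries its measure
# CDFMi and a list of (DFMi, Rijk) pairs for its decision factors.

def awsk(cdfs):
    """Return AWSK = sum_i Wi * sum_j Wij * Rijk over the given CDFs."""
    total_cdfm = sum(cdfm for cdfm, _ in cdfs)      # normalizer for Wi
    score = 0.0
    for cdfm, dfs in cdfs:
        wi = cdfm / total_cdfm                      # CDF weight Wi
        total_dfm = sum(dfm for dfm, _ in dfs)      # normalizer for Wij
        score += wi * sum(dfm / total_dfm * r for dfm, r in dfs)
    return score

# Contractor data from the sample tables: (CDFMi, [(DFMi, Rijk), ...])
contractor_a = [(3, [(2, 3), (4, 4), (3, 8)]),
                (2, [(3, 4), (2, 3), (2, 6), (1, 5)]),
                (3, [(3, 7)])]
contractor_b = [(3, [(2, 5), (4, 3), (3, 7)]),
                (2, [(3, 3), (2, 5), (2, 4), (1, 8)]),
                (3, [(3, 6)])]

print(round(awsk(contractor_a), 3))  # 5.635
print(round(awsk(contractor_b), 3))  # 5.135
```

Running this for the two contractors reproduces the reported totals of roughly 5.635 and 5.135, confirming the ranking of Contractor A above Contractor B.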
**CONCLUSION:**
From this research, it is clear that contractor PQ is crucial to avoiding poor quality and work delays. The mathematical models addressed here aim to minimize, if not eliminate, the subjectivity that can lead to undesirable results. The project owner should set the criteria required for the project and assign a weight to each criterion based on its importance from the owner's point of view. Questionnaires are then distributed among the bidders to pre-qualify them. Using the models explained above helps the owner determine a list containing only the qualified bidders.
**BIBLIOGRAPHY:**
1. Al-Alawi, Muhsen Ahmed; *Contractor Pre-qualification*; Dhahran: King Fahd University of Petroleum & Minerals; August 1991.
2. Clough, Richard H.; *Construction Contracting*; New York: John Wiley and Sons, Inc.; 1985.
3. “Getting the Right Contractor on the Right Job”; *Consulting Engineer*; May 1981; pp. 12-13.
4. Gooch, K. O., and John Caroline; *Construction for Profits*; Reston: Reston Publishing Company, Inc.; 1980.
5. Wynne, James D.; *Learning Statistics: A Common-Sense Approach*; New York: Macmillan Publishing Co., Inc.; 1982.
6. Russell, Jeffrey; “Model for Owner Pre-Qualification of Contractors”; *Journal of Management in Engineering*; January 1990; pp. 59-75.
Slides Overview
Financial Transparency
The Monthly District Financials are posted on the Financial Transparency page https://www.yourtahoeplace.com/ivgid/financial-transparency.
Through the first nine months of the Fiscal Year, District-wide revenues are $2,832,037 ahead of the projected budget and District-wide operating uses are $787,204 below the projected budget. In total, we are $3.62 million ahead of the year-to-date budget. For the month of March, we were $139,188 to the good due to a strong Spring Break season at Diamond Peak. While skier visits were slightly above average, skier revenue to date set a new record of $10,616,786.
With record activity at Diamond Peak, it is likely that a Budget Augmentation will be necessary. If needed, it will be submitted for Board consideration at a May or June Board of Trustees Meeting.
We will provide an update at the next Board of Trustees Meeting on May 22, 2019 with more information on the final financial results for the 2018-19 season at Diamond Peak.
The Capital Improvement Report for the second quarter of the Fiscal Year is now available on the Financial Transparency page.
Also a reminder that the Month and Year Ending June 30, 2018 (Pre-Audit) financials are now posted. As June 30 is the end of the Fiscal Year, these financials provide the final pre-audit numbers for the 2017-18 Fiscal Year. In addition, annual and quarterly reports are posted for previous fiscal years.
Venue Status Reports
Venue Status reports are available on a monthly basis for key venues and operations. Reports are prepared for Public Works, Parks & Recreation, Finance/Accounting, Risk Management, Human Resources along with Ski and Golf when they are in season.
These reports are used to provide the Board of Trustees and the community with a summary of the activities for each venue, including significant expenditures performed under the General Manager’s authority. For example, the Public Works status report for March notes that three new construction contracts were issued that month valued between $56,775.50 and $1,456,654.00 (Incline Ballfields).
In addition, it provides real time updates of construction in progress. For example, the December Public Works Status Report provides detailed information on the one major project currently underway. It notes the Original Contract Amount, Change Orders to Date, Current Total Contract Amount, Total Payments for Work Completed to Date, and Current Balance to Completion (including retainage). It also includes updates on two Sewer Pump Replacement projects.
This report also includes monthly updates on Public Works benchmarks. For example, customer service requests in March numbered 46, slightly below the three-year average for March of 49. For the Fiscal Year-to-Date, customer service requests are two below the three-year average of 789.
There were only two Trash Complaints (actual call-outs) in March. For the Fiscal Year-to-Date, complaints are 26 versus 329 the previous year.
Wastewater flow was 34 million gallons in March, just below the three-year average of 36 million. For the Fiscal Year-to-Date, total flows are 257 million gallons, below the three-year average of 271 million.
Additionally, the March report notes that seventeen more days of snow required staff to perform another 218 hours of snow removal. The crew had to replace the chains and cutting edges on all three of the loaders used for plowing.
The Finance/Accounting and Risk Management Status Report for December provides an update on the Sales Tax Refund by the State of Nevada and a number of other timely issues. It also outlined the District’s latest Risk Management and Safety Initiatives.
The Human Resources Status Report included updates on employee recruitment, training, community relations and worker’s compensation.
The Venue Status reports are typically posted by the middle of each month and can be accessed on the District’s “Resources” web page.
**Bidding Opportunities**
The District’s “Resources” web page also includes a Bidding Opportunities link for businesses and the community.
This section of the web page includes Invitations to Bid, a quarterly update of projects awarded in excess of $25,000 in value since April 30, 2015, and a link to pertinent Nevada Revised Statutes (NRS) code sections related to procurement and contracts.
In addition, it includes a link to planetbids.com, which is where interested parties can search for District bid opportunities and review all bid documents. For recent bidding opportunities, it includes a list of prospective bidders and bid results.
**Capital Projects Update**
**Design**
**WRRF Aeration System Improvements**
The aeration process of wastewater treatment supplies oxygen to facilitate the biological activity that converts raw sewage into treated wastewater effluent. The plant has six 200,000-gallon aeration basins with two jet aeration clusters per basin. These clusters use pressurized air to mix and recirculate the wastewater and provide the necessary oxygen to the microorganisms. The pressurized air is delivered by multistage centrifugal blowers that are metered by electronically operated valves in order to keep the correct balance of oxygen in the aeration basins at all times. This project funds the design and replacement of the aeration system equipment at the WRRF. The age of the equipment, the number of hours of operation, and condition assessments indicate the existing centrifugal blowers are at the end of their serviceable life. Additionally, the blowers are no longer supported by the manufacturer and replacement parts are difficult to acquire. Jacobs Engineering is working on the design documents, and final bid-level documents are scheduled to be completed in June 2019 to replace the aeration blowers and associated piping, valves and control system. The Engineering staff will then bid the construction project in July 2019, with construction beginning in September 2019 and substantial completion in May 2020.
SPS #1 – (Overflow Parking Lot)
The District owns 18 sewer pumping stations in Incline Village and Crystal Bay. Sewer Pump Station #1 collects 50 percent of the raw sewage in the District and transports it to the wastewater treatment plant on Sweetwater Road. If something were to happen to Sewer Pump Station #8, a direct bypass would send all of its raw sewage to Sewer Pump Station #1, which would then handle 75 percent of the raw sewage in the District. Constructed in the early 1970s, this station has provided reliable service. The station contains the mechanical and electrical equipment to pump sewage to the wastewater treatment plant. The equipment to be replaced as part of this project consists of the variable frequency drives for the three pumps. Jacobs Engineering is working on the design for the replacement of the three variable frequency drives (VFDs) and replacement of the motor control center (MCC). Final bid-level documents are expected in June 2019. The Engineering staff will then bid the construction project in July 2019, with construction anticipated for the fall/winter of 2019-20.
WPS 2-1 Incline – (Burnt Cedar Beach)
Water Pump Station 2-1 (WPS 2-1) is located at the Burnt Cedar Water Disinfection Plant (BCWDP) and pumps the disinfected potable water into the water distribution system to serve Incline Village and Crystal Bay. WPS 2-1 was largely constructed in 1972, with minor upgrades in 1995 and 2012. The electric motor control centers (MCCs) and switchgear at WPS 2-1 date to the original 1972 installation. This equipment does not meet modern OSHA requirements for arc flash safety, and the MCCs and switchgear are at the end of their service life and no longer supported by the respective manufacturers. Jacobs Engineering is working on the design for the replacement of the three water pump motor soft starts and replacement of the motor control center (MCC). Final bid-level documents are expected in May 2019. The Engineering staff will then bid the construction project in May 2019, with construction beginning in August 2019 and substantial completion in April 2020.
Mountain Clubhouse
On August 11, 2018, a fire occurred in the Mountain Golf Course Clubhouse that destroyed the kitchen area. Smoke damage was incurred throughout the facility, affecting walls, flooring and mechanical systems. The District's insurance coverage is for replacement. However, evaluation of the best long-term solution for operations indicates that a revised allocation of floor space, changes to access, and a substantial change to customer flow require a makeover of the floor plan. These changes facilitate other objectives, including resolution of a long-standing issue of ADA accessibility to the lower level for food service.
The Smith Design Group has completed design documents which have been submitted to Washoe County for permits. Once we receive comments, bid level documents will be finalized. The project will be brought to either the May 22, 2019 or June 19, 2019 Board meeting for approval to bid.
If approved, the project will then either be administered through the insurance company or publicly bid. Construction is scheduled for fall 2019 with substantial completion prior to the 2020 golf season. In the near term, to facilitate utilization of the building for the 2019 golf season, the interior will be painted and the floors will be carpeted.
In addition, a follow up meeting was held on March 22 with members of the Mountain Course Golf clubs. The representatives were provided with a status update on the project.
**Construction**
**Repair Deck, Stairs, and Powder Coat all Patio Deck Railings**
This project will replace the railings and southern stairway on the eastside deck at the Recreation Center. The Board awarded the contract to Bruce Purves Construction on April 10, 2019. Notice to Proceed will be issued on or about April 25, 2019. The Project is expected to be substantially complete by June 24, 2019.
**Water Reservoir Safety and Security Improvements**
This project would replace the ladders that access the top of the water reservoirs, install intermediate access platforms, install protective railings and install new fall protection devices. The exterior access to the roof area is required to meet the needs of the District to monitor the water quality in the reservoirs and perform routine repairs to radio communication equipment. The ladders also need to be secured from access by the public. The reservoir ladders, fall protection, platforms, and protective railings will meet the current Occupational Safety and Health Administration (OSHA) safety standards. The Board awarded the contract to Resource Development Company on April 10, 2019. Construction will begin this summer and is expected to be substantially complete by June 30, 2020.
**ADA Access to Golf Course Bathrooms (Mountain Course)**
The Mountain Golf Course on-course bathrooms at holes #6 and #13, and the site surrounding the restrooms, are not in compliance with current Americans with Disabilities Act (ADA) requirements for access due to excessive cross slopes between the golf cart parking and the restroom entryways. This project will re-construct the cross slopes and pave access from the golf cart parking to the restroom entryways. The project was awarded to Colbre Grading and Paving at the January 23, 2019 Board Meeting. Construction will begin as weather permits and is expected to be substantially completed in July 2019.
**Incline Park Ballfields Renovations**
The project was awarded to Rapid Construction at the March 18, 2019 District Board meeting. The project was reduced in scope to only improvements at Field #3 to include:
- New Baseball specific Turf Infield, Drainage, and Irrigation
- New outfield specific French Drain
- New Scoreboard with naming rights panel
- New Modular Batting Cages with retaining/seating wall
- New Foul Poles
- Expansion of outfield dimensions, fencing replacement, and renovated outfield warning track
- New Backer Board at Backstop/Includes padding
- New enclosed custom modular Dugouts with equipment storage
- Site Signage Improvements
The project will begin this spring as the weather allows and will be substantially completed by August 30, 2019.
**Burnt Cedar Pool**
The Burnt Cedar Pool, constructed in the 1970s, features a skimmer-type recirculation system. The piping system and turnover times are undersized and problematic for pool clarity. This project will replace the pipes from the mechanical room to the edge of the Burnt Cedar Pool. During construction, when the piping is exposed at the edge of the pool, both visual and camera inspections will be completed on the piping from the edge of the pool to the bottom of the pool in an effort to scope the next phase, scheduled for the fall of 2019. Piping replacement is currently under construction and is scheduled to be completed by April 30.
**Other Projects**
The grant-funded Incline Creek Restoration project, located on Hole 14 of the Championship Golf Course, is currently being publicly bid, with the bid opening on April 25, 2019 and construction slated for post-Labor Day 2019.
**IVGID Quarterly**
The April edition has now been distributed. This Quarterly includes the Spring-Summer Recreation Guide along with features on the Incline Village Library and the IVGID Appreciation Days.
Washoe County Federal Lands Bill
On September 12, 2018 I sent you a letter from the Chair of the Washoe County Board of County Commissioners regarding the status of the Washoe County Economic Development and Conservation Act (also referred to as the Washoe County Federal Lands Bill).
The letter informed IVGID that the County would not be able to include any of our parcels in its request for federal legislation.
In each case, the land was removed in part due to opposition from the U.S. Forest Service. Washoe County did indicate that the U.S. Forest Service would be willing to entertain proposals for a potential lease of the parcels by IVGID, which has always been our understanding.
On October 5, 2018, Washoe County Commissioner Berkbigler and Jamie Rodriguez, Washoe County Government Affairs Manager, toured the Forest Service parcel across from Incline High School. This is one of the parcels included in IVGID's December 2016 request for inclusion in the Washoe County Lands Bill.
Washoe County Commissioner Berkbigler and Ms. Rodriguez were educated about the benefits that could accrue to both the U.S. Forest Service and IVGID from a potential transfer of this property.
Ms. Rodriguez volunteered to facilitate a meeting between IVGID and the U.S. Forest Service to discuss the potential benefits in more detail. The U.S. Forest Service has not yet provided a time for a potential meeting.
Director of Golf
Darren Howard started as the Director of Golf/Community Services on April 15. Darren has over three decades of experience in the industry, most recently serving as CEO/General Manager at The Clubs at Houston Oaks in Texas. He has a Bachelor of Science in Business Marketing from the University of Tennessee (Chattanooga) and has extensive experience in all aspects of golf course operations as well as food and beverage, marketing, and resort management. Along with the Golf staff, Darren will also be overseeing all District staff responsible for food and beverage, banquets, and marketing.
One of his subordinate staff will be Ashley Wood, who also started last Monday as the Head Golf Professional at the Mountain Golf Course. Ashley is an Incline Village native and was an active participant in our Junior Golf Program before San Diego State University granted her a golf scholarship. Ashley obtained both undergraduate and graduate degrees from San Diego State and most recently served as the Head Golf Professional at The Presidio Golf Club in San Francisco.
Staff conducted a meet and greet with representatives from the Golf Community on April 18 to introduce both Darren and Ashley. Please join me in welcoming Darren and Ashley to the IVGID team!
**Additional Staffing Updates**
Three of our outstanding employees have accepted career advancement opportunities at nearby utility and recreation districts.
Principal Engineer Charley Miller’s last day with the District will be Friday, April 26. Charley will be managing the Engineering Staff at the Tahoe City Public Utility District.
Events Manager Cathy Becker will be leaving us in early May to take over the North Tahoe Events Center in Kings Beach for the North Tahoe Public Utility District.
Communications Coordinator Misty Moga will also be heading over to Kings Beach in May. Misty was recruited to become the Board Clerk for the North Tahoe Public Utility District.
These folks will be sorely missed, so please join me in thanking them for their great contributions to our District. And please wish them well in their new, challenging endeavors.
Staff Directory

| Sl. No | Name of the Staff | Designation | Department | Contact Address | Contact No |
|---|---|---|---|---|---|
| 1 | Ajithambili K | Senior Supdt. | Administrative Office | Senior Supdt.,Administrative Office | 9446368897 |
| 2 | Biju T | Office Attendant | Administrative Office | Office Attendant,Administrative Office | 8301954104 |
| 3 | Bindu K S | Head Accountant | Administrative Office | Head Accountant,Administrative Office | 9495266496 |
| 4 | Geethadevi D | Senior Clerk (Establishment) | Administrative Office | Senior Clerk (Establishment),Administrative Office | 9447695052 |
| 5 | Lucy Mathews | Clerk (Academic) | Administrative Office | Clerk (Academic),Administrative Office | 8157975816 |
| 6 | Preethi N Pillai | Clerk (Billing) | Administrative Office | Clerk (Billing),Administrative Office | 9447179192 |
| 7 | Sabeena Beevi | U D Typist | Administrative Office | U D Typist,Administrative Office | 9495114920 |
| 8 | Sheeba P R | Clerk (Purchase) | Administrative Office | Clerk (Purchase),Administrative Office | 9048570564 |
| 9 | Shiju Kumar | Office Attendant | Administrative Office | Office Attendant,Administrative Office | 8606557263 |
| 10 | Sibu Nair | U D Typist | Administrative Office | U D Typist,Administrative Office | 9947044194 |
| 11 | Vimal Kumar G | Clerk(Examination & Scholarships) | Administrative Office | Clerk(Examination & Scholarships),Administrative Office | 8075627189 |
| 12 | Biju George | Head of Department | Automobile Engg | Head of Department,Automobile Engg | 9188003749 |
| 13 | Arun G | Lecturer | Automobile Engg | Lecturer,Automobile Engg | 8129293531 |
| 14 | Jijeesh Mon G R | Lecturer | Automobile Engg | Lecturer,Automobile Engg | 9567271987 |
| 15 | Philip J Nadackal | Lecturer | Automobile Engg | Lecturer,Automobile Engg | 9400630316 |
| 16 | Sabu George | Lecturer | Automobile Engg | Lecturer,Automobile Engg | 9496317281 |
| 17 | Sreejith K | Lecturer | Automobile Engg | Lecturer,Automobile Engg | 9747544611 |
| 18 | Anand G | Workshop Instructor | Automobile Engg | Workshop Instructor,Automobile Workshop | 9496478008 |
| 19 | Arun G S | Workshop Instructor | Automobile Engg | Workshop Instructor,Automobile Workshop | 9495719886 |
| 20 | Byju M G | Assistant Professor | Chemistry | Assistant Professor,Chemistry | 9446094220 |
| 21 | AkhilaLekshmi N H | Lecturer | Civil Engineering | Lecturer,Civil Engineering | 9605238996 | firstname.lastname@example.org |
|---|---|---|---|---|---|---|
| 22 | Bindu O S | Lecturer | Civil Engineering | Lecturer,Civil Engineering | 9446186714 | email@example.com |
| 23 | Raveena R | Lecturer | Civil Engineering | Lecturer,Civil Engineering | 9645239644 | firstname.lastname@example.org |
| 24 | T.G.Santhosh Kumar | Head of Department | Civil Engineering | Head of Department,Civil Engineering | 9447981409 | email@example.com |
| 25 | Thara Prasad | Lecturer | Civil Engineering | Lecturer,Civil Engineering | 9605839878 | firstname.lastname@example.org |
| 26 | Anupama K G | Demonstrator | Computer Engg | Demonstrator,Computer Engg | 8547506231 | email@example.com |
| 27 | Asha Chandran T | Lecturer | Computer Engg | Lecturer,Computer Engg | 9048003405 | firstname.lastname@example.org |
| 28 | Babida Vamadevan | Demonstrator | Computer Engg | Demonstrator,Computer Engg | 9446021839 | email@example.com |
| 29 | Deepthi V | Lecturer | Computer Engg | Lecturer,Computer Engg | 9544932316 | firstname.lastname@example.org |
| 30 | Devadas K | Tradesman | Computer Engg | Tradesman,Computer Engg | 9495074772 | email@example.com |
| 31 | Harikumar K | Trade Instructor | Computer Engg | Trade Instructor,Computer Engg | 9446093308 |
|---|---|---|---|---|---|
| 32 | Jagadeesh Kumar S | Trade Instructor | Computer Engg | Trade Instructor,Computer Engg | 9447860496 |
| 33 | Manoj Kumar S | Tradesman | Computer Engg | Tradesman,Computer Engg | 9495120433 |
| 34 | Reena S | Lecturer | Computer Engg | Lecturer,Computer Engg | 9447664300 |
| 35 | Roy Thomas | Head of Department | Computer Engg | Head of Department,Computer Engg | 9495302953 |
| 36 | Shajila Beegam M.K | Lecturer | Computer Engg | Lecturer,Computer Engg | 9847226846 |
| 37 | Shyjila P A | Lecturer | Computer Engg | Lecturer,Computer Engg | 9847990581 |
| 38 | Vinod Kumar | Lecturer | Computer Engg | Lecturer,Computer Engg | 8547941772 |
| 39 | Anil B | Demonstrator | Electronics Engg. | Demonstrator,Electronics Engg. | 9400701905 |
| 40 | Anjali Anand | Tradesman | Electronics Engg. | Tradesman,Electronics Engg. | 9947769432 |
| 41 | Arun Sasidharan | Lecturer | Electronics Engg. | Lecturer,Electronics Engg. | 9567247090 |
|---|---|---|---|---|---|
| 42 | Deepa S | Trade Instructor | Electronics Engg. | Trade Instructor,Electronics Engg. | 9746575825 |
| 43 | Deepu K | Trade Instructor | Electronics Engg. | Trade Instructor,Electronics Engg. | 9846464658 |
| 44 | Gigimol George | Lecturer | Electronics Engg. | Lecturer,Electronics Engg. | 9446603780 |
| 45 | Joice Mathew | Demonstrator | Electronics Engg. | Demonstrator,Electronics Engg. | 8281824412 |
| 46 | Kamala S | Tradesman | Electronics Engg. | Tradesman,Electronics Engg. | 9048376626 |
| 47 | Mini C | Lecturer | Electronics Engg. | Lecturer,Electronics Engg. | 8593989172 |
| 48 | Priyanka Prakash | Lecturer | Electronics Engg. | Lecturer,Electronics Engg. | 9747435765 |
| 49 | Reena R | Lecturer | Electronics Engg. | Lecturer,Electronics Engg. | 9447113892 |
| 50 | Shibu R S | Head of Department | Electronics Engg. | Head of Department,Electronics Engg. | 7025334623 |
| 51 | Angelina Elizabeth Oommen | Asst. Professor (Adhoc) | English | Asst. Professor (Adhoc),English | 9562726902 |
|---|---|---|---|---|---|
| 52 | Jalson Jacob | Assistant Professor | English | Assistant Professor,English | 9496162089 |
| 53 | Binu V G | Workshop Instructor | General Workshop | Workshop Instructor,General Workshop | 9400377610 |
| 54 | Usha K | Trade Instructor | General Workshop | Trade Instructor,General Workshop | 9446563599 |
| 55 | Manzoor R | Tradesman | General Workshop | Tradesman,General Workshop | 9847222334 |
| 56 | Prasad Chandran | Trade Instructor | General Workshop | Trade Instructor,General Workshop | 9495374315 |
| 57 | Rajendran K.P | Trade Instructor | General Workshop | Trade Instructor,General Workshop | 9747205907 |
| 58 | Satheesh Kumar N V | Trade Instructor | General Workshop | Trade Instructor,General Workshop | 9400322699 |
| 59 | Satheesh Kumar T | Workshop Instructor | General Workshop | Workshop Instructor,General Workshop | 9400377610 |
| 60 | Shaju T B | Workshop Superintendent | General Workshop | Workshop Superintendent,General Workshop | 9495204101 |
| Sl.N o | Name of the Staff | Designation | Department | Contact Address | Contact No | Email Id |
|---|---|---|---|---|---|---|
| 61 | Sunil Kumar Sasi | Tradesman | General Workshop | Tradesman,General Workshop | 9846191971 | firstname.lastname@example.org |
| 62 | Ujjwalakumar U | Tradesman | General Workshop | Tradesman,General Workshop | 7592879573 | email@example.com |
| 63 | Venugopal V D | Trade Instructor | General Workshop | Trade Instructor,General Workshop | 9605495636 | venugopal.@gmail.com |
| 64 | Sakhilekha | Librarian | Library | Librarian,Library | 9495430250 | firstname.lastname@example.org |
| 65 | Kavitha Mathew | Assistant Professor | Mathematics | Assistant Professor,Mathematics | 9497775345 | email@example.com |
| 66 | Sunil M P | Assistant Professor | Mathematics | Assistant Professor,Mathematics | 9495054703 | firstname.lastname@example.org |
| 67 | Rohit M | Assistant Professor | Physics | Assistant Professor,Physics | 9496842940 | email@example.com |
British American Tobacco’s Youth Smoking Prevention Campaign: What are its true objectives?
Research and Analysis
Dhaka, Bangladesh, August 2001
Research and Analysis:
Raton Deb
Aminul Islam Sujon
Syed Mahbubul Alam
Rafiqul Islam Milon
Syed Samsul Alam
Oliur Rahman
Amit Ranjan Dey
Report written by:
Debra Efroymson, Regional Director, PATH Canada Asia
Acknowledgments:
Emma Must, Tobacco Control Advisor, PATH Canada
I. Introduction
“If any cigarette company told me not to smoke, I’d think it was some sort of slyness on their part.”\(^1\) --13-year-old male student
“They make the cigarettes, then will tell us not to smoke them—isn’t there any other target for their mischief?” --15-year-old male student
On 28 July 2001, British American Tobacco (BAT) launched its “Youth Smoking Prevention Campaign”. BAT’s messages consist of a 30-second television ad, three one-minute radio scripts, a billboard, and a sticker. BAT claims that it sees smoking as an adult choice, that those under age 18 should not smoke, and that BAT feels the responsibility to curtail/prevent youth smoking. BAT says parents, retailers, media and the government can all play their part to prevent youth smoking.\(^2\)
Is BAT truly, as it claims, a responsible company seeking to address the problem of youth smoking? Or is the whole campaign in fact a clever public relations scheme, intended to stop attempts at legislation and to deflect criticism from the manufacturers of the only consumer product in the world which, when used as intended, kills its user?
This report looks at tobacco company youth smoking prevention campaigns in general, at BAT Bangladesh's campaign in particular, and at information from formerly private industry documents. It also includes findings from a focus group and a survey of 300 youth; details of the research are included in the Appendix. We hope it inspires you to resist similar programs in your own country. For more information on responding to such programs, we suggest you also visit the website of Essential Action: www.essentialaction.org/tobacco
II. Why do tobacco companies run youth smoking prevention campaigns?\(^3\)
“Before doing any publicity to prevent smoking, the cigarette companies should stop manufacturing cigarettes.”
“It doesn’t make sense for cigarette companies to discourage people from smoking.” --13-year-old male student
A worldwide strategy
According to BAT’s materials,\(^4\) Philip Morris (manufacturer of Marlboro), BAT and Japan Tobacco have promoted youth smoking prevention campaigns in almost 70 countries. Why are the big tobacco companies so interested in preventing youth from smoking?
The tobacco industry is facing increased regulation around the world. The World Health Organization is working with governments to negotiate an international treaty, the Framework Convention on Tobacco Control, to regulate tobacco at the international level and strengthen individual countries’ laws. Many countries have passed laws that limit or ban advertising, make many public places smoke-free, and require large and strongly-worded warnings on cigarette packs. Such measures have been shown to reduce smoking in the general population and among youth. Measures such as tax increases and ad bans are particularly successful with youth, as youth are far more price sensitive, and far more susceptible to advertising, than adults.
---
\(^1\) Quotes from students are drawn from focus group and survey research conducted in Bangladesh in August 2001, and translated into English.
\(^2\) BAT, *Be smart, a campaign for youth smoking prevention*, 2000.
\(^3\) This section draws heavily on one report: Cancer Research Campaign and Action on Smoking and Health (London), *Danger! PR in the Playground: Tobacco industry initiatives on youth smoking*, 2000. www.ash.org.uk
\(^4\) BAT, 2000.
According to the World Bank, “the impact of higher taxes is likely to be greatest on young people, who are more responsive to price rises than older people”. That is, if the price of cigarettes goes up, far more youth than adults will give up the habit. In the United States, about 86% of youth smokers prefer the three most heavily-advertised brands, as opposed to about a third of adult smokers. Children aged 10-12 who approve of cigarette advertising are twice as likely to become smokers within a year as children who disapprove of it. Children as young as three recognize cigarette ads. It is children, not adults, who buy the most heavily advertised brands.
If the tobacco industry were serious about reducing smoking among youth, they would support, or at least not oppose, such measures. In fact, transnational tobacco companies like BAT strongly oppose such laws. They wish to distract government and others from insisting on such legislation by promoting voluntary agreements instead.
A lawsuit in the United States forced tobacco companies to make available millions of internal documents. Those documents clearly show the intent of tobacco companies in designing youth programs. In those documents, we can see that the tobacco industry decided to start conducting youth campaigns to convince governments not to pass legislation; to reinforce the belief that peer pressure, not advertising, is the cause of youth smoking; and to seize the political center and force the anti-smokers to an extreme.
A 1991 Tobacco Institute memo clearly describes BAT’s current practice:
“In order to offset further erosion of the industry’s image in this area, and to avoid further legislative forays, the tobacco industry should take two actions: Clearly and visibly announce our position on teenage smoking to the public generally and to leaders of all youth-oriented organizations [and]...A program to depict cigarette smoking as one of many activities some people choose to do as adults.”
A 1991 Asian Tobacco/BAT document is similarly straightforward about the objectives of a youth campaign:
“We need to ask ourselves whether as an industry we could be turning our declared belief that we have no interest in recruiting children and by that I mean sub-teenagers—to more practical account. Much of what we have done around the world has been desultory and patchy—and yet being seen to cooperate on this particular issue has many positive public relations and public affairs benefits; is often relatively inexpensive to mount, and usually very difficult for the opposition effectively to counter without appearing sour and over-critical.”
---
\(^{5}\) World Bank, *Curbing the Epidemic: Governments and the economics of tobacco control*, 1999. www.worldbank.org
\(^{6}\) Cancer Research Campaign and Action on Smoking and Health (London), *Danger! PR in the Playground: Tobacco industry initiatives on youth smoking*, 2000. www.ash.org.uk
\(^{7}\) UICC, *Tobacco Control Fact Sheet 1: The case for banning advertising and promotion of tobacco*. www.uicc.org
\(^{8}\) Cancer Research Campaign and Action on Smoking and Health (London), 2000.
\(^{9}\) Cancer Research Campaign and Action on Smoking and Health (London), 2000.
The tobacco industry’s youth prevention campaigns have certain points in common.\(^{10}\)
- Involvement of authority figures, such as parents, teachers, and government officials. These are precisely the figures against whom teenagers rebel when they smoke.
- Absence of figures that are popular among youth—the sorts of people that youth wish to emulate, such as race car drivers, rock stars, and cricket players.
- Reinforcement of the message that smoking is an adult activity—which is precisely why so many teenagers smoke. They perceive smoking as adult, and wish to be adult. Telling people not to smoke before they turn 18 gives children and youth an easy way to show that they are grown-up.
- Absence of any mention of why smoking is a problem: that it causes 25 different diseases, including cancer, heart disease, and respiratory problems; that it causes impotence in men; that it is addictive; that cigarette smoke contains 4000 different chemicals including 40 known carcinogens; that cigarette smoke causes disease in non-smokers; that one in every two or three long-term smokers will die from smoking-related causes.
Do tobacco companies target youth?
“The younger smoker is of pre-eminent importance: significant in numbers, ‘lead in’ to prime market, starts brand preference patterning....” (Brown & Williamson [BAT], 1974)\(^{11}\)
“The loss of younger adult males and teenagers is more important to the long term, drying up the supply of new smokers to replace the old. This is not a fixed loss to the industry; its importance increases with time.” (RJ Reynolds, 1982)\(^{12}\)
Is it possible to believe that tobacco companies are serious about discouraging youth from smoking? Most people begin smoking before age 20, many as young as 12. People start as an experiment, to feel like an adult, and then become addicted. People also choose their brand fairly early, then tend to stick to it for most of their life. Most adults have chosen their brand, and will not change it, no matter how much advertising there is for other brands. Teenagers, on the other hand, tend to smoke the cigarettes that are most heavily advertised. If a company can win over a teenager, it is likely to keep him or her for life. Companies that do not recruit teenagers will eventually dwindle and die, as their adult smokers either quit smoking or die. Teenagers are the pool of replacement smokers, and if the tobacco companies don’t actively recruit them, they will not be able to stay in business, and certainly will not be able to sustain the sort of growth that they now enjoy.\(^{13}\)
Given the importance of youth smokers to tobacco companies, any effort on their part to convince youth \textit{not} to smoke must be looked at with skepticism. Why would any company try to chase away its own customers, especially those customers who
\(^{10}\) Cancer Research Campaign and Action on Smoking and Health (London), 2000.
\(^{11}\) Cancer Research Campaign and Action on Smoking and Health (London), 2000.
\(^{12}\) Cancer Research Campaign and Action on Smoking and Health (London), 2000.
are most vital to its survival? On the other hand, the appearance of discouraging youth from smoking is an important contribution to the business.
Aren’t voluntary agreements enough?
“Opportunities should be explored…so as to find non-tobacco products and other services which can be used to communicate the brand or house name, together with their essential visual identifiers…The principle is to ensure that tobacco lines can be effectively publicized when all direct forms of communication are denied.”
(BAT, 1979)\(^{14}\)
“The concerts are for the purpose of selling more Benson & Hedges cigarettes.”
--young male student
Tobacco companies say that voluntary agreements are sufficient; that no strong legislation to control tobacco is needed. But the companies also violate the laws that do exist, and look for loopholes when laws become stronger.
Although the BAT quote above is old, the words describe BAT’s current behavior—the use of the “&” symbol to signify Benson & Hedges (B&H) cigarettes, as well as the use of John Player and B&H signs and logos on shops, without mention of cigarettes. If BAT is already trying to ensure its advertising will continue following a possible ban, how can it be trusted to change its behavior voluntarily?
BAT and other tobacco companies claim that they have changed their behavior; that in the past, they may not have been forthcoming about the health effects of smoking, or about their efforts to advertise to children, but that all that has changed. Where is the evidence of change? BAT still does not fully acknowledge that smoking is addictive, and it denies altogether the incontrovertible evidence that cigarette smoke causes disease in others. It denies, despite the abundant evidence, that it has been involved in smuggling of cigarettes—in Bangladesh as well as other countries.\(^{15}\) It continues to oppose any meaningful legislation of tobacco, raising of taxes, and an effective global tobacco control treaty. And its ads continue to use the idols of youth: rock musicians, race car drivers, sailors of a yacht on a great adventure.
III. Critique of BAT Bangladesh’s Youth Smoking Prevention Campaign
“If a cigarette company said not to smoke, I would be astonished, and would think that they have concerts and ads to promote cigarettes, yet are telling me not to smoke.”
--14-year-old male student
General remarks
The BAT Bangladesh program is based on BAT’s youth smoking prevention campaigns in other countries, and closely resembles the campaigns being conducted by other tobacco companies. BAT documents indicate that it is doing the campaign for the same reasons as the other companies—to delay legislation and to improve its public image.
BAT—in Bangladesh and elsewhere—claims that it sees smoking as an “informed choice made by adults only, who are in a position to balance the pleasures of smoking against the inherent risks”.\(^{16}\) Let us analyze that phrase.
\(^{14}\) UICC, Tobacco Control Fact Sheet 1.
\(^{15}\) Campaign for Tobacco-Free Kids, Illegal Pathways to Illegal Profits: The Big Cigarette Companies and International Smuggling, 2001. www.tobaccofreekids.org
\(^{16}\) BAT, 2000.
“Informed” means that potential smokers have information that allows them to decide whether or not to smoke. From where are they to obtain that information? How are they to learn about “the inherent risks”? The only information BAT gives to smokers is the mandated government message, “Smoking is deleterious to health”. Is that information sufficient to make a decision? How is smoking deleterious? If people smoke light cigarettes, are they less likely to get sick? What if they smoke less than a pack a day? What sorts of diseases do smokers get? Can cigarette smoke harm others? Nowhere does BAT provide any of the more specific information that would be necessary to make a truly informed choice.
By “adults” BAT means people over the age of 18. How does one attempt to promote a product only to those over age 18? One could choose messages that are more popular among older adults than among teenagers: use of classical music, images of older smokers, avoidance of messages that have particular resonance with youth. One could regularly conduct research to see whether one’s advertisements are popular with teenagers, and then, rather than increasing use of those messages, one could stop using them. None of this is happening.
“Choice” implies free will. But the nicotine in cigarettes is extremely addictive, and evidence shows that cigarette companies manipulate the level of nicotine in cigarettes to ensure addiction is maintained. Nicotine is at least as addictive as heroin and cocaine, and at least as hard to give up. Do drug addicts “choose” to use drugs, or do they use them because their addiction compels them to? Is it really possible to talk about choice when discussing an extremely addictive substance? Even a BAT scientist admitted the truth:
“It has been suggested that cigarette smoking is the most addictive drug. Certainly large numbers of people will continue to smoke because they can’t give it up. If they could they would do so. They can no longer be said to make an adult choice.”
(BAT, 1980)\(^{17}\)
Focusing tobacco control activities exclusively on youth, and in isolation from effective measures, is a flawed approach. Beyond that, the ads that BAT is using in Bangladesh have many flaws of their own. The very nature of the ads is itself a problem: it allows BAT to advertise its name on billboards, radio, and TV without any warning about the dangers of its products.
**Focus group results**
“When you see golden color and an ‘&’ sign on a billboard, you understand that it’s Benson & Hedges.”
--focus group participant
In order to gain some understanding of youth exposure to BAT cigarette ads, Work for a Better Bangladesh (WBB) held a focus group and conducted a survey of 300 students in one boys’ school. The focus group had eight participants: two girls and six boys, all non-smokers aged 14-16. The participants said that they did not pay much attention to cigarette ads because they didn’t smoke, but revealed great knowledge of BAT and other cigarette advertising. For instance, the participants all recognized the “&” as a Benson & Hedges symbol. They could recall the slogans on various billboards, and one gave an explanation of an image of torn pants on a clothesline that has the slogan “One & Only”: “There is one pair of pants, and there’s one brand of cigarettes.” They said that rock concerts are mostly viewed by teenagers and those in their twenties, and were aware of the current Star Search program sponsored by Benson & Hedges.
\(^{17}\) Action on Smoking and Health (London), *Tobacco Explained, The truth about the tobacco industry in its own words*, 1998. [www.ash.org.uk](http://www.ash.org.uk)
They also described BAT cigarette ads as quite attractive, and said that some children watch TV for the sake of seeing the ads, not the programs, and are exposed to a lot of cigarette ads in the process. Finally, they said many of the cigarette ads send the message that smoking involves heroism and bravery.
Results of the questionnaire, focusing on young teenagers’ exposure to BAT ads, appear in the next section.
**BAT Bangladesh’s youth smoking prevention campaign**
*“If cigarette companies said not to smoke, I’d be really upset. Because they’re producing life-destroying products, then saying they’re forbidden.”*
--13-year-old male student
Are BAT’s youth smoking prevention ads designed to appeal to youth? What are the messages? Researchers showed the BAT youth smoking prevention TV and radio ads and the billboard design/sticker to the focus group participants, in order to understand how youth might react to BAT’s campaign.
Some of the focus group participants were familiar with the BAT youth smoking prevention campaign. They criticized the premise of the campaign, saying that smoking is harmful for everyone, not just those under age 18, and that making something forbidden increases its appeal for youth. Many of the survey participants had angry remarks about BAT conducting a prevention campaign among youth. To many, the contradiction was obvious: a company simultaneously promoting a product and telling part of its audience not to consume it. They said they would be astonished if a cigarette company told youth not to smoke, given all the effort the companies make through advertising, especially rock concerts, to attract young smokers. If the companies were serious about young people not smoking, they said, they should get out of the business altogether.
**Sticker/billboard**
*“This sticker is no good. When you first look at it, you can’t understand it. It could be an ad for Dano [powdered milk].”*
--focus group participant
The same image is used on the sticker and billboard. To the left is the word “no” with an attractive picture of smoke rings (learning to blow smoke rings may be one of the incentives to smoke). “No” resonates with parental authority—teachers, parents, and other adults telling teenagers what they shouldn’t do. To the right is the alternative: “We don’t smoke.” The bland message is accompanied by a suitably bland picture of six youths—four boys and two girls. One of the boys is holding his hat in the air, and one of the girls is holding up flowers; the boys are about to clasp hands. The boys are skinny and have no particular appeal. The background is white. The sterile image of the young non-smokers is in stark contrast to the more attractive, dreamy picture of smoke rings, and to the very attractive and sophisticated B&H and John Player Gold Leaf billboards and newspaper ads.
Some focus group participants said that the image used on the sticker and billboard is unattractive; others found it attractive but said it doesn’t make sense, and would require concentration to understand what the message is. They said B&H ads, with their dreamy sunset backgrounds, are far more attractive than the youth prevention image.
**Radio scripts**
There are three one-minute radio scripts.
*For working children*
A young boy’s boss asks him if he smokes. The boy hesitates, then admits he does: “I really want to be like the grown ups”. The boss replies, “…the more you learn, the more you will become an expert! No need to smoke. Understood?” The boy agrees. The voice-over states, “Everybody should come forward to prevent the underage from smoking,” and the final
song is, “We are free, we are independent, we are smart, we don’t smoke.” What is the problem with the script?
1. The script puts forward the idea that smoking is an adult activity, both in the boy’s words, and in the line about the underage not smoking. The boss counters the line, but only weakly.
2. The boss does not give any reasons not to smoke, nor does he state that he himself is a non-smoker. The idea that “the more you learn, the more you will become an expert” could include smoking as something that the boy can learn to be more grown up.
3. The line “Everybody should come forward to prevent the underage from smoking,” suggests the heavy hand of adult authority—precisely the authority against which youth rebel when they smoke. It also, as mentioned, further strengthens the idea that smoking is an adult activity—hence the reason for the boy’s smoking!
4. One way to demonstrate one is “free and independent” is to rebel against adult authority—in this case, against the boss, who tells the boy not to smoke.
Focus group participants said the ad was meaningless, as it offered no reasons not to smoke, and did not indicate whether or not the boss smoked. They said that the ad would be better if the boss said clearly that he doesn’t smoke.
**For guardians**
A mother is upset when she learns that her husband has again sent their son to buy cigarettes, as it might cause the son to adopt the habit. The boy’s father agrees not to do so anymore, and the voice-over states, “Every guardian should come forward to prevent the underage from smoking.” The script ends with the same song as above. What is the problem with this script?
1. The mother does not object to the boy’s father smoking; only to him sending the son to buy cigarettes. The ads neglect to mention a few facts: that the father’s smoking is itself a model to the boy to smoke; the diseases to which the father puts himself at risk by smoking; and the secondhand effects of smoking on the wife and children.
2. Again, the message reinforces that smoking is an adult activity, and something to which youth aspire.
3. Again, the authority figure is being pitted against the youth who aspires to adulthood—suggesting again that youth can successfully rebel and appear as adults if they smoke (which, in this case, is also modeling the father’s behavior, something which youth commonly do).
Focus group participants objected to the fact that the father does not offer to quit smoking—he only says he won’t send the son to buy cigarettes. They felt the ad would only have meaning if the father were to set a positive example for the son by not smoking. They said that as it is, the ad reinforces that smoking is not for youth—meaning that smoking proves adulthood. One participant remarked that forbidden goods have special appeal.
**For retailers**
A shopkeeper expresses surprise that a man is buying more cigarettes so soon, and guesses that the man’s son must be stealing cigarettes from his pack. The shopkeeper then announces his intention not to sell cigarettes to children. The voice-over this time addresses the responsibility of buyers and sellers. What is the problem with this script?
1. The script informs young people how to obtain cigarettes if they don’t have the money or are uncomfortable buying them themselves: just slip them from your father’s pack. It can become a sort of game, to see whether your father is smart enough to figure it out or not—whether or not you can get away with it!
2. Again, the suggestion is that adult men smoke (all adult men, judging by the radio scripts), but that children shouldn’t. “Smoking at this age is really a bad habit”, as the shopkeeper says, implies that there is nothing wrong with smoking if you are over age 18.
3. A new source of rebellion and adventure appears in this ad—the game of trying to buy cigarettes. Since there is no law to prevent them, they will of course succeed; in addition, retailers have an excuse, as the previous script reminds them that often children are buying cigarettes for their parents, not themselves.
Focus group participants’ response to the radio scripts was that none gave any reason for not smoking; that the fathers in the ads smoke, and the attraction is to be like one’s father; and that stores will never stop selling to minors, since they would lose money by doing so.
**Television ad**
The television ad shows a young boy trying to appear cool by smoking cigarettes. Meanwhile, his classmate gains success in cricket. A girl looks disgusted at the smoking, and joins the cricket player instead; meanwhile, the smoker crushes his (empty) pack of cigarettes and then goes to join them. What is the problem with this ad?
1. Compared to the BAT ads that appear on television for John Player Gold Leaf and B&H, the ad is poorly done. At only thirty seconds long, it is difficult to understand the messages, or to figure out who did what. The way the boy shows off his cigarette pack, then crumples it, is crude. The ad looks more like something produced by an under-funded health NGO than by a rich tobacco company.
2. While the crumpling of the pack is meant to indicate that the boy is fed up and won’t smoke anymore, the pack appears to be empty. It could just be that he is out of cigarettes.
3. As with the radio scripts and billboard, there is no mention of the health effects of smoking: that it is addictive, that it causes disease in non-smokers, and that it kills.
The focus group participants asked to watch the TV ad a second time, as it wasn’t clear from first viewing what had happened. After the second viewing, a lengthy argument ensued as to what had happened in the ad—who was smoking, who threw the ball to whom, and so on. One participant said, “The ad shows that you shouldn’t smoke during cricket matches.” Another participant complained, “There’s no one famous in the ad. The actors are all unknown, and they have nothing appealing to attract people to the message.” While they found the ad appealing to watch, they did not get an anti-smoking message from it.
**IV. Bangladeshi youth’s awareness of cigarette ads**
Most of the 300 male students interviewed were aged 12-15.

Most (88%) said they had never smoked. Only 3% reported currently smoking at least one cigarette a week, but, as with the focus group participants, they were quite familiar with cigarette ads, especially those for BAT brands.
Almost all the students (96%) said they had seen a cigarette ad on television. Of those who had seen one, 81% reported having seen Gold Leaf, 69% B&H, 25% Navy, and 39% named other brands. Gold Leaf and B&H are both BAT brands, revealing high exposure among the youth to BAT TV ads for cigarettes.
To the question, “Do you think that Benson & Hedges sponsoring rock concerts encourages youth to smoke?”, 55% answered yes and 45% no. Most (71%) thought boys their age like the concerts, 22% said they like them a little, and only 4% said they don’t like them. Ten students (3%) did not respond.
78% of the students said they had seen a B&H rock concert. While most of the students had seen them on TV, 11% said they had seen one live, despite BAT’s claim that it does not allow those under 18 to attend its concerts. Most (61%) said they liked the concerts or liked them a lot, 21% liked them a little, and only 17% disliked them.
As to exposure to B&H ads, 82% said they had seen a newspaper ad, 64% said they had seen a poster, and 50% said they had seen a billboard. In discussions following the questionnaire, many of the students asked what a billboard is; the results would likely have been higher if they had known!
When asked how they like the cigarette ads, 3% said they like them a lot; 28% said they like them; 19% said they like them a little, and 47% said they don’t like them, with 3% not responding.
Approval ratings were higher among the few students who smoked: of the 10 current smokers, 1 liked the ads a lot, 5 liked them, 1 liked them a little, and 3 disliked them. Ratings for ever-smokers were almost the same as for non-smokers. The “dislike” category presumably includes students who might find the ads visually appealing, but think it is wrong for BAT to advertise.
Of those watching TV less than once a week, 83% had seen a B&H concert; 87% of those watching TV 1-6 days a week had; and 77% of those watching TV 1-6 hours daily had seen one. That is, those who watch TV rarely were just as likely as those who watch it frequently to have seen a B&H concert, which implies either great frequency of concert showings, or that students make an effort to see them.
Likelihood of having seen a B&H rock concert also varied little by age, with the majority in all age groups having seen one.
**Appealing cigarette ads, unappealing youth prevention ads**
“The first goal of Benson & Hedges rock concerts is to publicize their company, and indirectly to attract adolescents to smoking.”
—young male student
When BAT and other tobacco companies wish to promote their brands, they use methods that are particularly popular among youth: sponsorship of motorcycle and car racing, rock concerts, sporting events. For example, in India, BAT and India Tobacco Company sponsored cricket through the Wills brand. That sponsorship was found to influence smoking rates and create false perceptions about smoking in Indian school children age 13-16 years. According to the research report, “Despite a high level of knowledge about adverse effects of tobacco, cricket sponsorship increased children’s likelihood of experimenting with tobacco by creating false associations between smoking and sport. Many of the children believed that cricketers smoked.”\(^{18}\)
In Bangladesh, BAT regularly sponsors rock concerts. Rock concerts are most popular among a young audience, including teenagers. Yet in the BAT Bangladesh youth smoking prevention campaign rock stars are not used. Cricket players are not used. No youth idols are used. The characters in the TV ad, on stickers, and on the billboard are a bland-looking group of youth who hold no special appeal to young people. The radio messages reinforce the idea that smoking is an adult activity, even while claiming to belie it. Nowhere in any of the messages are the harmful effects of tobacco mentioned. Words must be measured against action, and in this case, the actions are clear: BAT has no intention of reducing smoking among youth.
\(^{18}\) Cancer Research Campaign and Action on Smoking and Health (London), 2000.
V. Recommendations
- Do not work with BAT on their youth smoking campaign. BAT does not wish to see smoking among youth decline. BAT’s prime concern is to improve its image, and by partnering with them, you will help them in doing so.
- Tobacco control measures should never focus only on youth. Such programs make smoking an adult activity, thus increasing its appeal to youth.
- Question BAT’s claim that smoking is only a problem for youth. Adults who smoke are prone to a range of diseases and health problems, including emphysema, tuberculosis, impotence, reduced fertility, heart disease, stroke, and cancer of the lung, mouth, breast, and many other sites. When adults smoke around others, non-smokers are subjected to the same chemicals that smokers breathe in, and get some of the same diseases. Helping adults to quit reduces health expenditures, raises quality of life, and makes available for productive purposes money that would otherwise go to buy cigarettes. Any attempt to deal with the problem of smoking must address adults as well as youth.
- Support tobacco control measures that have been proven effective. Measures needed to address the many health and economic problems associated with smoking include a complete ban on all forms of tobacco promotion, bans on smoking in public places and workplaces, aids to help people quit, and higher taxes on all tobacco products.
- If you need information about tobacco, including information about what works best to reduce smoking among youth, or if you would like to learn more about how to become involved in tobacco control, please contact the Bangladesh Anti-Tobacco Alliance.
---
\(^{19}\) These recommendations draw heavily on one report: Cancer Research Campaign and Action on Smoking and Health (London), *Danger! PR in the Playground: Tobacco industry initiatives on youth smoking*, 2000. www.ash.org.uk
Bangladesh’s reaction: a case study
When BAT announced its new youth smoking prevention campaign in Bangladesh, Work for a Better Bangladesh (WBB), an active member of the Bangladesh Anti-Tobacco Alliance (BATA), sprang into action. With technical support from PATH Canada and financial support from the Canadian International Development Agency (CIDA), WBB, BATA, and PATH Canada released a joint report and held a press conference critiquing the BAT campaign just over two weeks after its launch. The 1,000 copies of the report went not only to the media, but to NGOs, UN agencies, the government, and entertainers.
The steps of WBB’s reaction:
1. Planning of a response: an informal meeting to discuss what to do, and how quickly a response could be generated. Over the next few days, more specific plans for producing a report and calling a press conference.
2. Gathering of information on industry-sponsored youth prevention campaigns, using particularly the Cancer Research Campaign/ASH (London) report and information from Essential Action’s website.
3. Creation of a research plan: a question guide for focus group research, and a questionnaire for an in-school survey.
4. Writing in English of a draft report.
5. Translation of the draft report into Bengali.
6. Conducting of the focus group research. The key findings are in the report below.
7. Conducting of the school survey. Over 300 in-school youth under age 18 were surveyed on one morning.
8. Data entry and analysis.
9. Addition of research to Bengali report. Report completed, edited, printed.
10. Informal meeting of those who will present and answer questions at the press conference.
11. Holding of press conference, and distribution of the report.
12. Celebration—and planning of next steps.
Appendix: Conducting Research on Youth Campaigns
A. Sample survey
Age: ______
Sex: M F
1. Have you ever smoked? Y N
2. Do you smoke now (once a week or more)? Y N
3. Do you watch TV?
- Less than once a week
- 1-6 times a week
- Daily for ____ hours
4. Have you ever seen cigarette ads on TV? Y N (if yes, which brands?) ________________
5. Have you ever seen a Benson & Hedges concert? Y N
if yes: where did you watch it (which TV station/live concert/other)? ________________
6. What did you think of the Benson & Hedges concert?
- Liked it a lot
- Liked it
- Liked it a little
- Didn’t like it
7. What do you think the purpose of B&H concerts is?
8. Do you think that B&H sponsoring rock concerts encourages kids to smoke? Y N
9. What do people your age think of these concerts?
- Like them a lot
- Like them a little
- Don’t like them
10. Have you ever seen an ad for Benson & Hedges?
Poster: Y N Newspaper ad: Y N Billboard: Y N
11. What do you think of these ads?
- Like them a lot
- Like them
- Like them a little
- Don’t like them
12. What would you think if a tobacco company told you not to smoke?
B. Using this (or any) survey:
1. Adapt it for your local context. For instance, change from B&H to the most advertised cigarette brand(s) of the company doing the youth prevention program. Ask specifically about rock concerts or sponsored sporting events.
2. Test it before use, to make sure the questions are clear.
3. Decide which questions you need and which you don’t.
4. Plan your data analysis in advance—if you will use a computer, set up the form on the computer before you conduct the survey. Make sure you have the time and ability to do the analysis.
5. Only do the survey if you know how it will be useful for you.
6. Don’t tell people the purpose of the survey until AFTER you finish. Then you can explain, and—if you have the facilities to do so—invite the respondents to work with you on responding to the industry campaign.
7. Data analysis
The types of information you can get are:
- how many under age 18 boys and girls (and what percentage) are exposed to cigarette ads;
- the feelings of youth about the ads;
- whether awareness of ads and opinions about them varies by sex;
- whether more smokers (ever or current) are aware of or have positive feelings about cigarette ads than non-smokers;
- whether many youth are exposed to tobacco-sponsored rock concerts (or car racing or other tobacco industry-sponsored events in your locale);
- how aware and approving non-smokers are of cigarette ads (high awareness and approval probably means they are at risk of starting to smoke in future);
- what sorts of comments youth have about a youth prevention campaign.
The results can then be used to strengthen your argument. For instance, if you want to say that many youth are currently exposed to cigarette ads, you can use your results as evidence. Many youth may spontaneously say that if the industry were serious about youth not smoking, they would stop advertising or stop producing cigarettes. Such quotes can be very helpful in your report/reaction.
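The tabulations listed above can be planned in advance as a short script (point 4 of the previous section). The sketch below uses only the Python standard library; the records and the field layout are hypothetical illustrations, not real survey data.

```python
# Hypothetical sketch: tabulating survey answers to questions 1 and 4.
# Each record is (sex, ever_smoked, saw_tv_ad); the sample data is invented.
from collections import Counter

responses = [
    ("M", True, True), ("F", False, True), ("M", True, True),
    ("F", False, False), ("M", False, True), ("F", True, True),
]

def crosstab(rows):
    """Count respondents by (ever_smoked, saw_tv_ad) combination."""
    return Counter((smoked, saw_ad) for _sex, smoked, saw_ad in rows)

counts = crosstab(responses)
n = len(responses)
exposed = sum(v for (_smoked, saw_ad), v in counts.items() if saw_ad)
print(f"{exposed}/{n} respondents ({100 * exposed / n:.0f}%) saw cigarette ads on TV")
print("smokers aware of ads:", counts[(True, True)])
print("non-smokers aware of ads:", counts[(False, True)])
```

The same cross-tabulation, split additionally by sex, gives the comparison between boys and girls mentioned in the list above.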
C. Focus group research
In addition to a questionnaire, you may wish to do qualitative research. It need not be formal focus group research; the point is to get opinions of young people about tobacco advertising and industry-sponsored youth smoking prevention campaigns. A few hints for conducting the research:
- If you have the materials that the tobacco company will be using with youth, show them to the participants. Give them time to look over them thoroughly, then ask what they think of them. Have specific questions at hand, such as:
- What is the message of this ad?
- Do you think this ad is attractive compared to the company’s cigarette ads? (Use specific examples—for example, if the company doing the youth prevention campaign is Philip Morris, then compare the prevention campaign ads to those for Marlboro cigarettes.)
- Do you think people your age will understand this ad? Will they like it?
- What information does the ad provide about smoking? What reasons does the ad give for youth not smoking?
- Does the ad include popular people or other images likely to appeal to youth?
- Do you think young people are likely to smoke less as a result of seeing/hearing this ad?
- As with the survey, when you make your list of questions to ask, think about what information you wish to collect.
- Cover the subjects you are interested in, but avoid leading questions. For instance, rather than asking “Do you think that tobacco companies sponsor rock concerts in order to encourage youth to smoke”, ask, “Why do you think tobacco companies sponsor rock concerts (and not other types of events like classical music concerts)?”
- If the participants in your research react with anger to the tobacco industry-sponsored youth prevention campaign, then involve them in protesting against it!
MINImobro
General Terms and Conditions
1. Entry into this competition is deemed acceptance of these terms and conditions.
2. The promotional period commences at 12:00.01am on Tuesday 1 November 2016 (AEDT) and concludes at 11:59.59pm on Wednesday 30 November 2016 (AEDT) (Movember 2016 Promotional Period).
3. There is one method of entering the MINImobro competition.
4. Entry into this promotion is open to all Australian residents who satisfy the following conditions:
i who are a registered Mo Bro at www.movember.com to grow a moustache to raise funds for the Movember cause during the Movember 2016 Promotional Period;
ii who are aged 22 years or over as at 1 October 2016;
iii whose ordinary place of residence is within a 150km radius of the General Post Office of an Australian State or Territory capital city;
iv who hold a current full unrestricted driving licence issued by a relevant Australian State or Territory, which is not under suspension or cancellation;
v who follow MINI Australia on Instagram;
vi who, during the Movember 2016 Promotional Period, upload a photo or video on Instagram of the entrants Movember moustache growth using the hashtag #MINImobro.
(Eligibility Requirements)
5. Employees, officers, agents, sponsors, suppliers, and advisors of Movember or MINI and their immediate families are not eligible to enter.
6. The Promoter is the Movember Group Pty Ltd as trustee for the Movember Foundation (ABN 48 894 537 905) ("Movember").
7. Mo Bros who satisfy the Eligibility Requirements in clause 4 above can enter this competition as many times as they like during the Movember 2016 Promotional Period.
8. Any entry not complying with these terms and conditions will be deemed invalid and Movember reserves the right to verify the validity of entries and to disqualify any entrant for submitting an entry which is not in accordance with these terms and conditions or who tampers with the entry process.
9. If for any reason the MINImobro competition is not capable of running as planned, including due to infection by computer virus, bugs, tampering, unauthorised intervention, fraud, technical failures, vandalism, power failures, tempests, natural disasters, acts of god or nature, civil unrest, strike or any other causes beyond the control of Movember, which in Movember's opinion potentially or actually corrupt or adversely affect the administration, security, fairness, integrity or proper conduct of this competition, then subject to any compulsory law to the contrary Movember reserves the right to cancel the competition.
10. Movember and its related bodies corporate are not liable for any loss or damage suffered (including but not limited to direct or consequential loss) nor for any personal injury sustained in connection with the competition or the prizes except for any liability which cannot be excluded by law. Entrants participate in the competition at their own risk.
11. All entries become the property of Movember. All information contained in the entries and details of the winner will be held in accordance with the Movember Privacy Policy, which can be accessed by visiting www.au.movember.com.
12. Any request to access, update or correct any information relating to the MINImobro competition should be directed to Movember.
#MINImobro Prize
13. Movember does not take responsibility for any technological issues that prevent a hashtag or image from being processed correctly or that cause a disparity between the time you create the hashtag and the time that it is displayed on Instagram.
14. Any images that Movember or MINI consider, at their absolute discretion, to be offensive or inappropriate will not be considered eligible entries into this competition.
15. The judging of the #MINImobro competition will take place at the premises of Movember on Monday 5 December 2016.
16. The winner will be determined by MINI and Movember at their absolute discretion based on your moustache growth and grooming style during the Movember 2016 Campaign period and the creativity of your entry. The entry determined to be the best eligible and valid entry will win the #MINImobro prize set out below.
17. The winner as determined under clause 16 will win the following prize made available by BMW Group Australia:
i the rental of a MINI ("Vehicle") for a 6-month period commencing from 15 December 2016 (subject to availability) (the Term) for the winner's use subject to these terms and conditions and any other conditions for use of the Vehicle set out herein.
18. The total value of the prize is $3,828 (including GST). All prize values are in Australian dollars.
19. The winner acknowledges that MINI may provide two or more Vehicles during the Term, but not more than one Vehicle at any given time and which may not necessarily be identical, by reason that each Vehicle made available must not be used for a distance exceeding 10,000km.
20. The winner must notify MINI if the Vehicle made available for the Term registers 7,000km or more on its odometer at any time. The winner further agrees and acknowledges that MINI may contact the winner on a periodic basis to obtain the odometer reading of the Vehicle for various purposes, including coordination of servicing, repairs and change over if necessary. Notwithstanding the above MINI reserves the right to recall the Vehicle at any time during the Term in its sole discretion and in the case of such recall, will provide the winner with a replacement vehicle.
21. The Vehicle may only be redeemed by a winner aged 22 years or older holding a current full unrestricted driving licence issued by the relevant Australian State or Territory, which is not under suspension or cancellation and cannot be assigned or transferred to any other person or redeemed for cash or another prize of equivalent value.
22. The winner is responsible for pick up and return of the Vehicle from an authorised BMW/MINI dealership located in either Adelaide, Perth, Brisbane, Gold Coast, Sunshine Coast, Melbourne, Sydney or Newcastle. Any ancillary costs associated with redeeming the Vehicle, including but not limited to, travel to and from the dealership, petrol and any accommodation as required are the responsibility of the winner.
23. The Vehicle shall remain the property of MINI throughout the Term. MINI shall ensure that the Vehicle is registered with an appropriate State or Territory authority.
24. The winner must not sell, assign, pledge, mortgage, lease or encumber the Vehicle at any time.
25. Subject to clause 26 below, MINI shall insure and keep insured the Vehicle during the Term in MINI's name as owner against all loss or damage of any kind including damage by fire, accident or theft. If the winner is involved in an accident, causes damage to the Vehicle or if the Vehicle is damaged or destroyed during the Term, the winner agrees to indemnify MINI for the costs of repairs to or replacement of the Vehicle for the amount determined by MINI at its sole discretion. The decision of MINI in this regard is final.
26. All costs relating to insurance, registration, service and repairs (subject to clause 25 above) of the Vehicle shall be borne by MINI except for the following items:
i The winner shall be responsible for and shall indemnify MINI against (and the insurance specified in clause 25 shall not cover) all losses, damages, costs, expenses and injuries caused to, or by, the Vehicle arising from:
a. Any breach of these Terms and Conditions including Conditions of Use of the Vehicle set out herein; or
b. Any negligence or wilful act or omission of the winner.
27. The winner shall be responsible for any petrol costs, tolls, fines, penalties or any other charges incurred for driving, parking and traffic infringements (including speeding and red light camera fines) whilst the Vehicle is in the custody or control of the winner.
28. MINI may at any time request to inspect or test the Vehicle and to exercise any of its rights and powers under these Terms and Conditions.
29. The winner shall make the Vehicle available for collection by MINI upon reasonable notice by MINI including on expiration of the Term and shall, if necessary, allow MINI to enter upon any premises of the winner to retake possession.
30. If the Vehicle is involved in an accident or damaged in any way, MINI must be immediately notified and an Accident Damage Report in the form required by MINI is required to be completed and forwarded to the MINI Fleet Controller within 48 hours of the damage occurring.
i If the Vehicle is stolen the winner shall promptly notify MINI and provide MINI with all relevant details.
ii Repairs to the Vehicle are not to be undertaken without the written consent of MINI or its appointed assessor.
iii The winner must maintain the Vehicle in excellent condition and keep the Vehicle clean and tidy at all times.
iv Smoking is not permitted in the Vehicle and the winner shall be liable for any damage, rectification or cleaning costs as a result of any breach of this condition.
v MINI may require the maintenance of a log book by the winner for tax or traffic infringement purposes. The winner will be notified by MINI if such a requirement is imposed.
vi The Vehicle is not to be driven by or lent to anyone and the only authorised driver of the Vehicle is the winner. A breach of this condition will result in immediate termination and surrender of the Vehicle. MINI will not be liable for any loss, cost or any claim for damage by the winner as a result of such termination and surrender.
vii The Vehicle may not be claimed by the winner if the award or enjoyment of the Vehicle is not permitted by any relevant laws. The Vehicle may not in any circumstances be redeemed for cash or any other prize of equivalent value. MINI will not be liable for any loss, cost or any claim for damage by the winner as a result of this condition.
viii The Vehicle is not to be driven by anyone under the influence of alcohol in excess of the legal limit or under the influence of any drug, medication or other intoxicating substance.
ix No modifications are to be made to the Vehicle without the prior written approval of MINI.
x The Vehicle is to be serviced (if required) by an authorised MINI Dealer only.
xi The Vehicle must at all times be driven in accordance with the applicable road rules.
xii The Vehicle shall not be used:
a. for any race, contest or illegal purpose;
b. to convey passengers or goods for hire;
c. to convey any load more than what the Vehicle was constructed for; or
d. for any purpose other than the purpose for which the Vehicle was built.
Notification and second chance draw
31. The winner of the #MINImobro competition will be notified by telephone on Friday 9 December 2016. This will be confirmed in writing by mail or email and the #MINImobro prize will be made available for collection by the winner from Thursday 15 December 2016 subject to stock availability. The name and suburb of the #MINImobro winner will be published on the website www.au.movember.com and in The Australian newspaper on Thursday 15 December 2016.
32. Movember's decision as to the selection of the #MINImobro winner is final and not subject to negotiation. The #MINImobro prize can only be claimed by verbal or written acceptance of the winner.
33. Subject to these Terms and Conditions, if a prize is not claimed or taken/redeemed by a winner by Monday 9 January 2017, the prize will be forfeited.
34. If any prize is unavailable, MINI reserves the right to substitute the prize with a prize of equal or greater value and/or specification.
35. If the #MINImobro prize is not claimed in accordance with clause 33 above, a second round of judging will take place at the premises of Movember at 11:00:00am on Tuesday 10 January 2017 (AEDT). The second round winner (if it occurs) will be notified by Wednesday 11 January 2017 in the same manner as set out in clause 31 above. The name and suburb of the #MINImobro winner will be published on the website www.au.movember.com and in The Australian newspaper on Wednesday 18 January 2017.
S. Ivashkovich
EXTRA EXTENSION PROPERTIES OF EQUIDIMENSIONAL HOLOMORPHIC MAPPINGS: RESULTS AND OPEN QUESTIONS
S. Ivashkovich. Extra extension properties of equidimensional holomorphic mappings: results and open questions, Matematychni Studii, 30 (2008) 198–213.
Holomorphic (nondegenerate) mappings between complex manifolds of the same dimension are of special interest. For example, they appear as coverings of complex manifolds. At the same time, they have very strong "extra" extension properties compared with mappings between manifolds of different dimensions. The aim of this paper is to put together the known results on this subject, prove some new ones and formulate open questions in order to give a perspective for future progress.
S. Ivashkovich. Extension properties of equidimensional holomorphic mappings: results and open problems // Matematychni Studii. – 2008. – V.30, No.2. – P.198–213.
Holomorphic (nondegenerate) mappings between complex manifolds of the same dimension are of special interest. For example, they arise as coverings of complex manifolds. At the same time, they possess stronger extension properties compared with mappings between manifolds of different dimensions. The paper surveys the known results in this direction, proves new ones, and formulates open problems with the aim of outlining prospects for further progress.
1. Introduction.
1.1. Results. Recall that a domain \((D, \pi)\) over a complex manifold \(M\) is called locally pseudoconvex (locally Stein) if for every point \(a \in \pi(D)\) there exists a neighborhood \(V \ni a\) such that all connected components of \(\pi^{-1}(V)\) are Stein. Our first result is the following:
**Theorem 1.** Let \(X\) be a locally homogeneous complex manifold and let \(f : \hat{D} \to X\) be a meromorphic mapping from a locally pseudoconvex domain \((\hat{D}, \pi)\) over a complex manifold \(M\). Suppose that \(f\) is locally biholomorphic outside of its indeterminacy set. Then \(f\) is holomorphic (and therefore locally biholomorphic) everywhere.
In particular, if \(f\) is a locally biholomorphic mapping from a domain \((D, \pi)\) over a Stein manifold \(M\) to a compact locally homogeneous Kähler manifold then \(f\) extends locally biholomorphically to the envelope of holomorphy \(\hat{D}\) of \(D\).
The proof is given in Theorem 6 and in Remark 8 after the proof of Theorem 6 in Section 2.
2000 Mathematics Subject Classification: 32D15.
Remark 1. The compactness condition in this theorem, as in almost all results of this paper, can be relaxed to disk-convexity. See more about this in Section 6.
Remark 2. For more results of this type, see Corollary 4, Corollary 5, Corollaries 6, 7, 8 and Proposition 1.
Now we formulate a result of another type.
Theorem 2. Let $p: X \to Y$ be a holomorphic fibration over a compact complex manifold with compact Kähler fibers. Let $S \subset Y$ be a closed subset such that $Y \setminus S$ is Stein. Then any meromorphic section of this fibration, defined in a neighborhood of $S$, extends to a meromorphic section over the whole of $Y$.
Remark 3. There is no assumption on how the Kähler metrics on the fibers depend on the point of the base. Of course, the total space $X$ need not be Kähler, or even locally Kähler.
The proof which is based on results of extension of meromorphic mappings into non-Kähler manifolds is given in Section 4. More results are proved in Theorems 9 and 10.
1.2. Open questions. Special emphasis in this paper is placed on open questions. This is done in order to present the author's vision of future developments in the area, and in the hope of attracting new people to the subject.
Every section ends with a few open questions relevant to this section, and the last Section 7 is entirely devoted to questions which concern the paper in general.
Remark 4. 1. Mappings which decrease the dimension were studied in [31] and [5].
2. Mappings which increase the dimension do not have any extension properties in general, see the example in [23]. What one can expect at best is explained in Section 3.
3. We do not discuss here the extensively studied topic of locally biholomorphic (or proper) mappings between domains in $\mathbb{C}^n$ and refer the reader to the recent paper [27] for the present state of the art in this subject.
2. Coverings of Kähler manifolds.
2.1. General Kähler manifolds. We start with the relatively well-understood case of Kähler manifolds. This case includes Stein manifolds, projective and quasiprojective manifolds. Results quoted in this section are directly applicable also to manifolds of class $\mathcal{C}$, i.e. those bimeromorphic to Kähler ones.
In [16] the following theorem was proved.
Theorem 3. Let $X$ be a compact Kähler manifold. Then the following conditions on $X$ are equivalent:
i) for any domain $D$ in a Stein manifold $M$, any holomorphic mapping $f: D \to X$ extends to a holomorphic mapping $\hat{f}: \hat{D} \to X$ from the envelope of holomorphy $\hat{D}$ of $D$ into $X$.
ii) $X$ does not contain rational curves, i.e., images of the Riemann sphere $\mathbb{C}\mathbb{P}^1$ under non-constant holomorphic mappings $\mathbb{C}\mathbb{P}^1 \to X$.
Remark 5. The condition on $X$ to be compact is too restrictive. It can be replaced with disk-convexity, see Section 6.
As a consequence of this theorem we obtain a positive solution of the conjecture of Carlson and Harvey, see [4]:
Corollary 1. Let $D$ be a domain in a Stein manifold $M$ and let $\Gamma$ be a subgroup of the group of holomorphic automorphisms of $D$ acting on $D$ properly discontinuously without fixed points. If $D/\Gamma = X$ is compact and Kähler then $D$ is Stein itself.
Indeed, such $X$ cannot contain rational curves and therefore the covering map extends from $D$ to its envelope of holomorphy $\hat{D}$, which is Stein. Therefore, the only possibility is $\hat{D} = D$, and so $D$ is Stein itself. This can be viewed as a certain generalization of the Siegel theorem [30]. In the Siegel theorem, $D$ is assumed to be a bounded domain in $M = \mathbb{C}^n$ (as a result, in this case $X$ is, moreover, projective). If $D$ is not necessarily bounded then $X$ may not be algebraic (an example: a non-algebraic torus as a quotient of $\mathbb{C}^n$ by a lattice).
Recall the following definition.
Definition 1. A meromorphic mapping $f$ from a complex manifold $D$ to a complex manifold $X$ is an irreducible, locally irreducible analytic set $\Gamma_f \subset D \times X$ (graph of $f$) such that the natural projection $\pi_D : \Gamma_f \to D$ is proper and generically one-to-one.
In that case there exists an analytic subset $I \subset D$ of codimension, at least, two such that $\Gamma_f \cap \left[ (D \setminus I) \times X \right]$ is a graph of a holomorphic mapping (still denoted as $f$). This can be taken as a definition of a meromorphic mapping. The minimal $I$ satisfying this property is called the indeterminacy set of $f$.
In [17] the following conjecture of Griffiths, see [10], was proved.
Theorem 4. For any domain $D$ in a Stein manifold any meromorphic mapping from $D$ into a compact Kähler manifold $X$ extends to a meromorphic map from the envelope of holomorphy $\hat{D}$ of $D$ into $X$.
Remark 6. Again, as in Theorem 3 one needs only disk-convexity from $X$ for a result to be still true.
In the same way as above one obtains the following.
Corollary 2. Let $D$ be a domain in a complex manifold $M$ (not necessarily Stein!) and $\Gamma$ be a subgroup of the group of holomorphic automorphisms of $D$ acting on $D$ properly discontinuously without fixed points.
(a) If $D/\Gamma$ is compact and Kähler then $D$ is locally pseudoconvex.
(b) If $D/\Gamma = X'$ is Zariski open in a compact Kähler $X$ then $D$ itself is Zariski open in a pseudoconvex domain $\hat{D}$ of $M$.
For $D \subset \mathbb{C}^n$, (b) is a theorem of Mok-Wong, [26].
2.2. Homogeneous Kähler manifolds.
Definition 2. A complex manifold $X$ is called infinitesimally homogeneous if the global sections of its tangent bundle generate the tangent space at each point.
One can prove, see Proposition 1.3 from [14], that for some natural $N$ there exists a surjective morphism of holomorphic bundles $\sigma : X \times \mathbb{C}^N \to TX$. This property can be taken as a definition of an infinitesimally homogeneous manifold. All parallelizable manifolds are infinitesimally homogeneous, as are all Stein manifolds and all complex homogeneous spaces under a real Lie group. Every Riemann domain $(D, \pi)$ over an infinitesimally homogeneous manifold is infinitesimally homogeneous itself.
Using the morphism $\sigma$ and some Riemannian metric on $X$ one can define, as in [14], a boundary distance function $d_D$ on $D$. Roughly speaking, $d_D(z)$ for $z \in D$ is the supremum of the radii of balls $B$ centered at $\pi(z)$ such that $\pi$ is injective over $B$. The principal result we need from [14] is contained in Theorem 3 of that paper. It can be stated as follows.
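In symbols, the description above suggests the following formula for the boundary distance (a sketch only; the precise definition in [14] is formulated via the morphism $\sigma$ and the chosen Riemannian metric):

```latex
% Sketch of the boundary distance function described above; the exact
% definition in [14] uses the morphism \sigma rather than plain injectivity.
d_D(z) \;=\; \sup\bigl\{\, r > 0 \;:\; \pi \text{ is injective over the ball } B(\pi(z), r) \,\bigr\},
\qquad z \in D .
```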
**Theorem 5.** If $(D, \pi)$ is a locally pseudoconvex domain with finite fibers over an infinitesimally homogeneous complex manifold then the function $-\log d_D(z)$ is plurisubharmonic.
Note that no further assumptions on $X$ (like compactness or Kählerness) are needed.
We shall also need the Hironaka Resolution of Singularities Theorem. We shall use the so-called embedded resolution of singularities, see [12], [2]. Let us recall the notion of a sequence of blowings up over a complex manifold $D$. Consider a smooth closed submanifold $l_0 \subset D_0 := D$ of codimension, at least, two. Denote by $\pi_1: D_1 \longrightarrow D_0$ the blowing up of $D_0$ along $l_0$. Call this a blowing up of $D_0$ along the closed center $l_0$. The exceptional divisor $\pi_1^{-1}(l_0)$ of this blowing up we denote by $E_1$.
We can repeat this procedure, taking a smooth closed submanifold $l_1 \subset E_1$ of codimension, at least, two in $D_1$ and produce $D_2$ and so on.
**Definition 3.** A finite sequence $\{\pi^j\}_{j=1}^N$ of such blowings up we call a sequence of blowings up over $l_0 \in D$, or a regular modification over $l_0$.
By $\{l_j\}_{j=0}^{N-1}$ we denote the corresponding centers and by $\{E_j\}_{j=1}^N$ the exceptional divisors. We put $\pi = \pi_1 \circ \ldots \circ \pi_N$; $E$ denotes the exceptional divisor of $\pi$, i.e. $E = \pi_N^{-1}(l_{N-1}) \cup \ldots \cup (\pi_1 \circ \ldots \circ \pi_N)^{-1}(l_0)$.
Let $f: D \rightarrow X$ be a meromorphic mapping into a manifold $X$. Denote by $I$ the set of points of indeterminacy of $f$, i.e. $f$ is holomorphic on $D \setminus I$ and for every point $a \in I$, $f$ is not holomorphic in any neighborhood of $a$.
**Theorem.** Let $f: D \rightarrow X$ be a meromorphic map between complex manifolds $D$ and $X$. Then there exists a regular modification $\pi: D_N \rightarrow D$ such that $f \circ \pi: D_N \rightarrow X$ is holomorphic.
See [12]. For the proof we refer also to [2]. Now we shall prove the following:
**Theorem 6.** Let $X$ be a compact infinitesimally homogeneous Kähler manifold. Then every locally biholomorphic mapping $f: D \rightarrow X$ from a domain $D$ over a Stein manifold into $X$ extends to a locally biholomorphic mapping $\hat{f}: \hat{D} \rightarrow X$ of the envelope of holomorphy $\hat{D}$ of $D$ into $X$.
**Proof.** Let $\hat{f}: \hat{D} \rightarrow X$ be the meromorphic extension of $f$. Denote by $I$ the set of points of indeterminacy of $\hat{f}$. Then $\hat{f}|_{\hat{D} \setminus I}$ is locally biholomorphic and we can consider the pair $(\hat{D} \setminus I, \hat{f}|_{\hat{D} \setminus I})$ as a Riemann domain over $X$.
This domain may not be locally pseudoconvex only at points of $I$. But then its domain of existence $\tilde{D}$ over $X$ contains some part of the exceptional divisor $E$ of the desingularization of $\hat{f}$. The union of this part of $E$ with $\hat{D} \setminus I$ is actually $\tilde{D}$ and the extension of $\hat{f}|_{\hat{D} \setminus I}$ to $\tilde{D}$ we denote as $\tilde{f}$. We consider $(\tilde{D}, \tilde{f})$ as a (locally pseudoconvex) Riemann domain over $X$.
Suppose $E \setminus \tilde{D}$ is not empty. Then it is easy to construct a sequence of analytic discs $\Delta_k$ in $\hat{D} \setminus I$, and then in $\tilde{D}$, such that the boundaries of $\Delta_k$ stay in a compact part of $\tilde{D}$, but $\Delta_k$ converge to a disc plus some number of rational curves on $E \setminus \tilde{D}$. But this is clearly forbidden by the plurisubharmonicity of $-\log d_{\tilde{D}}$.
Therefore, \((\tilde{D}, \tilde{f})\) as a domain over \(X\) coincides with \((\hat{D}_N, \hat{f}_N)\), the desingularization of \(\hat{f}\). But then \(-\log d_{\tilde{D}}\) should be constant on all fibers of our modification, because we can take as \(\tilde{D}\) any locally pseudoconvex neighborhood of these fibers. This is impossible unless these fibers are points. That means that \(\hat{f}_N = \hat{f}\) and therefore, \(\hat{f}\) is holomorphic on \(\hat{D}\).
**Remark 7.** Theorem 4, Corollary 2 and Theorem 6 hold obviously true for manifolds of class \(\mathcal{C}\), i.e. for manifolds that are bimeromorphic to compact Kähler manifolds.
**Remark 8.** It is clear from the proof of Theorem 6 that the condition on \(X\) to be compact can be removed. In fact, disk-convexity is sufficient, see Section 6. Kählerness of \(X\) was used also only once when we extended \(f\) to the envelope of meromorphy. Therefore, Theorem 1 from the Introduction is also proved.
### 2.3. Open questions
Let \(X\) be a compact Kähler surface and let \(f: B^* \to X\) be a locally biholomorphic mapping of the punctured ball \(B^* = B \setminus \{0\}\) into \(X\). Then \(f\) extends meromorphically to the whole ball \(B\). The full image of the origin under the extension \(\hat{f}\) we denote by \(E := \hat{f}[0]\).
**Question 1.** Is it true that \(E\) is an exceptional curve in \(X\)?
**Question 2.** What can be said about \(\hat{f}[I]\) in the conditions of Theorem 6?
### 3. Mappings into non-Kähler manifolds.
#### 3.1. The strategy
We start with the following remark. Let \(X\) be a compact complex manifold. Then, due to a result of Gauduchon, see [7], \(X\) admits a Hermitian metric \(h\) such that its associated form \(\omega_h\) satisfies \(dd^c \omega_h^{k} = 0\), where \(k+1\) is the complex dimension of \(X\).
In fact, we shall need a property which is easier to prove:
*Every compact complex manifold of dimension \(k+1\) carries a strictly positive \((k,k)\)-form \(\Omega^{k,k}\) with \(dd^c \Omega^{k,k} = 0\).*
Indeed: either a compact complex manifold carries a \(dd^c\)-closed strictly positive \((k,k)\)-form, or it carries a bidimension-\((k+1,k+1)\) current \(T\) with \(dd^c T \geq 0\) but \(\neq 0\). In the case \(\dim X = k+1\) such a current is nothing but a nonconstant plurisubharmonic function, which does not exist on a compact \(X\).
Let us introduce the class \(\mathcal{G}_k\) of normal complex spaces carrying a \(dd^c\)-closed strictly positive \((k,k)\)-form. Note that the sequence \(\{\mathcal{G}_k\}\) is rather exhaustive: \(\mathcal{G}_k\) contains all compact complex manifolds of dimension \(k+1\).
Introduce furthermore the class \(\mathcal{P}_k^-\) of normal complex spaces which carry a strictly positive \((k,k)\)-form \(\Omega^{k,k}\) with \(dd^c \Omega^{k,k} \leq 0\). Note that \(\mathcal{P}_k^- \supset \mathcal{G}_k\). But the Hopf three-fold \(X^3 = (\mathbb{C}^3 \setminus \{0\})/(z \sim 2z)\) belongs to \(\mathcal{P}_1^-\) and not to \(\mathcal{G}_1\), see the remark below.
Consider the Hartogs figure
\[
H_n^k(r) := \left[ \Delta^n(1-r) \times \Delta^k \right] \cup \left[ \Delta^n \times A^k(r,1) \right] \subset \mathbb{C}^{n+k}. \tag{1}
\]
Here \(\Delta^n(r)\) stands for the \(n\)-dimensional polydisk of radius \(r\) and \(A^k(r,1) = \Delta^k(1) \setminus \overline{\Delta}^k(r)\) for the \(k\)-dimensional annulus (or shell). In (1) one should think about \(0 < r < 1\) as being very close to 1.
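For orientation, recall the classical Hartogs phenomenon for functions (a well-known fact, stated here only for context): every function holomorphic on the Hartogs figure extends to the full polydisk, i.e.

```latex
% Classical Hartogs-type extension for functions (k >= 1):
% the envelope of holomorphy of the Hartogs figure is the full polydisk.
\widehat{H_n^k(r)} \;=\; \Delta^{n+k},
\qquad\text{so every } f \in \mathcal{O}\bigl(H_n^k(r)\bigr)
\text{ extends to } \Delta^{n+k}.
```

The conjecture below asks how much of this extension survives for meromorphic mappings into non-Kähler targets.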
**General conjecture.** Meromorphic mappings from \(H_n^k(r)\) to compact (disk-convex) manifolds of class \(\mathcal{P}_k^-\) should extend to \(\Delta^{n+k} \setminus A\) where \(A\) is of Hausdorff \((2n-1)\)-dimensional
measure zero. If the image manifold is from class \( \mathcal{G}_k \) then \( A \neq \emptyset \) should imply very restrictive conditions on the topology and complex structure of \( X \) (see results below).
3.2. **Mappings into manifolds of class \( \mathcal{G}_1 \).** Let \( A \) be a subset of \( \Delta^{n+1} \) of Hausdorff \((2n - 1)\)-dimensional measure zero. Take a point \( a \in A \) and a complex two-dimensional plane \( P \ni a \) such that \( P \cap A \) is of zero length. A sphere \( S^3 = \{ x \in P : \|x - a\| = \varepsilon \} \) with \( \varepsilon \) small will be called a "transversal sphere" if, in addition, \( S^3 \cap A = \emptyset \).
**Theorem 7.** Let \( f : H_n^1(r) \to X \) be a meromorphic map into a compact complex manifold \( X \), which admits a Hermitian metric \( h \), such that the associated \((1, 1)\)-form \( \omega_h \) is \( dd^c \)-closed (i.e. \( X \in \mathcal{G}_1 \)). Then \( f \) extends to a meromorphic map \( \tilde{f} : \Delta^{n+1} \setminus A \to X \), where \( A \) is a complete \((n - 1)\)-polar, closed subset of \( \Delta^{n+1} \) of Hausdorff \((2n - 1)\)-dimensional measure zero. Moreover, if \( A \) is the minimal closed subset such that \( f \) extends to \( \Delta^{n+1} \setminus A \) and \( A \neq \emptyset \), then for every transversal sphere \( S^3 \subset \Delta^{n+1} \setminus A \) its image \( f(S^3) \) is not homologous to zero in \( X \).
**Remark 9.**
1. A (two-dimensional) spherical shell in a complex manifold \( X \) is the image \( \Sigma \) of the standard sphere \( S^3 \subset \mathbb{C}^2 \) under a holomorphic map of some neighborhood of \( S^3 \) into \( X \) such that \( \Sigma \) is not homologous to zero in \( X \). Theorem 7 states that if the singularity set \( A \) of our map \( f \) is non-empty then \( X \) contains spherical shells.
A good example to think about is a Hopf surface \( H^2 = (\mathbb{C}^2 \setminus \{0\})/(z \sim 2z) \) with the pluriclosed metric form \( \omega = \frac{i}{2}\,\frac{dz_1 \wedge d\overline{z}_1 + dz_2 \wedge d\overline{z}_2}{\|z\|^2} \).
2. Consider now a Hopf three-fold \( H^3 = (\mathbb{C}^3 \setminus \{0\})/(z \sim 2z) \). The analogous metric form \( \omega = \frac{i}{2}\,\frac{dz_1 \wedge d\overline{z}_1 + dz_2 \wedge d\overline{z}_2 + dz_3 \wedge d\overline{z}_3}{\|z\|^2} \) is no longer pluriclosed but only plurinegative (i.e. \( dd^c \omega \leq 0 \)). Moreover, if we consider \( \omega \) as a bidimension \((2, 2)\) current, then it provides a natural obstruction to the existence of a pluriclosed metric form on \( H^3 \). That means that \( H^3 \in \mathcal{P}_1^- \setminus \mathcal{G}_1 \). The natural projection \( f : \mathbb{C}^3 \setminus \{0\} \to H^3 \) has a singularity of codimension three, and \( H^3 \) contains no spherical shells of dimension two (but contains a spherical shell of dimension three). [19] also contains an extension theorem for mappings into manifolds of the class \( \mathcal{P}_1^- \).
Later on in this paper we shall need one corollary of Theorem 7. A real two-form \( \omega \) on a complex manifold \( X \) is said to "tame" the complex structure \( J \) if for any non-zero tangent vector \( v \in TX \) we have \( \omega(v, Jv) > 0 \). This is equivalent to the property that the \((1, 1)\)-component \( \omega^{1,1} \) of \( \omega \) is strictly positive. Complex manifolds admitting a closed form which tames the complex structure are of special interest. The class of such manifolds contains all Kähler manifolds. On the other hand, the \((1,1)\)-components of such metric forms are \( dd^c \)-closed. Indeed, if \( \omega = \omega^{2,0} + \omega^{1,1} + \overline{\omega}^{2,0} \) and \( d\omega = 0 \), then \( \partial \omega^{1,1} = -\overline{\partial} \omega^{2,0} \). Therefore, \( dd^c \omega^{1,1} = 2i \partial \overline{\partial} \omega^{1,1} = 0 \). So, Theorem 7 applies to meromorphic mappings into such manifolds. In fact, the technique of the proof gives more.
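For the record, the bidegree bookkeeping behind the last computation can be displayed explicitly (a routine check, using only \( d = \partial + \overline{\partial} \) and the decomposition of \( d\omega = 0 \) into pure types):
\[
0 = d\omega = \underbrace{\partial \omega^{2,0}}_{(3,0)} + \underbrace{\overline{\partial} \omega^{2,0} + \partial \omega^{1,1}}_{(2,1)} + \underbrace{\partial \overline{\omega}^{2,0} + \overline{\partial} \omega^{1,1}}_{(1,2)} + \underbrace{\overline{\partial}\, \overline{\omega}^{2,0}}_{(0,3)}.
\]
Each graded piece vanishes separately, so \( \partial \omega^{1,1} = -\overline{\partial} \omega^{2,0} \), and applying \( \overline{\partial} \) gives
\[
\overline{\partial} \partial \omega^{1,1} = -\overline{\partial}\, \overline{\partial} \omega^{2,0} = 0, \qquad \text{hence} \qquad dd^c \omega^{1,1} = 2i \partial \overline{\partial} \omega^{1,1} = 0.
\]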
**Corollary 3.** Suppose that a compact complex manifold \( X \) admits a strictly positive \((1, 1)\)-form, which is the \((1, 1)\)-component of a closed form. Then every meromorphic map \( f : H_n^1(r) \to X \) extends to \( \Delta^{n+1} \).
**Remark 10.**
1. In particular, all the results of Section 2 remain valid for such manifolds.
2. Theorem 7 stays valid for meromorphic mappings from \( H_n^k(r) \) for all \( k \geq 1 \). But it should be noted that, in general, the extendibility of meromorphic mappings into some complex manifold $X$ from $H^k_n(r)$ does not imply the extendibility of meromorphic mappings into this $X$ either from $H^k_{n+1}(r)$ or from $H^{k+1}_n(r)$ (for holomorphic mappings this implication does hold); see the example in [21].
3.3. Class $\mathcal{G}_2$ and dimension 3. The following result was proved in [22].
**Theorem 8.** Let $X$ be a compact complex space of dimension 3 (more generally one can suppose that $X$ is of any dimension but carries a positive $dd^c$-closed $(2,2)$-form). Then every meromorphic map $f: H^2_1(r) \to X$ extends meromorphically to $\Delta^3 \setminus A$, where $A$ is a zero-dimensional complete pluripolar set. If $A$ is non-empty then for every ball $B$ with center $a \in A$ such that $\partial B \cap A = \emptyset$, $f(\partial B)$ is not homologous to zero in $X$, i.e. $f(\partial B)$ is a spherical shell (of dimension 3) in $X$.
**Remark 11.** A spherical shell of dimension $k$ in a complex manifold (or space) $X$ is the image $\Sigma$ of the unit sphere $\mathbb{S}^{2k-1} \subset \mathbb{C}^k$ under a meromorphic map $h$ from a neighborhood of $\mathbb{S}^{2k-1}$ into $X$ such that $\Sigma = h(\mathbb{S}^{2k-1})$ is not homologous to zero in $X$.
Results of this type have interesting applications to coverings of compact complex manifolds, as we shall see in the next sections. From this theorem it follows immediately that if the covering manifold $\tilde{V}$ of a 3-dimensional manifold $V$ is itself a subdomain of some compact complex manifold $Y$, then the boundary of $\tilde{V}$ cannot have concave points.
Let us give one more precise statement. Recall that a complex manifold is called affine if it admits an atlas with affine transition functions. In that case its universal covering is a domain over $\mathbb{C}^n$.
**Corollary 4.** Let $V$ be a compact affine 3-fold and let $(\tilde{V}, \pi)$ be its universal covering considered as a domain over $\mathbb{C}^3$ with a locally biholomorphic projection $\pi$. Then if $(\tilde{V}, \pi)$ is pseudoconcave at some boundary point then $V$ contains a spherical shell (of dimension 3).
Indeed, by Theorem 8, the covering map $p: \tilde{V} \to V$ can be extended to a neighborhood of a pseudoconcave boundary point, say $a$, minus a zero-dimensional set $A$. But this cannot happen unless $\tilde{V} \cup A$ is a neighborhood of $a$. Therefore, spheres around $a$ project to shells in $V$ by Theorem 8.
**Remark 12.** Of course, an analogous result can be formulated for affine surfaces: either the universal cover of an affine surface $V$ is Stein or $V$ contains a spherical shell (of dimension two).
3.4. Open questions.
**Question 3.** We conjecture that the analogous result should hold for meromorphic mappings in all dimensions, i.e. from $H^k_n(r)$ to compact manifolds (and spaces) in the classes $\mathcal{P}_k^-$ and $\mathcal{G}_k$. In particular, Theorem 8 should be true for meromorphic mappings between equidimensional manifolds in all dimensions.
The main difficulty lies in the fact that it is impossible in general to make the reductions (a)–(c) of §1 from [22]. (Note that reductions (d)–(e) can be achieved in all dimensions.)
**Question 4.** One can start proving the general conjecture (as in question 3) by considering extension from $H^2_2(r)$ to a manifold of class $\mathcal{G}_2$.
**Question 5.** An analog of Corollary 4 in all dimensions seems to be an easier problem than Question 3 in its full generality. Can one prove it?
It would be instructive to consult the paper [3] in this regard.
It is likely that one can say more about the singularity set $A$ of the extended mapping in Theorems 7 and 8.
**Question 6.** Let $X$ be a compact complex manifold carrying a pluriclosed metric form and let $f: \Delta^3 \setminus S \to X$ be a meromorphic mapping. Suppose that $A$ is a minimal closed subset of $\Delta^3$ such that $f$ extends to $\Delta^3 \setminus A$ and that the Hausdorff 4-dimensional measure of $A$ is zero. Prove that each connected component of $A$ is a complex curve.
For general $X$ without special metrics the answer could be negative, see examples in the last section of [19].
4. Application to Kähler fibrations.
4.1. Extension of meromorphic sections. We start with the proof of Theorem 2 from Introduction, which answers a question posed to the author by T. Ohsawa.
Proof. Step 1. Every point $y \in Y$ has a neighborhood $U$ such that $X_U = p^{-1}(U)$, the union of the fibers over $U$, possesses a Hermitian metric whose associated form $\omega_U$ is the (1,1)-component of a closed form.
To see this, take a coordinate neighborhood $U \ni y$ such that $X_U$ is diffeomorphic to $U \times X_y$. Let $p_1: U \times X_y \to X_y$ be the projection onto the second factor. Let $\omega_y$ be a Kähler form on $X_y$. Consider the following 2-form on $X_U$: $\omega_U = p^*dd^c|z|^2 + p_1^*\omega_y$, where $z$ is the vector of local coordinates on $U$. The form $\omega_U$ is $d$-closed, and its (1,1)-component is positive for $U$ small enough, since $\omega_y$ is positive on $X_y$.
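In slightly more detail (a sketch, under the identification $X_U \cong U \times X_y$ used above): since $p_1$ is only smooth, $p_1^*\omega_y$ need not be of pure type, and the component in question is
\[
\omega_U^{1,1} = p^* dd^c |z|^2 + \big( p_1^* \omega_y \big)^{1,1}.
\]
The first summand is non-negative, and on the fiber $X_y$ the second summand restricts to $\omega_y > 0$; by compactness of $X_y$ this positivity persists on nearby fibers, so $\omega_U^{1,1} > 0$ on $X_U$ after shrinking $U$.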
Let $\rho$ be a strictly plurisubharmonic Morse exhaustion function on the Stein manifold $W := Y \setminus S$. Set $W_t = \{y \in W: \rho(y) > t\}$. Suppose we are given a meromorphic section $v$ in a neighborhood of $S$. Then $v$ is defined on some $W_t$. The set $T$ of those $t$ such that $v$ extends meromorphically to $W_t$ is non-empty and closed.
Step 2. $T$ is open. Let $t \in T$; then $v$ is well defined and meromorphic on $W_t$. Set $S_t = \{y \in W: \rho(y) = t\}$. Fix a point $y_0 \in S_t$. Take a neighborhood $U$ of $y_0$ and the form $\omega_U$ as in Step 1. If $y_0$ is a regular point of $S_t$ then there exists a Hartogs figure $H \subset W_t$ such that the corresponding polydisk $D$ contains $y_0$. By Corollary 3, the meromorphic mapping $v: H \to D \times X_{y_0}$ can be meromorphically extended to $D$ and we are done.
If $y_0$ is a critical point of $S_t$ then we use the result of Eliashberg, see also Lemma 2.1 from [11]. By this description of critical points of strictly plurisubharmonic Morse functions, we can suppose that $y_0$ lies in a totally real disk $B$ in $D$ and $W_t \supset D \setminus B$. But then the argument remains the same, because in this case we can also find an appropriate Hartogs figure in $W_t$ with the corresponding polydisk containing $y_0$.
Therefore, $T$ is open, closed and non-empty; hence $v$ extends to $W_t$ for all $t$, i.e. to the whole of $W$, and the theorem is proved. □
Remark 13. Note that in dealing with Kähler fibrations we were forced to use Corollary 3, which concerns the non-Kähler situation.
4.2. Non-Kähler deformations of Kähler manifolds. Recall that a complex deformation of a compact complex manifold $X$ is a complex manifold $\mathcal{X}$ together with a proper surjective holomorphic map $\pi: \mathcal{X} \to \Delta$ of rank one with connected fibers and such that the
fiber $X_0$ over zero is biholomorphic to $X$. From [13] one knows that if $X_0$ is Kähler then this does not imply that the neighboring fibers $X_t$ are Kähler. But Step 1 in the proof of Theorem 2 tells us that for $t \sim 0$ the fiber $X_t$ admits a Hermitian metric such that its associated form is a $(1, 1)$-component of a closed form. Therefore, Corollary 3 can be applied to $X_t$.
Let us give a formal statement. We say that a complex manifold $X$ possesses a meromorphic extension property if for every domain $D$ in a Stein manifold every meromorphic mapping $f : D \to X$ meromorphically extends to the envelope of holomorphy of $D$.
**Corollary 5.** Let $X_t$ be a complex deformation of a compact Kähler manifold $X_0$. Then for $t \sim 0$, $X_t$ possesses the meromorphic extension property.
### 4.3. Open questions.
**Question 7.** Suppose all $X_t$ for $t \neq 0$ possess the meromorphic extension property (as, for example, Kähler manifolds do). Does $X_0$ possess it as well?
**Question 8.** If $X_0$ possesses the meromorphic extension property, does $X_t$ possess it for $t$ close to zero?
### 5. Coverings of non-Kähler manifolds.
#### 5.1. General facts.
To stay within reasonable generality, we shall restrict ourselves here to subdomains of $\mathbb{C}\mathbb{P}^n$ covering compact complex manifolds (this also includes subdomains of $\mathbb{C}^n \subset \mathbb{C}\mathbb{P}^n$). However, many statements have an obvious meaning (reformulation) in the case of domains in general complex manifolds.
Locally pseudoconvex domains in (and over) both $\mathbb{C}^n$ and $\mathbb{C}\mathbb{P}^n$ are Stein (with one exception, $\mathbb{C}\mathbb{P}^n$ itself), see [29, 32]. They can cover both Kähler and non-Kähler manifolds. But Theorem 4 implies the following statement.
**Corollary 6.** If a subdomain $D \subset \mathbb{C}\mathbb{P}^n$ covers a compact Kähler manifold $V$ then $D$ is Stein, unless $V = \mathbb{C}\mathbb{P}^n$.
An example of a Stein domain covering a non-Kähler compact manifold is provided by any Inoue surface with $b_2 = 0$: its universal covering is $\mathbb{C} \times H$, where $H$ is the upper half-plane in $\mathbb{C}$.
#### 5.2. Coverings by domains from $\mathbb{C}\mathbb{P}^2$.
Since every compact complex surface admits a $dd^c$-closed metric form, Theorem 7 yields the following assertion.
**Corollary 7.** If a subdomain $D \subset \mathbb{C}\mathbb{P}^2$ covers a compact complex surface $X$, then either $D$ is Stein, or $D = \mathbb{C}\mathbb{P}^2$, or $X$ contains a spherical shell.
In $\mathbb{C}\mathbb{P}^3$ we have an analogous corollary from Theorem 8. Recall that a domain $D \subset \mathbb{C}\mathbb{P}^n$ is $q$-convex if it admits an exhaustion function such that its Levi form has, at least, $n - q + 1$ strictly positive eigenvalues at each point outside of some compact subset of $D$.
**Corollary 8.** If $D \subset \mathbb{C}\mathbb{P}^3$ covers a compact complex 3-fold $X$, then either $D$ is 2-convex, or $D = \mathbb{C}\mathbb{P}^3$, or $X$ contains a (three-dimensional) spherical shell.
#### 5.3. Coverings by "large" domains from $\mathbb{C}\mathbb{P}^3$.
A domain $D \subset \mathbb{C}\mathbb{P}^n$ is said to be "large" if its complement $\Lambda := \mathbb{C}\mathbb{P}^n \setminus D$ is "small" in some sense. Different authors give different meanings to the notion of being "small", see [24, 25]; we shall therefore refrain from giving a general definition.
We start with the remark that if $\Lambda \neq \emptyset$ then its Hausdorff $n$-dimensional (resp. $(n-1)$-dimensional) measure is non-zero if $n$ is even (resp. odd). For example, in $\mathbb{C}\mathbb{P}^2$ and in $\mathbb{C}\mathbb{P}^3$ this condition is the same: $h_2(\Lambda) > 0$, see [25]. Both cases are easy to realize by examples. We have the following statement.
**Proposition 1.** Suppose that a domain $D \subset \mathbb{C}\mathbb{P}^3$ covers a compact complex threefold $X$.
Case 1. If the complement $\Lambda = \mathbb{C}\mathbb{P}^3 \setminus D$ is locally a finite union of two-dimensional submanifolds then $\Lambda$ is a union of finitely many lines.
Case 2. If the complement $\Lambda = \mathbb{C}\mathbb{P}^3 \setminus D$ is locally a finite union of three-dimensional submanifolds then $\Lambda$ is foliated by lines.
**Proof.** Consider a point $p$ on the limit set $\Lambda$ and find a point $q \in D$ and a sequence of automorphisms $\{\gamma_n\} \subset \Gamma$ such that $\gamma_n(q) \to p$. Here $\Gamma$ is a subgroup of Aut$(D)$ such that $D/\Gamma = X$. Due to the Hausdorff dimension condition on $\Lambda$, there exists a line $l \ni q$ such that $l \cap \Lambda = \emptyset$. Then $\gamma_n(l)$ will converge to a line in $\Lambda$ passing through $p$.
In [24] an example of a $\Lambda$ of dimension 3 is constructed.
### 5.4. Open questions.
**Question 9.** Can one prove an analog of Case 1 of Proposition 1, assuming only that $h_2(\Lambda)$ is finite?
In that case the components of $\Lambda$ could be lines and points.
**Question 10.** Suppose that the complement $\Lambda = \mathbb{C}\mathbb{P}^3 \setminus D$ is locally a union of four-dimensional submanifolds. Are all the components of $\Lambda$ necessarily either complex hypersurfaces or CR-manifolds of CR-dimension one? Or, can one have components which are not CR-submanifolds? Are those CR-submanifolds Levi-flat?
[3] contains an example on pp. 82-83 where one component of $\Lambda$ is a complex hyperplane, and another is a Levi-flat “perturbation” of a complex hyperplane.
### 6. Disk-convexity of complex spaces.
#### 6.1. The notion of disk-convexity.
All results presented in this paper, except those of Section 4, are valid for more general classes of complex manifolds and spaces than just compact ones. Compactness can be replaced by a much less restrictive condition, namely disk-convexity.
**Definition 4.** (a) A complex space $X$ is said to be *disk-convex* if for every compact $K \subset X$ there is another compact $\widehat{K}$ such that for any holomorphic map $h : \overline{\Delta} \to X$ with $h(\partial \Delta) \subset K$ one has $h(\Delta) \subset \widehat{K}$.
(b) An $X$ is said to be *disk-convex in dimension k* if for every compact $K \subset X$ there is another compact $\widehat{K}$ such that for any meromorphic map $h : \overline{\Delta}^k \to X$ with $h(\partial \Delta^k) \subset K$ one has $h(\Delta^k) \subset \widehat{K}$.
**Remark 14.**
1. In all formulations of Section 2, “compact Kähler” can be replaced with “disk-convex Kähler”. Neither the original proofs nor the background results use more than disk-convexity.
2. The same applies to the formulations of Subsection 3.2: “compact of class $\mathcal{G}_1$” can be replaced with “disk-convex of class $\mathcal{G}_1$”. This was actually done in [19], see Theorem 4 there.
3. Theorem 8 is valid for manifolds from $\mathcal{G}_2$ which are disk-convex in dimension 2.
6.2. \( k \)-convexity \( \implies \) disk-convexity in dimension \( k \). Now let us compare the notion of disk-convexity with other convexities used in complex analysis. We shall see that our notion is the weakest one (and this is its great advantage).
**Definition 5.** A \( C^2 \)-smooth real function \( \rho \) on \( X \) is said to be \( k \)-convex if for any local chart \( j: V \to \tilde{V} \subset \Delta^N \) there exists a real \( C^2 \)-function \( \tilde{\rho} \) on \( \Delta^N \) such that \( \tilde{\rho} \circ j = \rho \) and the Levi form of \( \tilde{\rho} \) has, at least, \( N - k + 1 \) positive eigenvalues at each point of \( \Delta^N \).
**Definition 6.** A complex space \( X \) is said to be \( k \)-convex (in the sense of Grauert) if there exists a \( C^2 \) exhaustion function \( \rho: X \to [0, \infty) \), which is \( k \)-convex outside some compact \( K \Subset X \).
We shall start with the following statement.
**Maximum Principle.** Let \( \rho \) be a \( k \)-convex function on the complex space \( X \) and let \( A \) be a pure \( k \)-dimensional analytic subset of \( X \). If \( \rho(p) = \sup_{a \in A} \rho(a) \) for some point \( p \in A \), then \( \rho \big|_A \equiv \text{const} \).
**Proof.** If there is a smooth point \( p \in A^{\text{reg}} \) where \( \rho \big|_A \) achieves its maximum, then the conclusion is clear. Indeed, since the Levi form of \( \rho_A := \rho \big|_A \) has at least one positive eigenvalue at \( p \), one can find an analytically imbedded disk \( \Delta \ni p \) such that the restriction \( \rho \big|_\Delta \) is subharmonic. This implies that \( \rho \big|_\Delta \equiv \text{const} \). Further, one can find holomorphic coordinates \( (z_1, \ldots, z_k) = (z_1; z') \) in a neighborhood of \( p \) such that the restriction \( \rho_D \) of \( \rho \) to every disk \( D = \{(z_1; z'): z' = \text{const}\} \) is subharmonic and such that our original disk \( \Delta \) is transversal to all such \( D \). We conclude that \( \rho \equiv \text{const} \) in a neighborhood of \( p \). The rest of this case is obvious.
Now consider the case when \( p \) belongs to the set \( A^{\text{sing}} \) of singular points of \( A \). We shall be done if we prove that in the neighborhood of \( p \) there is another point \( q \in A^{\text{reg}} \) such that \( \rho(q) = \rho(p) \). Consider a neighborhood \( V \) of \( p \) together with an imbedding \( j: V \to \tilde{V} \subset \Delta^N \) of \( V \) as a closed analytic subset \( \tilde{V} \) in the unit polydisk. Let also \( j(p) = 0 \). We set \( \tilde{A} = j(A \cap V) \) and observe that \( \tilde{A} \) is an analytic subset of pure dimension \( k \) in \( \Delta^N \). Now fix some irreducible component \( B \) of \( \tilde{A} \) passing through zero.
**Lemma 1.** Let \( \Pi \) be a linear subspace of dimension \( N - k + 1 \) of \( \mathbb{C}^N \). Then for a subspace which is a generic perturbation of \( \Pi \) (and is again denoted by \( \Pi \)) there exists an \( \varepsilon > 0 \) such that \( \Pi \cap B \cap \Delta_{\varepsilon}^{N} \) is a complex curve.
**Proof.** Blow up the origin in \( \mathbb{C}^N \). Let \( \mathbb{P}^{N-1} \) be the exceptional divisor and \( \pi: \mathbb{C}^N \setminus \{0\} \to \mathbb{P}^{N-1} \) the natural projection. Denote by \( \hat{B} \) and \( \hat{\Pi} \) the strict transforms of \( B \) and \( \Pi \). Recall that \( \pi^{-1}(\hat{B} \cap \mathbb{P}^{N-1}) \cup \{0\} \) is the tangent cone to \( B \) at zero. Since \( \hat{B} \cap \mathbb{P}^{N-1} \) is of dimension \( k - 1 \) and \( \hat{\Pi} \cap \mathbb{P}^{N-1} \) is a linear subspace of dimension \( N - k \), for a generic perturbation \( \Pi \) the intersection \( \hat{\Pi} \cap \mathbb{P}^{N-1} \cap \hat{B} \) is zero-dimensional.
The usual properties of the tangent cone imply that \( \Pi \cap B \) has a one-dimensional tangent cone at zero. And this implies that for small enough \( \varepsilon > 0 \) the intersection \( \Pi \cap B \cap \Delta_{\varepsilon}^{N} \) is a curve. The lemma is proved.
Let us finish the proof of the maximum principle. Since the Levi form of the extension \( \tilde{\rho} \) (with \( \tilde{\rho} \circ j = \rho \)) has at least \( N - k + 1 \) positive eigenvalues at zero, one can find a linear subspace \( \Pi \) in \( \mathbb{C}^N \) of dimension \( N - k + 1 \) lying inside the positive cone of \( \mathcal{L}_{\tilde{\rho}}(0) \). Instead of \( \tilde{A} \) we can take one of its irreducible components \( B \) passing through zero. After a small perturbation \( \Pi \) becomes
transversal to $B^{\text{sing}}$ while still lying in the positive cone. Thus, $\Pi \cap B^{\text{reg}} \cap \Delta_{\varepsilon}^{N} \neq \emptyset$ for all $\varepsilon > 0$ small enough, and the same is true for small perturbations of $\Pi$. Now our lemma provides us with a perturbation $\Pi$ such that:
1) $\Pi \cap B \cap \Delta_{\varepsilon}^{N} =: C$ is a curve, passing through zero for some $\varepsilon > 0$;
2) $\Pi$ lies in the positive cone of $\mathcal{L}_{\tilde{\rho}}(0)$;
3) $C \cap B^{\text{reg}} \neq \emptyset$.
But this means that $\tilde{\rho}|_{C}$ is subharmonic. Attaining its maximum at zero, it is constant. Thus, we have found smooth points where $\rho$ attains its maximum. \qed
**Theorem 9.** $k$-convexity $\implies$ disk-convexity in dimension $k$.
**Proof.** Let $\rho$ be an exhaustion function on $X$, which is $k$-convex outside a compact $P$. Put $a = \sup_{x \in K \cup P} \rho(x)$, and put $\hat{K} = \{ x \in X : \rho(x) \leq a \}$. Let $h : \overline{\Delta}^{k} \to X$ be some meromorphic map with $h(\partial \Delta^{k}) \subset K$. If $h(\overline{\Delta}^{k})$ were not contained in $\hat{K}$ then $h(\overline{\Delta}^{k}) \setminus \hat{K}$ would be a nonempty pure $k$-dimensional analytic subset of $X \setminus \hat{K}$.
This clearly contradicts the maximum principle. \qed
**Remark 15.** This theorem answers a question which was posed to the author by D. Barlet. It is well known that $k$-convexity is nearly the weakest notion among the convexities used in complex analysis.
### 6.3. Filling holes in complex surfaces
How far can a complex manifold or space be from being disk-convex? This seems to be a difficult question. Here we shall indicate an interesting particular case of failure of disk-convexity. For technical reasons, we shall restrict ourselves to complex dimension two. $B^{*} = B \setminus \{0\}$ will stand for the punctured ball in $\mathbb{C}^{2}$.
Let $X$ be a normal complex surface, i.e. a normal complex space of complex dimension two, which will be supposed to be reduced and countable at infinity. Following [1], we give the following definition.
**Definition 7.** We say that $X$ has a hole if there exists a meromorphic mapping $f : B^{*} \to X$ such that $\lim_{z \to 0} f(z) = \emptyset$, i.e. $f(z)$ leaves every compact subset of $X$ as $z \to 0$.
**Remark 16.** If $X$ has a hole then it is certainly not disk-convex.
But this particular cause of non-disk-convexity can be repaired.
**Theorem 10.** Let $X$ be a complex surface. Then there exists a complex surface $\hat{X}$ and a meromorphic injection $i : X \to \hat{X}$ such that:
i) $i(X)$ is open and dense in $\hat{X}$;
ii) $\hat{X}$ has no holes.
**Remark 17.** This result was announced in [20]; here we shall give a sketch of the proof, which crucially uses results of Grauert on complex equivalence relations, see [8, 9].
**Proof.** Let a hole $f : B^{*} \to X$ be given. If there is a curve $C \subset B^{*}$ contracted by $f$ to a point $p \in X$, we can blow up $X$ at $p$ and get a new surface and a new map which does not contract $C$ and which is still a hole. Since, after shrinking $B$, there can be only finitely many contracted curves, we can suppose without loss of generality that
\[ f \text{ is not contracting any curves in } B^*. \tag{*} \]
On $B^*$ we define the following equivalence relation: $x \sim y$ if $f(x) = f(y)$. This means that if one of these points, say $y$, is an indeterminacy point of $f$, then $f(x) \in f[y]$. If both $x$ and $y$ are points of indeterminacy then we require that $f[x] = f[y]$. This equivalence relation $R \subset B^* \times B^*$ is an analytic subset of $B^* \times B^*$. This follows from the fact that $f$ is a “hole”. Indeed, one cannot have an accumulation point of $R$ of the kind $(a, 0)$ or $(0, a)$ with $a \neq 0$. Moreover, $R$ is semiproper for the same reason. Therefore, $R$ extends to $B \times B$ (it is a meromorphic equivalence relation there in the sense of [9]). By the results of [8, 9], the quotient $Q = B/R$ is a normal complex surface.
Now we can attach $Q$ to $X$ by the quotient map $f|_R$ and get a new normal surface with a hole filled in.
Using Zorn’s lemma, one constructs a maximal extension $\hat{X}$ of $X$ such that $X$ is open and dense in $\hat{X}$ ($\hat{X}$ is not unique!). The “filling in” procedure above implies that this $\hat{X}$ has no holes. □
6.4. Open questions. One could try to improve the result of Theorem 10 as follows.
**Question 11.** Can every complex surface be imbedded as a subdomain into a disk-convex complex surface?
In some cases other notions of “disk-convexity” are needed.
i) A complex manifold $X$ is said to be disk-convex if for any compact $K \subset X$ there exists a compact $\hat{K} \subset X$ such that for every Riemann surface with boundary $(R, \partial R)$ and every holomorphic mapping $\varphi : R \to X$ continuous up to the boundary, the condition $\varphi(\partial R) \subset K$ implies $\varphi(R) \subset \hat{K}$.
ii) $X$ is called disk-convex if every sequence $\{\varphi_n : \overline{\Delta} \to X\}$ of analytic disks which converges on $\partial \Delta$ also converges on $\overline{\Delta}$.
iii) The same definition can be given with sequences of Riemann surfaces instead of disks.
**Question 12.** What is the relation between all these notions and that defined in Definition 4? Are they equivalent?
Of course, there are some obvious implications.
7. Open questions.
**Question 13.** Let the complex manifold $D$ be defined as a two-sheeted cover of $\Delta^2 \setminus \mathbb{R}^2$, i.e. $D$ is a "nonschlicht" domain over $\mathbb{C}^2$. Does there exist a compact complex manifold $X$ and a holomorphic (meromorphic) mapping $f : D \to X$ which separates points?
Note that the results of this paper imply that if such an $X$ exists then it cannot possess a plurinegative metric form. Thus, examples could occur starting from $\dim X \geq 3$.
In the following problems the space $X$ is assumed to be equipped with some Hermitian metric form $\omega$. On the subsets of $\mathbb{C}^n$ the metric is always $dd^c \|z\|^2$.
**Question 14.** Consider the class \( \mathcal{J}_R \) of holomorphic mappings \( f: \Delta^k \to X \), \( X \) being compact, such that:
(a) \( \|Df\| \geq R > 0 \). Here \( \|Df\| \) denotes the norm of the differential of \( f \);
(b) \( \text{Vol}(f(\Delta^k)) \leq C_1 \) for all \( f \in \mathcal{J}_R \).
Prove that there is a constant \( C_2 = C_2(X, R, C_1) \) such that \( \text{Vol}(\Gamma_f) \leq C_2 \) for all \( f \in \mathcal{J}_R \).
To estimate the volume of the graph of \( f \), one should estimate the integral
\[
\text{Vol}(\Gamma_f) = \int_{\Delta^k} (dd^c \|z\|^2 + f^*\omega)^k = \sum_{j=0}^{k} \binom{k}{j} \int_{\Delta^k} (dd^c \|z\|^2)^j \wedge (f^*\omega)^{k-j},
\]
where only the integral \( \int_{\Delta^k} (f^*\omega)^k = \text{Vol}(f(\Delta^k)) \) (the term with \( j = 0 \)) is bounded by the hypothesis of the question.
The following question is of the same nature.
**Question 15.** Let \( f: \Delta^k_* \to X \) be a meromorphic mapping from a punctured polydisk into a compact complex space \( X \). Suppose that \( \text{Vol}(f(\Delta^k_*)) < \infty \). Prove that \( f \) extends meromorphically to zero.
**Question 16.** Let \( f: \Delta^{k+1}_* \to X \) be a meromorphic map from a punctured \((k+1)\)-polydisk into a compact complex space \( X \) from the class \( \mathcal{G}_k \). Prove that \( \text{Vol}(f(A^k(r, 1))) = O(\log^{\frac{k-1}{n-1}}(\frac{1}{r})) \) provided \( k \geq 2 \). In particular, for equidimensional maps \( f: \Delta^n_* \to X^n \) one should always have \( \text{Vol}(f(A^n(r, 1))) = O(\log^{\frac{n-1}{n-1}}(\frac{1}{r})) \).
For \( n = 1 \) there are no bounds on the growth of a meromorphic function in the punctured disk.
**Question 17.** Fix some \( 0 < r < 1 \) and some constant \( R \). Fix also a compact complex space \( X \). Consider the following class \( \mathcal{F}_R \) of meromorphic mappings \( f: \Delta^n \to X \):
1. \( \text{Vol}_{2n}(\Gamma_f \cap (A^n(r, 1) \times X)) \leq R; \)
2. for every \( k \)-disk \( \Delta^k_z = \{z\} \times \Delta^k \) (where \( z \in \Delta^{n-k} \)) one has \( \text{Vol}_{2k}(\Gamma_{f_z} \cap (A^k_z(r, 1) \times X)) \leq R. \)
(a) Prove that for any constant \( l \) there is a constant \( A \) such that for any \( f \in \mathcal{F}_R \) satisfying \( \text{Vol}_{2k}(\Gamma_{f_z}) \leq l \) for all restrictions \( f_z \) of \( f \) to the \( k \)-disks \( \Delta^k_z \) one has \( \text{Vol}_{2n}(\Gamma_f) \leq A \).
(b) Vice versa: for any constant \( a \) there is a constant \( L \) such that for any \( f \in \mathcal{F}_R \) such that \( \text{Vol}_{2n}(\Gamma_f) \leq a \) one has \( \text{Vol}_{2k}(\Gamma_{f_z}) \leq L \) for all \( \Delta^k_z \).
The following question is a variation of Questions 7 and 8.
**Question 18.** Let \( \mathcal{X} = \{X_t\} \) be a deformation of compact complex surfaces. Suppose that the \( X_t \) for \( t \neq 0 \) contain a global spherical shell. Does \( X_0 \) also contain a global spherical shell?
**Question 19.** Let \( \mathcal{F} \) be some family of holomorphic (meromorphic) mappings from the unit polydisk \( \Delta^{n+1} \) to a compact Kähler manifold \( X \) (or \( X \in \mathcal{G}_1 \) more generally). Suppose that \( \mathcal{F} \) is equicontinuous on the Hartogs figure \( H^1_n(r) \). Is \( \mathcal{F} \) equicontinuous on \( \Delta^{n+1} \)?
See more about this question in [18].
REFERENCES
1. Andreotti A., Stoll W. *Extension of Holomorphic Maps*. Annals of Math., 1960, **72**, 312-349.
2. Bierstone E., Milman, P.D. *Local resolution of singularities*. Proc. Symp. Pure. Math., 1991, **52**, 42-64.
3. Bogomolov F., Katzarkov L. *Symplectic four-manifolds and projective surfaces*. Topology and its Applications, 1998, **88**, 79-109.
4. Carlson J., Harvey R. *A remark on the universal cover of a Moishezon space*. Duke Math. J., 1976, **43**, 497-500.
5. Chazal, F. *Un théorème de prolongement d'applications méromorphes*. Math. Ann., 2001, **320**, 285-297.
6. Chern S.-S. *Differential geometry: its past and its future*. Proc. International Congress of Mathematicians, Nice, 1970.
7. Gauduchon P. *La 1-forme de torsion d'une variété hermitienne compacte*. Math. Ann., 1984, **267**, 495-518.
8. Grauert H. *Set Theoretic Complex Equivalence Relations*. Math. Ann., 1983, **265**, 137-148.
9. Grauert H. *Meromorphe Äquivalenzrelationen*. Math. Ann., 1987, **278**, 175-183.
10. Griffiths P. *Two theorems on extension of holomorphic mappings*. Invent. math., 1971, **14**, 27-62.
11. Forstneric F., Slapar M. *Stein structures and holomorphic mappings*. Math. Z., 2007, **256**, 615-646.
12. Hironaka H. Introduction to the theory of infinitely near singular points. Mem. Mat. Inst. Jorge Juan, **28**, Consejo Superior de Investigaciones Cientificas, Madrid, 1974.
13. Hironaka H. *An example of a non-Kählerian complex-analytic deformation of Kählerian complex structures*. Ann. of Math. (2), 1962, **75**, 190-208.
14. Hirschowitz A. *Pseudoconvexité au-dessus d'espaces plus ou moins homogènes*. Invent. math., 1974, **26**, 303-322.
15. Ilyashenko J. *Foliations by analytic curves*. Russ. Math. Sbornik, 1972, **88**, 558-577.
16. Ivashkovich S. *The Hartogs phenomenon for holomorphically convex Kähler manifolds*. Math. USSR Izvestija, 1987, **29**, №1, 225-232.
17. Ivashkovich S. *The Hartogs type extension theorem for meromorphic maps into compact Kähler manifolds*. Invent. math., 1992, **109**, 47-54.
18. Ivashkovich S. *On convergency properties of meromorphic functions and mappings*. In B. Shabat Memorial vol., Moscow, FASIS, 1997, 145-163 (in Russian; English version in math.CV/9804007).
19. Ivashkovich S. *Extension properties of meromorphic mappings with values in non-Kähler complex manifolds*. Annals of Mathematics, 2004, **160**, 795-837.
20. Ivashkovich S. *Filling "holes" in complex surfaces*. Proc. Seminar Complex Anal., Novosibirsk, 1987, p.47.
21. Ivashkovich S. *An example concerning extension and separate analyticity properties of meromorphic mappings*. Amer. J. Math., 1999, **121**, 97-130.
22. Ivashkovich S., Shiffman B. *Compact singularities of meromorphic mappings between 3-dimensional manifolds*. Math. Res. Letters, 2000, **7**, 695-708.
23. Kato M. *Examples on an Extension Problem of Holomorphic Maps and a Holomorphic 1-Dimensional Foliation*. Tokyo J. Math., 1990, **13**, №1, 139-146.
24. Kato M. *A non-Kähler structure on an $S^2$-bundle over a ruled surface*. Unpublished preprint, 1992.
25. Lárusson F. *Compact quotients of large domains in complex projective space*. Annales de l'Institut Fourier, 1998, **48**, №1, 223-246.
26. Mok N., Wong B. *Characterization of bounded domains covering Zariski dense subsets of compact complex spaces*. Amer. J. Math., 1983, **105**, 1481-1487.
27. Nemirovski S., Shafikov R. *Uniformization of strictly pseudoconvex domains. I, II*. Izv. Mat., 2005, **69**, №6, 1189-1202 and 1203-1210.
28. Ohsawa T. *Remark on pseudoconvex domains with analytic complements in compact Kähler manifolds*. J. Math. Kyoto Univ., 2007, **47**, №1, 115-119.
29. Oka K. *Sur les fonctions analytiques de plusieurs variables, IX, Domains finis sans point critique intérieur*. Japan J. Math., 1953, **23**, 97-155.
30. Siegel C. *Analytic functions of several complex variables*. Institute for Advanced Study, Princeton, NJ, 1950.
31. Stein K. *Topics on holomorphic correspondences*. Rocky Mountain J. Math., 1972, **2**, 443-463.
32. Takeuchi A. *Domaines pseudoconvexes infinis et la métrique riemannienne dans un espace projectif*. J. Math. Soc. Japan, 1964, **16**, 159-181.
U.F.R. de Mathématiques
Université de Lille-1
59655 Villeneuve d’Ascq, France
firstname.lastname@example.org
IAPMM Acad. Sci. Ukraine
79601 Lviv, Naukova 3b, Ukraine
*Received 30.05.2008*
Reproduction parameters of the Iberian hare *Lepus granatensis* at the edge of its range
Authors: Fernandez, Alfonso, Soriguer, Ramón, Castien, Enrique, and Carro, Francisco
Source: Wildlife Biology, 14(4) : 434-443
Published By: Nordic Board for Wildlife Research
URL: https://doi.org/10.2981/0909-6396-14.4.434
Alfonso Fernandez, Ramón Soriguer, Enrique Castien & Francisco Carro
Fernandez, A., Soriguer, R., Castien, E. & Carro, F. 2008: Reproduction parameters of the Iberian hare *Lepus granatensis* at the edge of its range. - Wildl. Biol. 14: 434-443.
In order to provide a basis for the sustainable exploitation of the heavily hunted Iberian hare *Lepus granatensis*, we compared its reproductive parameters at the northern edge of its range, where it occurs at low densities, with those reported in other studies elsewhere within the species’ range. Monthly samples totalling 212 Iberian hares (104 males and 108 females) were collected in the province of Navarra during November 2001 - December 2002. Reproductive parameters varied only slightly from season to season. Sexually competent males, defined by the presence of epididymal spermatozoa, were present in all bimonthly periods. Reproductively active adult females were also present in all the bimonthly periods, although a slight seasonality was detected: the highest incidence of pregnancy and lactation (100%) occurred in March-April, while the lowest incidence of adult females that were neither pregnant nor lactating (15%) occurred in September-December. Mean annual litter size was 2.09 and the theoretical value of annual young production per female was estimated to be 16.1, which is much higher than estimates reported in studies of *L. granatensis* in Portugal and southern Spain. In Navarra, which is at the northern limit of the species’ range, densities are low due to intense hunting. However, the observed reproductive potential was surprisingly high and facilitates recruitment to the population which could, to a certain extent, make up for the high hunting pressure in the area.
**Key words:** *Lepus granatensis*, litter size, Navarra, productivity, reproduction
Alfonso Fernandez & Francisco Carro, Universidad Pública de Navarra, IARN, Campus Arrosadia s/n. E-31006 Pamplona, Spain - e-mail addresses: email@example.com (Alfonso Fernandez); firstname.lastname@example.org (Francisco Carro)
Ramón Soriguer, Estación Biológica de Doñana (CSIC). Av. María Luisa s/n., Sevilla, Spain - e-mail: email@example.com
Enrique Castien, Departamento de Medio Ambiente, Gobierno de Navarra. C/ Yanguas y Miranda. E-31002 Pamplona, Spain - e-mail: firstname.lastname@example.org
Corresponding author: Alfonso Fernandez
Received 16 October 2007, accepted 21 March 2008
Associate Editor: Glenn Iason
The Iberian hare *Lepus granatensis* is endemic to the Iberian Peninsula, where it mainly occupies agricultural areas and open fields (Duarte 2000). The species is widely distributed and intensively hunted throughout its range. More than one million individuals are taken annually and hunting pressure is increasing every year (Vargas 2002). Hunting thus needs to be sustainable and any application of the concepts of sustainable harvesting must take into account the demographic characteristics of the species’ populations. This would be possible if local data on demography (e.g. age structure and reproductive parameters) were available, and would allow implementation of strategies for managing populations based on knowledge of the species’ reproductive characteristics (Marboutin et al. 2003). Yet, through most of the Iberian hare’s distribution area (including our study area), population management is often based exclusively on abundance data (Lucio 1996) and, although abundance is directly related to reproduction, it is above all related to the breeding success of females and juvenile survival rates (Marboutin et al. 2003). In contrast to the situation regarding other *Lepus* species (e.g. brown hare *L. europaeus* and mountain hare *L. timidus*), information about the reproductive biology of the Iberian hare is still scarce.
Recent studies have investigated the reproductive parameters and cycles of the Iberian hare on the Iberian Peninsula (Alves et al. 2002, Duarte et al. 2002, Alves & Rocha 2003, Farfán et al. 2004), but all these studies have been carried out in the south of the species’ distribution range, where environmental conditions are optimal for the species and populations are higher than in the north (Duarte et al. 2002). Studies of other *Lepus* species have shown that the proportion of breeding females and their fecundity varies over the species’ geographical range in relation to differences in environmental characteristics and densities (Hansen 1992, Hackländer et al. 2001, Kauhala et al. 2005). If reproduction in the Iberian hare were density dependent, we would expect high reproductive rates to occur in the low-density study population at the edge of the species’ range. A possible hypothesis is that this low density occurs because conditions are not suitable for either a high reproductive output or juvenile survival. We therefore investigated the basic reproductive parameters of Iberian hares in Navarra by a) analysing their intra-annual variation, b) estimating the annual potential production of young and c) comparing our results with those already published for the species.
**Material and methods**
**Study area**
Our study was carried out in southern Navarra (northern Iberian Peninsula), between $42^\circ 10'$, $42^\circ 40'$N and $1^\circ 10'$, $2^\circ 20'$W. In this region, the distribution of the Iberian hare covers a total area of ca 4,000 km$^2$. Samples for our study were collected in 15 hunting areas along the Ebro river, which forms a wide alluvial plain characterised by smooth relief and mean altitudes of 400-500 m a.s.l. The landscape in the sampling areas consisted mainly of wheat and barley crops, mixed with vineyards, olive groves and fruit trees. The climate of the area can be considered to be Mediterranean, with warm, dry summers and cold to mild rainy winters, and a total annual rainfall of ca 600 mm (Pejenaute 1992).
**Samples and data collection**
We collected samples monthly during November 2001 - December 2002 as part of a monitoring programme carried out by the regional Government. Extra samples were collected during the hunting season (November-December) and some road-killed hares were also examined. We analysed a total of 212 reproductive tracts ($N=104$ males and $N=108$ females; Table 1) by examining the gonads (male testes and female uterus and ovaries) and reproductive status of individuals. As a result of this
| Period | Young males | Adult males | Young females | Adult females | Females, age indeterminate | Total males | Total females | Total |
|-------------------|-------------|-------------|---------------|---------------|----------------------------|-------------|----------------|-------|
| January-February | 2 | 10 | 2 | 10 | - | 12 | 12 | 24 |
| March-April | 5 | 10 | 3 | 4 | - | 15 | 7 | 22 |
| May-June | 6 | 2 | 3 | 8 | - | 8 | 11 | 19 |
| July-August | 3 | 6 | 5 | 5 | - | 9 | 10 | 19 |
| September-October | 3 | 5 | 5 | 4 | 1 | 8 | 10 | 18 |
| November-December | 21 | 31 | 13 | 38 | 7 | 52 | 58 | 110 |
| **Total** | 40 | 64 | 31 | 69 | 8 | 104 | 108 | 212 |
© WILDLIFE BIOLOGY · 14:4 (2008)
heterogeneous sampling, variable sample sizes for each parameter were obtained. Hares were weighed to the nearest 25 g using a dynamometer and analysed, if possible, when fresh. Otherwise, reproductive organs were frozen (-20°C) and analysed in the laboratory within 48 hours of removal from the dead animals. Individuals were classified as adult or young on the basis of the presence or absence of epiphyseal distal cartilage in the radius and ulna ('Stroh's sign'; Stroh 1931): forelegs were first palpated and then cleaned and examined with the naked eye. This has been shown to be a reliable and practical method for age classification in *Lepus* species (e.g. mountain hare), in particular when combined with eye-lens weight (Kauhala & Soveri 2001). A growth curve for the Iberian hare which determines exactly the discriminating adult-young eye-lens weight value is still lacking, although we have defined approximate values in previous studies (A. Fernandez & R. Soriguer, pers. obs.). Additional biometric parameters were measured and eye lenses were removed and handled as described in the literature for *Lepus* species (Suchentrunk et al. 1991). In the very few cases where age could not be determined from the ossification stage of the radius and ulna, the dry weight of the eye lenses was used as described previously (A. Fernandez & R. Soriguer, pers. obs.).
**Reproductive activity**
**Males**
After noting their position (intra or extra-abdominal), testes were excised, measured lengthways and weighed to the nearest 1 mg (including epididymides). To verify the presence of spermatozoa, a sample from the tail of the epididymis was taken and analysed microscopically. The presence of sperm in the testes was taken to indicate reproductive maturity. Several studies of the brown hare have already demonstrated that the presence of spermatozoa in the epididymides is a reliable indicator of reproductive activity (Blottner et al. 2000, Brodowski et al. 2001), and this criterion has also been applied to the Iberian hare (Alves et al. 2002). On the basis of this criterion, two categories of males were considered: 'active' with spermatozoa and 'inactive' without spermatozoa.
**Females**
The reproductive activity of females was estimated on the basis of mammary gland activity and the presence/absence of embryos in the uterus. Thus, five reproductive statuses were distinguished: 1) pregnant with embryos implanted, 2) lactating with developed mammary glands present and giving milk if squeezed, 3) pregnant and lactating, 4) inactive adult females with no sign of reproductive activity, i.e. neither pregnant nor lactating, and 5) immature young females with no development of their reproductive tracts. First, an external examination of the activity of the mammary glands was performed. Then, reproductive tracts (ovaries and uterus) were excised and analysed. If pregnant, the uterus was opened and the number of embryos and their implantation sites were noted. The length and weight of embryos were also measured. Ovaries were weighed to the nearest mg and preserved in 10% formalin for at least 30 days (Flux 1967). This parameter varies during the yearly cycle when *corpora lutea* develop as a consequence of embryo implantation in the corresponding uterus horn. In this case, the ovaries were cut by hand with a scalpel into longitudinal sections approximately 2 mm thick and the *corpora lutea*, if present, were counted using a binocular microscope.
Prenatal mortality in lagomorphs can be caused by both preimplantation mortality of ova and embryo resorption (Allen et al. 1947). Thus, the number of *corpora lutea* is used as an indicator of the number of ova which ovulated, and preimplantation mortality was estimated as the difference between this number and the number of implanted embryos in the uterus (Broekhuizen & Maaskamp 1981, Hansen 1992, Bonino & Montenegro 1997, Alves et al. 2002). However, due to the difficulty in detecting recent implantations and fertilised zygotes, prenatal mortality analysis was restricted to females with implanted embryos (Alves et al. 2002). Finally, some females were considered to have just given birth, evidenced by signs of recent pregnancy, i.e. *corpora lutea* were still present in the ovaries, the uterus was highly vascularised and enlarged, and there were recent placental scars in the uterus horns. In these few cases, placental scars were counted and taken to be implanted embryos. In order to compare our results with those published by other authors (Alves et al. 2002, Farfán et al. 2004), the annual productivity of females (number of young produced) was estimated using the following formula (Pepin 1989):
\[ \Sigma (\text{bimonthly \% pregnant females} \times \text{bimonthly mean litter size}) \times 1.46. \]
Over a period of two months, a healthy adult female may have 1.46 litters (Pepin 1989), assuming that the average birth interval of the Iberian hare is similar to the length of the gestation period (41-42 days; Alves et al. 2002). We calculated bimonthly mean litter size by dividing the number of embryos by the number of pregnant females in the relevant period.
**Data analysis**
Values for reproductive parameters were pooled bimonthly in order to increase the sample size and to be able to compare the reproductive activity of the Iberian hare in Navarra with that of other hare species and with Iberian hares from other parts of its range (i.e. Alves et al. 2002). The normality of the data was tested using Shapiro-Wilk’s tests; log x transformations were employed when data did not fit normality. Paired Z or Student t-tests were used to compare the mean values of reproductive and biometric parameters i.e. weight of right and left testes and ovaries, size and weight of testes in males with and without sperm, intra and extra-abdominal weight of testes, weight of young and adult testes, body weight of active and inactive young females, and the number of *corpora lutea* (mean litter size of pregnant females). Pearson correlation coefficients were calculated in an attempt to relate weight and testes size, and to analyse the relationship between the number of ova ovulated and prenatal mortality. A one-way analysis of variance (ANOVA) was employed to test for differences between bimonthly periods in testis and ovary weights and in mean litter size. When F was significant, Newman-Keuls tests were performed to show which groups differed most from each other. We used STATISTICA version 6 to perform all the statistical analysis.
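The two-sample comparisons described above were run in STATISTICA; as an illustration of what such a comparison involves, the following is a minimal pooled-variance Student t-test in plain Python. The data values are made-up placeholders, not measurements from this study.

```python
from math import sqrt
from statistics import mean, variance

def student_t(a, b):
    """Two-sample Student t-test with pooled variance.
    Returns the t statistic and the degrees of freedom."""
    na, nb = len(a), len(b)
    # Pooled unbiased variance of the two samples
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical testis weights (g): males with vs without epididymal sperm
active = [8.0, 9.0, 10.0]
inactive = [1.0, 2.0, 3.0]
t, df = student_t(active, inactive)
print(round(t, 2), df)  # prints: 8.57 4
```

The resulting t is compared against the t distribution with the returned degrees of freedom; the paper additionally applied Z-tests, ANOVA and Newman-Keuls post-hoc tests where appropriate.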
**Results**
**Males**
**Testis weight**
No differences were detected between right and left testes for either of the parameters measured: size (t = 1.12, df=196, ns) and weight (t = 0.34, df=196, ns). Thus, mean values were always used in the analyses. Mean testis size and weight in individuals without spermatozoids were 25.2 mm ± 8.1 and 1.97 g ± 1.79, respectively, and for active individuals the values were 42.9 mm ± 4.5 and 8.63 g ± 2.1, respectively; the differences were statistically significant for both parameters (t = 17.5, df=94, P < 0.01 and t = 20.66, df=94, P < 0.01). Furthermore, we observed a significant correlation between both parameters (r = 0.89, N = 101, P < 0.01), which allowed us to use just one of them (testis weight) in the analyses.
**Reproductive activity**
A high percentage of males with spermatozoids in their epididymides were found throughout the year. Likewise, a high mean testis weight in adult males was also observed throughout the year (Fig. 1). Nevertheless, a significant variation in testis weight during the bimonthly periods was also observed (ANOVA $F_{5df} = 3.59$, P < 0.05), with a period of maximum values occurring in winter and spring (January-April; see Fig. 1). Males with extra-abdominal testes were present in all bimonthly periods, with percentages ranging from 58% in November-December to 100% in January-February and May-June. Of the adult males, 100% had spermatozoids in their epididymides in all periods except November-December (90%; see Fig. 1).
Males with extra-abdominal testes weighed on average 2,006 g ± 260 and all except one (which had a body weight of 1,615 g) had sperm in their
Table 2. Sperm presence in epididymides in relation to testicular position in adult Iberian hare males (N = 77). ** indicates P < 0.01.
| Testicular position | N | With sperm (%) | Without sperm (%) |
|---------------------|-----|----------------|-------------------|
| Intra-abdominal | 36 | 14 (39%) | 22 (61%) |
| Testis weight (g) | | 7.70 ± 2.43 | 1.42 ± 1.13 |
| Extra-abdominal | 41 | 40 (98%) | 1 (2%) |
| Testis weight (g) | | 9.38 ± 1.84 ** | 3.89 |
epididymides. However, of the males with intra-abdominal testes, 39% were also active (Table 2). The difference in mean weight between intra-abdominal (N=14) and extra-abdominal (N=39) active testes was significant (t-test = 3.41, P = 0.001; see Table 2).
The presence of spermatozoids was confirmed in 70 pairs of testes, all weighing > 4 g and 86% of which were from individuals classified as adults (mean body weight = 1,977 g ± 232) and 14% from young hares (1,680 g ± 170). The testes of young males were significantly lighter (6.53 g ± 1.33, N = 11) than those of adults (9.14 g ± 1.80, N = 59; t = 4.56, P < 0.001). The heaviest pair of testes recorded (13.6 g) was that of an adult male with a body weight of 2,450 g; the lightest testes (2.11 g) were found in an adult male with a body weight of 1,950 g.
**Females**
**Ovary weight**
We observed no differences in the weight of right and left ovaries, neither when no corpora lutea were present (N = 37, Z = 0.14, ns) nor when they were present in both ovaries (N = 27, t = 0.26, ns). However, significant differences were observed when corpora lutea were present in only one ovary (N = 21, t = 3.59, P < 0.01). Thus, in the first two cases, the mean value of both ovaries was taken and used in the analyses, whereas in the third case, only the weight of the ovary with corpora lutea was used.
The mean weight of ovaries in adult and young females (N = 50) was 1.32 g ± 0.7 and 0.30 g ± 0.33 (N = 28), respectively. These results are similar to those obtained by comparing the mean weights of ovaries with (N = 38) and without (N = 48) corpora lutea (1.27 g ± 0.54 and 0.30 g ± 0.27, respectively). The mean ovary weight of adult females varied significantly between bimonthly periods (F_{5df} = 4.92, P < 0.001), with the highest values observed in May-June and the lowest in November-December (Fig. 2). Nevertheless, no statistically significant differences were detected between any of the bimonthly periods (Newman–Keuls test: P > 0.05). For young females, the variation in mean ovary weight was not significant (F_{5df} = 2.12, P = 0.10; see Fig. 2).
**Reproductive activity**
As expected, reproductive activity was significantly higher in adult than in young females (85 and 16%, respectively; P < 0.001). Adult females with signs of reproductive activity were found in different proportions in all the bimonthly samples (Fig. 3). Of the total active females, 69% were pregnant (29% were also lactating at the same time) and 16% were
lactating. The lowest percentage corresponded to inactive adult females (15%), which were detected only in the last two bimonthly periods of the year, i.e. September-October and November-December (20 and 24%, respectively; see Fig. 3). We also found some young females showing signs of reproductive activity. In the period July-August, three females classified as young, due to the presence of unossified distal epiphysis in the radius-ulna, were active (two were pregnant and one lactating). In May-June, one young female was found to be pregnant and another was found in November-December (see Fig. 3). The mean body weight of these young active females was $1,910 \text{ g} \pm 285$ ($N=5$), which was significantly higher ($t=2.20$, $P=0.03$) than the mean body weight of inactive young females ($1,535 \text{ g} \pm 305$, $N=23$).
**Reproductive efficiency**
**Litter size**
Litter size ranged between one and a maximum of four embryos. A total of 107 embryos were counted and the estimated overall mean litter size was 2.09 ($N=51$ pregnant females). The most frequent litter sizes were two (51%) or one (24%) embryo (Fig. 4). Mean bimonthly litter sizes varied significantly throughout the year ($F_{5df}=2.95$, $P<0.05$; Fig. 5). The highest litter-size value, observed in March-April (3.00±0.82), was significantly higher than those observed in November-December (1.83±0.79) and January-February (1.89±0.78; Newman-Keuls test: $P<0.05$).
**Prenatal mortality**
Almost 33% of females with *corpora lutea* ($N=49$) had suffered reproductive losses, and overall prenatal mortality was 18% ($N=124$ *corpora lutea* observed as compared to $N=102$ embryos implanted). The majority of this mortality was due to non-implantation (86%), and resorption (partially resorbed embryos) accounted for 14% ($N=3$ cases). Preimplantation mortality was evenly distributed throughout the bimonthly periods ($\chi^2_{3df}=3.42$, $P=0.6$; ns) and resorption was observed only in January-February and March-April (see Fig. 5). The maximum difference between ovulation and implantation levels was observed in May-June and July-August (23.8 and 18.3%, respectively) and the maximum percentages of mortality (preimplantation plus resorption) were observed in January-February and September-October (see Fig. 5). Thus, the mean number of *corpora lutea* per female (2.52±1.13) differed significantly ($Z=2.59$, $N=51$, $P<0.01$) from the mean litter size (2.19±0.85). In addition, a significant tendency towards a loss in reproductive efficiency as the ovulation level rose was observed in monthly samples ($r_{10df}=0.71$, $P<0.05$). Superfoetation (embryos in different developmental stages) was not observed and seems to be very infrequent in this species (Vargas et al. 1999, Alves et al. 2002). One case of possible blastocyst migration was detected (Broekhuizen & Maaskamp 1981).
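The overall prenatal mortality figure follows directly from the reported counts of *corpora lutea* and implanted embryos; a minimal sketch of the arithmetic:

```python
# Counts reported in the Results: 124 corpora lutea vs 102 implanted embryos
corpora_lutea = 124
implanted = 102

lost = corpora_lutea - implanted      # ova that left no implanted embryo
mortality = lost / corpora_lutea      # overall prenatal mortality rate
print(lost, round(100 * mortality))   # prints: 22 18
```

That is, 22 of 124 ovulated ova (about 18%) were lost before or shortly after implantation, consistent with the percentage stated above.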
**Production of young**
The theoretical value of the total annual production of young per female was estimated at 16.1 (Table 3). The maximum productivity of leverets per female was observed in March-April, while minimum values were observed in November-December (see Table 3).
Table 3. Bimonthly contribution and overall production of young in adult Iberian hares in Navarra.
| Period | Pregnant females (N) | Number of embryos | Mean litter size | Proportion pregnant | Mean production of young |
|-------------------|----------------------|-------------------|------------------|---------------------|--------------------------|
| January-February | 9 | 17 | 1.89 | 0.88 | 2.42 |
| March-April | 4 | 12 | 3.00 | 1.00 | 4.38 |
| May-June | 8 | 21 | 2.63 | 0.88 | 3.37 |
| July-August | 6 | 11 | 1.83 | 0.71 | 1.89 |
| September-October| 3 | 7 | 2.33 | 0.75 | 2.55 |
| November-December| 21 | 39 | 1.86 | 0.58 | 1.57 |
| Total | 51 | 107 | | | 16.10 |
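The annual total can be reproduced from the per-period rows of Table 3 with the formula of Pepin (1989) given in the Methods; the sketch below uses the table's pregnant-female counts, embryo counts and proportions pregnant. Because the published per-period contributions were computed from rounded inputs, the result differs slightly from the printed 16.1.

```python
# Pepin (1989): annual young per female =
#   sum over bimonthly periods of (proportion pregnant x mean litter size) x 1.46
# Rows from Table 3: (pregnant females, embryos, proportion pregnant)
table3 = {
    "Jan-Feb": (9, 17, 0.88),
    "Mar-Apr": (4, 12, 1.00),
    "May-Jun": (8, 21, 0.88),
    "Jul-Aug": (6, 11, 0.71),
    "Sep-Oct": (3, 7, 0.75),
    "Nov-Dec": (21, 39, 0.58),
}

total = 0.0
for period, (n_pregnant, n_embryos, p_pregnant) in table3.items():
    litter = n_embryos / n_pregnant      # bimonthly mean litter size
    total += litter * p_pregnant * 1.46  # 1.46 litters possible per two months

print(round(total, 1))  # prints: 16.2 (published value: 16.1, rounding differences)
```

The March-April term (3.00 x 1.00 x 1.46 = 4.38) is the largest bimonthly contribution, matching the table's maximum productivity in that period.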
**Discussion**
Based on our data, we conclude that male and female Iberian hares in Navarra are reproductively active throughout the year. Nevertheless, the yearly breeding activity curve shows a decrease in late autumn and winter, which agrees well with the yearly cycle described for this species in the literature (Alves et al. 2002, Duarte et al. 2002, Farfán et al. 2004). Unlike other *Lepus* species (e.g. *L. europaeus* and *L. timidus*; Raczynski 1964, Flux 1970, Frylestam 1980, Broekhuizen & Maaskamp 1981, Hansen 1992, Bonino & Montenegro 1997), no period of testis involution was observed in males, and pregnant or lactating females were observed in all the bimonthly periods.
**Males**
Testicular weight and position are normally considered as valid criteria for sexual activity in lagomorphs. Testis weight has been demonstrated to be directly related to sperm production and extra-gonadal sperm reserves in lagomorphs such as rabbits *Oryctolagus cuniculus* (Gonçalves et al. 2002), brown hares (Blottner et al. 2000) and Iberian hares (Alves et al. 2002). The extra-abdominal position of testis has been taken as a sign of sperm maturity and sexual activity (Simeunovic et al. 2000, Alves et al. 2002). Nevertheless, males with intra-abdominal testes are sometimes spermatogenetically active (Simeunovic et al. 2000), and Alves et al. (2002) have demonstrated that there is no difference in testis and epididymidis weight or sperm and testosterone production between intra and extra-abdominal testes in Portuguese Iberian hares. Our results confirm the presence of spermatozoa in a large number of intra-abdominal testes, whose weight differs significantly from those of extra-abdominal active testes. Nevertheless, weights were in both cases > 7 g which is well over the five g that has been given as a threshold for sexual activity for larger species of this genus (Hansen 1992). This leads us to believe that testis position is not a decisive criterion for sexual activity in the Iberian hare. Indeed, although the biological significance of this phenomenon has yet to be determined (Alves et al. 2002), a lower proportion of males with external testes was observed between September and December, as was also reported by Alves et al. (2002) and Farfán et al. (2004), suggesting that this is a period of lower sexual activity.
Maximum testis weight was observed in March-April, coinciding with the period of maximum sexual activity in females. Possibly, the bimonthly time scale used as a result of the small sample sizes is too coarse to reveal differences/coincidences in both sexes. Alves et al. (2002) used similar sample sizes and obtained similar results whereas Farfán et al. (2004), using larger sample sizes and a monthly time scale, observed that the pattern of reproductive activity (measured indirectly as testis and ovary weights) ran in parallel. These results allow us to conclude that male reproduction patterns are similar throughout the species’ distribution area despite regional differences in climate, vegetation and habitat structure. Alves & Rocha (2003) reported that environmental factors have little influence on the reproductive activity of this species. They showed that, despite seasonal environmental variations in southern Portugal, the reproductive characteristics of Iberian hares did not vary significantly between seasons. Consequently, we should not expect to find significant differences in hare reproductive behaviour between regions with environmental differences.
High reproductive activity was observed in adult males throughout the yearly cycle, and in all bimonthly periods except November-December the percentage of adult males with sperm was 100%. Small sample sizes in some bimonthly periods may play a part in explaining this finding, although we can conclude that, as in most mammals, male Iberian hares are unlikely to be a limiting factor in population reproductive performance.
**Females**
The percentage of reproductively active (pregnant and/or lactating) adult females found shows that the breeding season extends throughout the year; only in the period September-December did we find some inactive adult females. However, significant differences in mean ovary weights were detected, a finding
that confirms that a certain degree of seasonality in female activity exists, with maximum values occurring in spring when implantation levels also reached their highest values. This pattern was also observed by Alves et al. (2002) and Farfán et al. (2004). Overall, almost 30% of reproductively active females were both pregnant and lactating and, between March and June, >80% of the females were pregnant and lactating; an ability that enables them to reduce the period between litters and provides the species with great reproductive potential. Nevertheless, it is unlikely that a female could reproduce continuously throughout the year because many other factors such as size/age (Iason 1990), weather/food availability (Kauhala et al. 2005) and/or physical condition (Ims 1987) also affect reproduction. For example, in mountain hare (Hewson 1970) and brown hare (Frylestam 1980), only a very small proportion of females were able to realise their maximum potential number of litters. Some young females were also reproductively active; these females were heavier than young inactive females and had a mean body weight near to the 2,000 g that is considered to be the lowest possible weight of an adult (A. Fernandez & R. Soriguer, pers. obs.). This implies that, to some extent, reproductive activity may start before adult age is reached, although the lack of more reliable age data for the species prevents us from saying exactly when hares are able to start breeding. We must also be aware that although larger individuals are more likely to be reproductively active (Iason 1990), weight also increases with pregnancy, and for this reason we employed the ossification stage of the foreleg distal cartilage to determine ages. Kauhala & Soveri (2001) also employed this criterion for age classification and obtained a higher proportion of young hares when bones were examined using radiography rather than the naked eye.
However, these authors consider this result to be an overestimation of the proportion of young individuals (in some adults ossification lines can still be seen) and believe that the examination of bones using the naked eye is the best method for age determination in mountain hare.
**Litter size and productivity**
Reported litter size in Navarra (2.09 leverets/pregnant female) is higher than that observed in the south of the Iberian Peninsula (Alves et al. 2002, Duarte et al. 2002, Farfán et al. 2004). Our results differ significantly from those reported in Portugal by Alves et al. (2002: 1.56), who analysed the same number of pregnant females. Farfán et al. (2004) used mean annual litter size, calculated as the mean of all sampled periods (2.08), instead of reported litter size. This value was also higher in our study (2.25) and lower in Alves et al. (2002: 1.67).
Both the percentages of pregnant females and mean litter sizes were higher in all the bimonthly periods in our study than in other studies. In all the bimonthly periods except November-December, >75% of the females were pregnant in Navarra. In Portugal (Alves et al. 2002), percentages >75% were only observed in two bimonthly periods, and in southern Spain only in February (Farfán et al. 2004). In Navarra, mean litter sizes were >1.5 in all the periods. On the other hand, Alves et al. (2002) and Farfán et al. (2004) found that this value was only reached in six and five months of the year, respectively. Alves et al. (2002) did not find any differences in mean litter sizes between bimonthly periods. We, however, observed significantly higher litter sizes in March-April and May-June. These results translate into important differences in the estimated productivity per female: 16.1 in our case vs 9.6 in Portugal and 7.21 in southern Spain. For a comparison with productivity values in other *Lepus* species, see the review in Alves et al. (2002). In Navarra, maximum productivity was obtained in March-April and May-June. Farfán et al. (2004) obtained similar results, whilst Alves et al. (2002) found that the highest productivity occurred in January-February and then again in July-August, the two periods in which the percentage of pregnant females was at its highest. It is possible that our sampling methodology contained a bias towards larger individuals. Many of the hares were captured in non-hunting reserves within hunting areas, where individuals live longer and the capture of young hares is avoided. Although some young hares were analysed, shot mainly during the hunting season, only adult individuals were included when we calculated productivity. This bias could be one of the explanations for the higher values in our study when compared with those in other studies. 
Nevertheless, reference weights and size values are lacking from these other studies and only the weights of the smallest reproductively active individuals (supposedly the youngest) are given: 2,000 g for females and 1,600 g for males in Farfán et al. (2004), and 2,150 g for females and 1,811 g for males in Alves et al. (2002). We obtained minimum weights of 2,005 g for females and 1,765 g for males.
The largest litter we found had four embryos, the same as was found by Alves et al. (2002) in Portugal. Farfán et al. (2004) found a pregnant female with seven embryos, although this was taken to be exceptional; they also found two females with five embryos. Nevertheless, in that study >50% of litters had only one embryo. Both these studies reported that the most frequently observed gestations were of one and then two leverets. In contrast, we observed two as the most frequent value (51%), followed by one (23.5%). The reproductive strategy of the Iberian hare and the brown hare, which also reproduces continuously (Alves & Rocha 2003), is concordant with the hypothesis that lagomorph litters are smaller where breeding seasons are longer, and that larger litters occur where breeding seasons are shorter (Swihart 1984). Whether a mammal reproduces seasonally or continuously depends mostly on its environment (Bronson 1989). It has also been shown that in several mammals there is a relationship between productivity and latitude, which implies that latitudinal differences in litter size are caused by differences in the length of the breeding season, with bigger litters compensating for shorter breeding seasons (Sadleir 1969). Along these lines, survival rates and productivity in the mountain hare have been demonstrated to differ between areas in Finland (Kauhala et al. 2005), and unfavourable climatic conditions associated with certain habitats have been related to differences in demography and body condition in the brown hare (Jennings et al. 2006).
In terms of prenatal mortality, our 18% is very similar to the 21% obtained by Alves et al. (2002). Nevertheless, these authors suggest that calculating prenatal mortality as the difference between the number of *corpora lutea* and the number of embryos may lead to errors. In both studies the numbers of *corpora lutea* and embryos differed significantly.
In conclusion, the reproductive output of Iberian hares is very high in Navarra, a fact that is somewhat surprising for a species at the edge of its range, where densities are low. We obtained density values of 5.8 hares/km$^2$ (A. Fernandez, R. Soriguer, F. Carro & E. Castien, pers. obs.) in contrast to densities of > 10 hares/km$^2$ reported from more suitable areas in southern Spain (Calzada & Martinez 1994, Rodriguez et al. 1997). This contradiction may be due to factors such as hunting (practised intensively throughout the study area, as an analysis of hares shot in the hunting season has demonstrated; A. Fernandez, F. Carro & R. Soriguer, pers. obs.), which causes high postnatal mortality. Other aspects of reproduction in Iberian hares, such as the relationship between age and reproduction, still need to be studied in more depth.
**Acknowledgements** - we would like to thank the referees, including Dr. Phillip Stott, who made many very valuable comments on the manuscript. We are very grateful to all the game rangers and hunters who helped with collection of samples. This study was funded by the Government of Navarra and was made possible by an agreement between the CSIC (Consejo Superior de Investigaciones Científicas) and UPNA (Universidad Pública de Navarra).
**References**
Alves, P.C., Gonçalves, H., Santos, M. & Rocha, A. 2002: Reproductive biology of the Iberian hare (*Lepus granatensis*) in Portugal. - Mammalian Biology 67: 358-371.
Alves, P.C. & Rocha, A. 2003: Environmental factors have little influence on the reproductive activity of the Iberian hare (*Lepus granatensis*). - Wildlife Research 30: 639-647.
Allen, P., Brambell, F. & Mills, I. 1947: Studies on the sterility and prenatal mortality in wild rabbits. I. The reliability of estimates of prenatal mortality based on counts of corpora lutea, implantation sites and embryos. - Journal of Experimental Biology 23(3/4): 312-331.
Blottner, S., Faber, D. & Roelants, H. 2000: Seasonal variation of testicular activity in European brown hare (*Lepus europaeus*). - Acta Theriologica 45(3): 385-394.
Bonino, N. & Montenegro, A. 1997: Reproduction of the European hare in Patagonia, Argentina. - Acta Theriologica 42(1): 47-54.
Brodowski, A., Jegenow, K., Pielowski, Z. & Blottner, S. 2001: Seasonal changes in histological-morphometric parameters of testes in European brown hare. - Zeitschrift für Jagdwissenschaft 47(1): 26-33.
Broekhuizen, S. & Maaskamp, F. 1981: Annual production of young in European hares in the Netherlands. - Journal of Zoology 193: 499-516.
Bronson, F. 1989: Mammalian Reproductive Biology. - The University of Chicago Press, Chicago, 236 pp.
Calzada, E. & Martínez, F. 1994: Requerimientos y selección de hábitat de la liebre mediterránea (*Lepus granatensis*) en un paisaje agrícola mesetario. - Ecología 8: 381-394. (In Spanish).
Duarte, J. 2000: Liebre ibérica (*Lepus granatensis*, Rosenhauer 1856). - Galemys 12(1): 3-14. (In Spanish).
Duarte, J., Vargas, J.M. & Farfán, M.A. 2002: Biología de la liebre ibérica (*Lepus granatensis*). Bases técnicas para la gestión cinegética. - In: Lucio, A. & Sáenz de Buruaga, M. (Eds.); Aportaciones a la gestión sostenible de la caza. FEDENCA-EEC, Madrid, Spain, pp. 29-59. (In Spanish).
Farfán, M.A., Vargas, J.M., Real, R., Palomo, L. & Duarte, J. 2004: Population parameters and reproductive biology of the Iberian hare (Lepus granatensis) in southern Iberia. - Acta Theriologica 49(3): 319-335.
Flux, J.E.C. 1967: Reproduction and body weights of the hare (Lepus europaeus) in New Zealand. - New Zealand Journal of Science 10(2): 357-401.
Flux, J.E.C. 1970: Life history of the mountain hare (Lepus timidus scoticus) in north-east Scotland. - Journal of Zoology 161: 75-123.
Frylestam, B. 1980: Reproduction in the European hare in southern Sweden. - Holarctic Ecology 3: 74-80.
Gonçalves, H., Alves, P.C. & Rocha, A. 2002: Seasonal variation in the reproductive activity of the wild rabbit in a Mediterranean ecosystem. - Wildlife Research 29: 165-173.
Hackländer, K., Frisch, C., Klansek, E., Steineck, T. & Ruf, T. 2001: On fertility of female European hares (Lepus europaeus) in areas of different population densities. - Zeitschrift für Jagdwissenschaft 47: 100-110.
Hansen, K. 1992: Reproduction in European hare in a Danish farmland. - Acta Theriologica 37(1-2): 27-40.
Hewson, R. 1970: Variation in reproduction and shooting bags of mountain hares in north-east Scotland. - Journal of Applied Ecology 7: 243-252.
Ims, R. 1987: Differential reproductive success in a peak population of the grey-sided vole (Clethrionomys rufocanus). - Oikos 50: 103-113.
Iason, G. 1990: The effects of size, age and a cost of early breeding on reproduction in female mountain hares. - Holarctic Ecology 13(2): 81-89.
Jennings, N., Smith, R., Hackländer, S., Harris, S. & White, C.L. 2006: Variation in demography, condition and dietary quality of hares Lepus europaeus from high-density and low-density populations. - Wildlife Biology 12(2): 179-189.
Kauhala, K. & Soveri, T. 2001: An evaluation of methods for distinguishing juvenile and adult mountain hares (Lepus timidus). - Wildlife Biology 7(4): 295-300.
Kauhala, K., Helle, P. & Hiltunen, M. 2005: Population dynamics of mountain hare Lepus timidus populations in Finland. - Wildlife Biology 11(4): 299-307.
Lucio, A.J. 1996: Planes Técnicos de caza. - In: Colegio Oficial de Biólogos (Eds); Curso de Gestión y Ordenación Cinegética. Granada, Spain, 242 pp. (In Spanish).
Marboutin, E., Bray, Y., Peroux, R., Mauvy, B. & Lartiges, A. 2003: Population dynamics in European hares: breeding parameters and sustainable harvest rates. - Journal of Applied Ecology 40: 580-591.
Pejenaute, J.M. 1992: El clima de Navarra. - Eunate, Pamplona, Spain, 223 pp. (In Spanish).
Pepin, D. 1989: Variation in survival of brown hare Lepus europaeus leverets from different farmland areas in the Paris basin. - Journal of Applied Ecology 26: 13-23.
Raczynski, J. 1964: Studies on the European hare. V: Reproduction. - Acta Theriologica 9: 305-352.
Rodríguez, M., Palacios, J., Martín, J.A., Yanes, T., Martín, P., Sánchez, C., Navesco, M.A. & Muñoz, R. 1997: La Liebre. - Ed. Mundi Prensa, Madrid, 160 pp. (In Spanish).
Sadleir, R. 1969: The ecology of reproduction in wild and domestic mammals. - Methuen, London, 231 pp.
Simeunovic, B., Strbenc, M. & Bavdec, S. 2000: Position and histological structure of the testes in the brown hare (Lepus europaeus) during seasonal regression and recrudescence. - Anatomic Histology and Embryology 29: 73-82.
Stroh, G. 1931: Zwei sichere Altersmerkmale beim Hasen. - Berliner Tierärztliche Wochenschrift 12: 180-181. (In German).
Suchentrunk, F., Willing, R. & Hartl, G.B. 1991: On eye lens weights and other age criteria of brown hares (Lepus europaeus Pallas, 1778). - Zeitschrift für Säugetierkunde 56: 365-374.
Swihart, R. 1984: Body size, breeding season length, and life history tactics of lagomorphs. - Oikos 43: 282-290.
Vargas, J.M. 2002: Alerta cinegética. Reflexiones sobre el futuro de la caza en España. - Tero, Madrid, Spain, 399 pp. (In Spanish).
Summary:
This paper makes significant new progress in quantifying the efficiency of photochemical carbon monoxide production by particles. The authors use their efficiency spectra to drive a model of coupled (dissolved + particulate) CO photoproduction in the southeastern Beaufort Sea. The photochemical contribution to the carbon cycle in this region is likely to undergo significant change as ice cover decreases, so the model results are timely with respect to the warming climate. Measurements of particulate photochemical reaction efficiencies require more stringent optical assumptions and experimental controls than are necessary in the absence of particles, and while the authors address some of these issues, there are a few missing details which I have identified below. Also, unlike dissolved-phase photoreactions, particulate reactions may involve organic matter, inorganic minerals, or some combination of both. I would like to see stronger justification for the authors' assumptions in this respect. To date, however, there are few measurements of particulate photoreaction efficiencies, so this work represents an important contribution to the growing literature on this subject.
Comments refer to "page.line"
General comments
* 16163.9-25: This sentence is too long, and it's unnecessary to list 8 separate categories of CDOM photoprocesses in the introduction to a paper about particulate CO photoproduction—a brief mention of important processes, explanation of why they are important, and reference to one or two reviews should suffice. The rest of the Introduction is much more relevant to your topic.
* 16163.25-27: You first state that chlorophyll and lipid degradation are the only particulate photoprocesses receiving significant study to date, but then you go on to list a number of studies that, in fact, focus on other particulate photoprocesses. Please update your text so it's self-consistent.
* 16164.4-7: The word choice is quite similar to that of Zafiriou (2002)—please paraphrase further.
* 16164.12: Mayer et al. (2006) should be included here
* 16165.1-3: Very recently, Estapa et al. (2012) published AQY spectra for DOC photoproduction from POC.
* 16168.12-13, 19: Consider listing cutoff wavelengths instead of model numbers (or at least confirm that the digits in the model numbers are the wavelengths, for readers unfamiliar with these filters)
* 16168.18-21: Was the CO production rate constant throughout every irradiation regardless of cutoff wavelength or sample absorption coefficient? Did you measure absorption coefficients of samples after irradiation to determine photobleaching extent? If so, please include this information; if not, please address how you accounted for any dose-dependence effects, particularly for the longer irradiations.
* 16171.9-16 and 16168.5-6: You use the bb:Kd ratio at 300-400 nm to justify neglecting scattering in your light absorption rate calculation. However, this only shows that scattering through angles greater than 90° was negligible. Unless the sides of the irradiation cells were reflective, you need to also account for losses of side-scattered light through angles greater than ~8.5° (computed from the cell dimensions), which may more easily be affected by multiple scattering. Also, the bb:Kd ratio at visible wavelengths (which are still important to CO photoproduction from particles) is probably larger than at 300-400 nm. Please discuss these scattering losses, the wavelength dependence of your assumptions, and possible bias (if any) in your derived Φ spectra.
* 16168.5-6: If the irradiation cells' sides were transparent, were neighboring cells in the solar simulator shielded from one another? If not, can you estimate the extra irradiance received from the sides due to "leakage" of light from scattering samples in neighboring cells?
* 16171.Eq3 and 16174.4: At Sta. 697, at least (and perhaps others?) I suspect your irradiation samples were "optically thick" at UV wavelengths – that is, all or nearly all the irradiance was absorbed within the 0.114 m cell path. Were cells stirred during irradiation 1) to limit kinetic transport effects, and 2) in unfiltered samples, to keep particles in suspension?
* 16169.4 – What was the uncertainty of the CO measurement and the approximate, propagated uncertainty in spectrally-averaged Φ values?
* 16172.Eq 5, 7, 8: Can you condense these a bit? The arithmetic is not complex, and you defined Qa,λ earlier, so perhaps you only need to write the generic equation (e.g., Px,λ = Φx,λ × Qx,a,λ)
* 16173.20 and Section 3.5: Your stations span quite a large temperature range, and the temperature dependence of DOC photoproduction from POC appears to be larger than for many dissolved-phase photoreactions (Estapa et al., 2012). While a direct comparison between the temperature dependence of CO photoproduction from particles and from CDOM (e.g. Zhang et al., 2006) would have been even more illustrative, you could still use the temperature dependence of the CDOM reaction measured by Zhang et al. (2006) to estimate the effect of temperature on your modeled rates. This might cause non-negligible changes in your model results since in situ temperatures were in some cases quite different than your 4°C irradiation temperature.
* 16174.20-26: These lines are not well-justified. Any spectral features in Φp,λ as determined here are due solely to the measured spectral shape of particulate absorption and the assumed form (Eq. 3) of the spectral shapes of ΦCDOM,λ and Φt,λ. Only if you'd measured Φp,λ during a series of monochromatic irradiations would you be able to infer increased photoreactivity in pigment wavebands.
* 16175.20-27, 16177.3 and Figure 4b: I would suggest removing these lines and Figure 4b. Even within the grouped subsets, the variability of both Φp and aphy,412:ap,412 is so large that you cannot make a strong conclusion regarding reactivity of phytoplankton-derived organic matter. On the other hand, your statement that "more complex mechanisms" control the efficiency of particulate CO photoproduction is entirely reasonable, especially when you consider that living phytoplankton undoubtedly have evolved a variety of strategies to avoid photochemical degradation. A more useful comparison might be against Φnon-phy for residual, non-pigmented particles in shelf and offshore stations (as you derive later for non-mineral POM in estuarine samples).
* 16177.8-16179.17: This section (on derivation and analysis of Φpom) is based on the assumption that light absorption by inorganic particulate matter does not initiate or catalyze CO photoproduction. However, the CDOM literature (e.g., Gao and Zepp, 1998) suggests a role for iron in CO photoproduction, and iron-oxide minerals are the primary contributor to aM (e.g., Stramski et al., 2007). So I'm not sure it's justifiable at present to normalize the CO production rate solely to the organic component of the light absorption rate. Second, the "organic, non-pigment" component of spectral absorption (a0, Fig 6B) is very small, and somewhat uncertain – except for a dip at 375 nm (due to spectrophotometer lamp change?) it is noisy and flat except at wavelengths below about 325 nm, and the original anap data between 250-299 nm were extrapolated from longer wavelengths (16170.5-7). Since apom = aphy + a0, this implies that most of apom is due to aphy, which is associated with living organisms. The lack of clear trend vs. salinity (Fig 7A) underscores the uncertainty in the derived Φpom values. This section should be rewritten and shortened with more attention paid to the uncertainties in the derived a0 and apom spectra, and toward justifying the assumption that POM absorption drives all observed, particulate CO production. If uncertainties in apom are large, then this discussion will be mostly speculative in nature and Figures 6 and 7 may be unnecessary.
* 16185.1-2: Extrapolation to other regions is only feasible if you assume that the relative photoreactivity of particles and CDOM depends only on their relative absorption coefficients. As composition probably plays a large role, you should qualify this statement.
* 16185.11-16: These statements are not well-supported by the data, as discussed above.
* Figures, general: Please make sure all text on all figures is large enough to be legible at printed size.
Technical comments
* 16162.27: change "no" to "not"
* 16164.26: change "affect" to "impact"
* 16165.3-4: sentence beginning "we modeled" – change to present tense to match the first sentence of the paragraph
* 16166.26-27: I believe the URL for a reference cited should be listed in the bibliography but not in the text.
* 16167.12: change "gravity-pushed" to "gravity-filtered"
* 16173.6-7: change "equivalent to the solar insolation-normalized production of CO" to "of CO production"
* 16175.9: change "DCM's" to "At the DCM,"
* 16175.15-17: change to read, "…from the MS and CB corresponded to a narrow range of low acdom (…) and were scattered with respect to acdom."
* 16175.18: change "DCM's" to "at the DCM, the"
* 16176.24: change "displaying" to "display"
* 16178.13: change "surge" to "steeper increase"
* 16178.24: change "lied above" to "was larger than"
* 16178.25: change "interested" to "these"
* 16178.29: change "thereby disabling us to" to "and we could not"
* 16181.13-14: change "…increased for both CDOM and particles but the one for particles went up far more." to "…increased for CDOM and especially for particles."
References
Estapa, M. L., Mayer, L. M. and Boss, E.: Rate and apparent quantum yield of photodissolution of sedimentary organic matter, Limnology and Oceanography, 57(6), 1743–1756, 2012.
Gao, H. and Zepp, R. G.: Factors influencing photoreactions of dissolved organic matter in a coastal river of the southeastern United States, Environmental Science & Technology, 32(19), 2940–2946, 1998.
Mayer, L. M., Schick, L. L., Skorko, K. and Boss, E.: Photodissolution of particulate organic matter from sediments, Limnology and Oceanography, 51(2), 1064–1071, 2006.
Stramski, D., Babin, M. and Wozniak, S.: Variations in the optical properties of terrigenous mineral-rich particulate matter suspended in seawater, Limnology and Oceanography, 52(6), 2418–2433, 2007.
Zafiriou, O. C.: Sunburnt organic matter: Biogeochemistry of light-‐altered substrates, Limnology and Oceanography Bulletin, 11(4), 69–71, 2002.
Zhang, Y., Xie, H. and Chen, G.: Factors affecting the efficiency of carbon monoxide photoproduction in the St. Lawrence estuarine system (Canada), Environ. Sci. Technol., 40(24), 7771–7777, doi:10.1021/es0615268, 2006.
Scaling limits of anisotropic random growth models
Amanda Turner
Department of Mathematics and Statistics
Lancaster University
(Joint work with Fredrik Johansson Viklund and Alan Sola)
Overview
1. Generalised HL(0) clusters
2. Loewner chains driven by measures
3. A shape theorem for anisotropic HL(0) clusters
4. The evolution of harmonic measure on the cluster boundary
Let $D_0$ denote the exterior unit disk in the complex plane $\mathbb{C}$. Let $K_0 = \mathbb{C} \setminus D_0$ be the closed unit disk. Consider a simply connected set $D_1 \subset D_0$, such that $P = D_1^c \setminus K_0$ has diameter $d \in (0, 1]$ and $1 \in \overline{P}$. The set $P$ models an incoming particle, which is attached to the unit disk at 1. We use the unique conformal mapping $f_P : D_0 \to D_1$ fixing $\infty$ with $f_P'(\infty) > 0$ as a mathematical description of the particle.
Conformal mapping representation of single particle
Let $P_1, P_2, \ldots$ be a sequence of particles with $\text{diam}(P_j) = d_j$. Let $\theta_1, \theta_2, \ldots$ be a sequence of angles. Define rotated copies $f_{P_j}^{\theta_j}(z)$ of the maps $\{f_{P_j}\}$ so that $f_{P_j}^{\theta_j}(D_0) = e^{i\theta_j} f_{P_j}(D_0)$. Take $\Phi_0(z) = z$, and recursively define
$$\Phi_n(z) = \Phi_{n-1} \circ f_{P_n}^{\theta_n}(z), \quad n = 1, 2, \ldots.$$
This generates a sequence of conformal maps $\Phi_n : D_0 \to D_n = \mathbb{C} \setminus K_n$, where $K_{n-1} \subset K_n$ are growing compact sets, or clusters.
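The composition scheme above is straightforward to put on a computer. The sketch below builds $\Phi_n$ by composing rotated copies of a slit map; the explicit Joukowski-based slit map is an illustrative assumption (the slides do not fix a particular particle map). A useful check is that capacity is multiplicative along the composition, $\text{cap}(K_n) = \text{cap}(K_0 \cup P)^n$.

```python
import cmath, math, random

def slit_map(z, d):
    """Exterior-disk slit map f_P for P = (1, 1+d], via an assumed Joukowski
    construction: J(z) = z + 1/z sends D_0 onto C minus [-2, 2], a linear map
    stretches [-2, 2] onto [-2, 2+h] with h = d^2/(1+d), and we return with
    the exterior branch of J^{-1}."""
    h = d * d / (1 + d)
    u = ((4 + h) * (z + 1 / z) + 2 * h) / 4
    s = cmath.sqrt(u * u - 4)
    r = (u + s) / 2
    return r if abs(r) >= 1 else (u - s) / 2   # pick the root with modulus > 1

def phi_n(z, thetas, d):
    """Phi_n = f^{theta_1} o ... o f^{theta_n}: particle n is attached first."""
    for th in reversed(thetas):
        rot = cmath.exp(1j * th)
        z = rot * slit_map(z / rot, d)
    return z

# HL(0): angles i.i.d. uniform; capacity multiplies at every step
random.seed(1)
d = 0.1
thetas = [random.uniform(0, 2 * math.pi) for _ in range(20)]
growth = abs(phi_n(1e8 + 0j, thetas, d)) / 1e8   # ~ cap(K_20) for large |z|
```

Under this construction a single slit of length $d$ has capacity $1 + d^2/(4(1+d))$, so `growth` should be very close to that value raised to the 20th power.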
The slit model after a few arrivals with $d = 1$
Generalised Hastings-Levitov clusters
By choosing the sequences $\{\theta_j\}$ and $\{d_j\}$ in different ways, it is possible to describe a wide class of growth models.
In the Hastings-Levitov family of models $HL(\alpha)$, $\alpha \in [0, 2]$, the $\theta_j$ are chosen to be independent uniform random variables on the unit circle, which corresponds to the attachment point at the $n$th step being distributed according to harmonic measure at infinity for $K_{n-1}$. The particles are usually taken to be “slits”, with diameters $d_j = d/|\Phi'_{j-1}(e^{i\theta_j})|^{\alpha/2}$. Heuristically, the case $\alpha = 1$ corresponds to the Eden model (biological cell growth) and the case $\alpha = 2$ is a candidate for off-lattice DLA.
HL(0) cluster with 25000 particles for $d = 0.02$
Anisotropic Hastings-Levitov, AHL($\nu$), is a variant of the HL(0) model in which $\theta_1, \theta_2, \ldots$ are i.i.d. random variables on the unit circle with common law $\nu$ and $d_j = d$.
Models can be further generalised by allowing $P_1, P_2, \ldots$ to be chosen randomly from a class of suitable shapes, even with $d_1, d_2, \ldots$ i.i.d. random variables (independent of $\{\theta_j\}$) satisfying certain conditions, however our results are not sensitive to these changes.
Motivation for anisotropic models
The use of more general distributions for the angles is a way of introducing localization in the growth, such as can be observed in actual DLA.
Anisotropic versions of DLA can be used to model natural processes such as the formation of hoar frost and it is suggested that anisotropic Hastings-Levitov models may provide a description for the growth of bacterial colonies where the concentration of nutrients is directional.
Simulations suggest that anisotropic Hastings-Levitov clusters show less variation as $\alpha$ changes than isotropic models do.
A DLA cluster of size 100 million
Simulation due to Henry Kaufman (Yale)
A DLA cluster on the square lattice of size 4096
Simulation due to Vincent Beffara (ENS Lyon)
Loewner chains
A general way to describe growing (random or deterministic) compact sets is to use Loewner chains.
A decreasing Loewner chain is a family of conformal mappings
\[ f_t : D_0 \to \mathbb{C} \setminus K_t, \quad \infty \mapsto \infty, \quad f'_t(\infty) > 0, \]
onto the complements of a growing family of compact sets, called hulls, with
\[ K_{t_1} \subset K_{t_2} \quad \text{for} \quad t_1 < t_2. \]
We always take \( K_0 \) to be the closed unit disk. The capacity of each \( K_t \) is given by
\[ \text{cap}(K_t) = \lim_{z \to \infty} \frac{f_t(z)}{z}. \]
Loewner chains driven by probability measures
Let $\mathcal{P} = \mathcal{P}(\mathbb{T})$ denote the class of probability measures on the unit circle $\mathbb{T}$. Under some natural assumptions on the function $t \mapsto \text{cap}(K_t)$, such a chain can be parametrized in terms of families $\{\mu_t\}_{t \geq 0}$, $\mu_t \in \mathcal{P}(\mathbb{T})$.
More precisely, the conformal mappings $f_t$ satisfy the Loewner-Kufarev equation
$$\partial_t f_t(z) = z f'_t(z) \int_{\mathbb{T}} \frac{z + \zeta}{z - \zeta} d\mu_t(\zeta), \quad (1)$$
with initial condition $f_0(z) = z$. Conversely, for any family $\{\mu_t\}_{t \geq 0}$, $\mu_t \in \mathcal{P}(\mathbb{T})$, the solution to (1) exists and is a Loewner chain.
In the case of pure point masses
\[ \mu_t = \delta_{\xi(t)}, \]
where \( |\xi(t)| = 1 \), the Loewner-Kufarev equation reduces to the equation
\[ \partial_t f_t(z) = z f'_t(z) \frac{z + \xi(t)}{z - \xi(t)}, \]
which was originally introduced by Loewner in 1923. The function \( \xi(t) \) is usually called the driving function.
Slit mappings
The choice $\xi(t) = 1$ produces as solutions the basic slit mappings $f_{d(t)} : D_0 \to D_0 \setminus (1, 1 + d(t)]$, with slit lengths $d(t)$ given by the explicit formula
$$d(t) = 2e^t(1 + \sqrt{1 - e^{-t}}) - 2. \quad (2)$$
We can recover (the slit version of) the HL(0) mappings $\Phi_n$ by driving the Loewner equation with a non-constant point mass at
$$\xi(t) = \exp \left( i \sum_{j=1}^{n} \theta_j \chi_{[T_{j-1}, T_j]}(t) \right),$$
where the times $T_j$ relate to the slit lengths $d$ via the formula (2).
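A quick numerical sanity check of the slit-length formula (2). Note that the square root must enter with a plus sign (the minus branch gives negative lengths); the check uses the classical capacity identity $e^t = \text{cap}(K_0 \cup (1, 1+d]) = 1 + d^2/(4(1+d))$, stated here as an assumed known fact.

```python
import math

def slit_length(t):
    """Length d(t) of a slit of logarithmic capacity t (plus-sign branch)."""
    c = math.exp(t)
    return 2 * c * (1 + math.sqrt(1 - 1 / c)) - 2

def log_capacity(d):
    """log cap of the unit disk plus a radial slit of length d, using the
    assumed classical identity cap = 1 + d^2 / (4 (1 + d))."""
    return math.log(1 + d * d / (4 * (1 + d)))
```

For small $t$ this gives $d(t) \approx 2\sqrt{t}$, matching the familiar half-plane slit scaling.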
General particle mappings
For $k = 1, \ldots, n$, set
$$T_k = k \log \text{cap}(K_0 \cup P),$$
and let $\xi_k(t)$, $t \in [T_{k-1}, T_k)$, be the (rotated) driving function for the particle $P_k$. Set
$$\xi^n(t) = \exp \left( i \sum_{k=1}^n \chi_{[T_{k-1}, T_k)}(t) \xi_k(t) \right).$$
Then $\delta_{\xi^n(t)}$ is the measure that drives the evolution of the AHL clusters. That is, the mapping $\Phi_n$ is the solution to the Loewner-Kufarev equation
$$\partial_t f_t(z) = z f_t'(z) \int_{\mathbb{T}} \frac{z + \zeta}{z - \zeta} d\delta_{\xi^n(t)}(\zeta)$$
with $f_0(z) = z$, evaluated at time $t = T_n$.
Choosing absolutely continuous driving measures
\[ d\mu_t(\zeta) = h_t(\zeta)|d\zeta| \]
results in the growth of the clusters no longer being concentrated at a single point. In the simplest case \( d\mu_t(\zeta) = |d\zeta|/2\pi \), the Loewner-Kufarev equation reduces to
\[ \partial_t f_t(z) = zf'_t(z), \]
and we see that \( f_t(z) = e^t z \), so that \( K_t = e^t K_0 \). Absolutely continuous driving measures arise naturally as limits in connection with the anisotropic HL(0) clusters.
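When the driving measure is constant in time the chain satisfies $f_{s+t} = f_s \circ f_t$, so $f_t$ can be approximated by iterating a small-time step of the holomorphic vector field $z \mapsto zH(z)$, where $H$ is the Herglotz integral of $\mu$. A minimal sketch (the exponential-Euler step is a discretisation choice, not from the slides):

```python
import cmath, math

def loewner_flow(z, H, t, n=1000):
    """Approximate f_t(z) for a time-constant driving measure whose Herglotz
    integral is H, by composing n exponential-Euler steps of z -> z H(z)."""
    dt = t / n
    for _ in range(n):
        z = z * cmath.exp(dt * H(z))
    return z

uniform = lambda z: 1.0            # mean value: (1/2pi) Herglotz integral = 1
threefold = lambda z: 1 - z ** -3  # an m = 3 example treated later
```

For the uniform measure every step is exact and the scheme reproduces $f_t(z) = e^t z$; for any probability driving measure $H(\infty) = 1$, so the capacity still grows like $e^t$.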
Continuity properties of the Loewner equation
Our goal is to describe the macroscopic shape of the anisotropic HL clusters in the limit where the particle sizes converge to zero. In order to do this, we need the solutions to the Loewner-Kufarev equation (1) to be “close” at time $T$ if the driving measures are “close” in some suitable sense.
**Theorem**
Let $0 < T < \infty$. Let $\mu^n = \{\mu^n_t\}_{t \geq 0}$, $n = 1, 2, \ldots$, and $\mu = \{\mu_t\}_{t \geq 0}$ be families of measures in $\mathcal{P}$. Let $m$ denote Lebesgue measure on $[0, \infty)$, and suppose that the measures $\mu^n_t \times m$ converge weakly on $S = \mathbb{T} \times [0, T]$ to the measure $\mu_t \times m$ as $n \to \infty$.
Then the solutions $\{f^n_T\}$ to (1) corresponding to the sequence $\{\mu^n\}$ converge to $f_T$, the solution corresponding to $\mu$, uniformly on compact subsets of $D_0$.
The shape theorem
For fixed $T \in (0, \infty)$, set $T_n = n \log \text{cap}(K_0 \cup P)$ and let $n = n(d)$ be chosen so that $T_n \to T$ as $d \to 0$. Then by an appropriate version of the strong law of large numbers it can be shown that $\delta_{\xi^n(t)} \times m_{[0, T_n]}$ converges to $\nu \times m_{[0, T]}$ as $d \to 0$ with respect to the weak topology.
Therefore, if
$$\Phi_n = f_{P_1}^{\theta_1} \circ \cdots \circ f_{P_n}^{\theta_n},$$
then $\Phi_n$ converges to $\Phi$ uniformly on compacts almost surely as $d \to 0$, where $\Phi$ denotes the solution to the Loewner-Kufarev equation driven by the measures $\{\nu_t\}_{t \geq 0} = \{\nu\}_{t \geq 0}$ and evaluated at time $T$.
Angles chosen in an interval
For $\eta \in (0, 1]$, let $\theta_j$ be chosen uniformly in $[0, \eta]$. Then
$$d\nu(e^{2\pi ix}) = \frac{\chi_{[0,\eta]}(x)dx}{\eta}.$$
The clusters converge to the hulls of the Loewner chain described by the equation
$$\partial_t f_t(z) = zf_t'(z) \left(1 + \frac{2}{\pi\eta} \arctan \left[ \frac{e^{i\pi\eta} \sin(\pi\eta)}{z - e^{i\pi\eta} \cos(\pi\eta)} \right] \right).$$
The slit model on the half circle
Simulation of $AHL(\nu)$ and limiting Loewner hull, for $d = 0.02$ after 25000 arrivals, corresponding to $d\nu(e^{2\pi ix}) = 2\chi_{[0,1/2]}(x)dx$.
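The arctan formula can be checked numerically by integrating the Herglotz kernel over the arc. In the sketch below the prefactor is written $2/(\pi\eta)$, which is what evaluating the contour integral $\int_0^\eta \frac{z+e^{2\pi ix}}{z-e^{2\pi ix}}\,\frac{dx}{\eta}$ in closed form gives:

```python
import cmath, math

def herglotz_arc(z, eta, n=20000):
    """Midpoint-rule Herglotz integral for nu uniform on the arc
    {e^{2 pi i x} : 0 <= x <= eta}."""
    s = 0
    for k in range(n):
        zeta = cmath.exp(2j * math.pi * eta * (k + 0.5) / n)
        s += (z + zeta) / (z - zeta)
    return s / n

def herglotz_arc_closed(z, eta):
    """Closed form, with prefactor 2/(pi*eta) from the contour integral."""
    a = cmath.exp(1j * math.pi * eta)
    w = a * math.sin(math.pi * eta) / (z - a * math.cos(math.pi * eta))
    return 1 + (2 / (math.pi * eta)) * cmath.atan(w)
```

The two evaluations agree to high accuracy for points $z$ outside the closed unit disk.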
Angles chosen from a density with $m$-fold symmetry
For fixed $m \in \mathbb{N}$, choose $\theta_j$ distributed according to the density
$$d\nu(e^{2\pi ix}) = 2 \sin^2(m\pi x) dx.$$
The clusters converge to the hulls of the Loewner chain described by the equation
$$\partial_t f_t(z) = zf_t'(z) \left(1 - \frac{1}{z^m}\right).$$
The slit model for a measure with 3-fold symmetry
Simulation of $AHL(\nu)$ and limiting Loewner hull, for $d = 0.02$ after 25000 arrivals, corresponding to $d\nu(e^{2\pi ix}) = 2\sin^2(3\pi x)dx$.
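One can likewise verify numerically that the Herglotz integral of $d\nu = 2\sin^2(m\pi x)\,dx$ is exactly $1 - z^{-m}$; since the density is a trigonometric polynomial, the equispaced rule on the circle is spectrally accurate (a standard quadrature fact, assumed here):

```python
import cmath, math

def herglotz_mfold(z, m, n=4096):
    """Equispaced-rule Herglotz integral of d nu = 2 sin^2(m pi x) dx."""
    s = 0
    for k in range(n):
        x = (k + 0.5) / n
        zeta = cmath.exp(2j * math.pi * x)
        s += (z + zeta) / (z - zeta) * 2 * math.sin(m * math.pi * x) ** 2
    return s / n
```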
For the mapping associated to a particle $P$, write $g_P$ for the inverse mapping from $D_1 \to D_0$. There exists a unique $\gamma_P$ that restricts to a continuous map from the interval $(0, 1)$ to itself, such that
$$g_P(e^{2\pi ix}) = e^{2\pi i \gamma_P(x)}, \quad x \in (0, 1).$$
Set $\Gamma_n = g_{P_n} \circ \cdots \circ g_{P_1}$, where $g_{P_n} = (f_{P_n}^{\theta_n})^{-1}$, so that $\Gamma_n : D_n \to D_0$. The mappings $\Gamma_n$ induce a flow on the unit circle and this flow describes the evolution of the harmonic measure on the cluster boundary, as particles are added to the cluster.
The fluid limit of the process
Suppose that particles are added at rate $(\log \text{cap}(K_0 \cup P))^{-1}$. Let $X$ be a flow map corresponding to a lifting of $\Gamma$ onto the real line. Let $\phi$ be the flow map giving the solution to the deterministic ordinary differential equation
$$\dot{\phi}_{(s,t]}(x) = H[h_\nu](\phi_{(s,t]}(x)), \quad \phi_{(s,s]}(x) = x;$$
where
$$H[h_\nu](\xi) = \text{p.v.} \frac{1}{2\pi} \int_0^1 \cot(\pi(\xi - z)) h_\nu(z) dz$$
is the Hilbert transform of the measure $d\nu = h_\nu dx$.
Then, as $d \to 0$, $X$ converges to $\phi$ in probability (as flow maps).
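The drift $H[h_\nu]$ can be evaluated numerically. The following sketch is ours, not part of the original analysis: it approximates the principal value by a midpoint rule whose nodes straddle the singularity symmetrically (so the divergent parts of the cotangent cancel in pairs), and checks the result against the closed-form drift $-\frac{1}{2\pi}\sin(2\pi m x)$ quoted later in this section for the $m$-fold symmetric density.

```python
import math

def hilbert_transform(h, xi, n=400):
    """Principal-value Hilbert transform on the circle,
    H[h](xi) = p.v. (1/2pi) * int_0^1 cot(pi(xi - z)) h(z) dz,
    via a midpoint rule: the nodes z = xi - (j + 1/2)/n straddle the
    singularity z = xi symmetrically, so the cotangent blow-up cancels."""
    total = 0.0
    for j in range(n):
        u = (j + 0.5) / n                    # u = xi - z, never 0
        total += h(xi - u) / math.tan(math.pi * u)
    return total / (2 * math.pi * n)

# Sanity check for the m-fold symmetric density h(x) = 2 sin^2(m pi x),
# whose drift is b(x) = -(1/2pi) sin(2 pi m x).
h3 = lambda x: 2 * math.sin(3 * math.pi * x) ** 2
b3 = hilbert_transform(h3, 0.2)
```

With the transform in hand, the flow map $\phi$ can be obtained from any standard ODE integrator applied to $\dot\phi = H[h_\nu](\phi)$.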
Let $x_1, \ldots, x_n$ be a positively oriented set of points in $\mathbb{R}/\mathbb{Z}$ and set $x_0 = x_n$. Set $K_t = K_{[\log \text{cap}(K_0 \cup P)^{-1} t]}$. For $k = 1, \ldots, n$, write $\omega^k_t$ for the harmonic measure in $K_t$ of the boundary segment of all fingers in $K_t$ attached between $x_{k-1}$ and $x_k$. Then, in the limit $d \to 0$, $(\omega^1_t, \ldots, \omega^n_t)_{t \geq 0}$ converges weakly in $D([0, \infty), [0, 1]^n)$ to $(\phi_{(0, t]}(x_1) - \phi_{(0, t]}(x_0), \ldots, \phi_{(0, t]}(x_n) - \phi_{(0, t]}(x_{n-1}))_{t \geq 0}$.
A geometric consequence of this result is that the number of infinite fingers of the cluster converges to the number of stable equilibria of the ordinary differential equation $\dot{x}_t = b(x_t)$, and the positions at which these fingers are rooted to the unit disk converge to the unstable equilibria of the ODE.
Stochastic fluctuations about the limit
For fixed \((s, x) \in [0, \infty) \times \mathbb{R}\), define
\[ Z_t^P = (\log \text{cap}(K_0 \cup P)\rho(P))^{1/2}(X_{(s,t]}(x) - \phi_{(s,t]}(x)) \]
and let \(Z_t\) be the solution to the linear stochastic differential equation
\[ dZ_t = \sqrt{h_\nu(\phi_{(s,t]}(x))}dB_t + b'(\phi_{(s,t]}(x))Z_t dt, \quad t \geq s, \]
starting from \(Z_s = 0\), where \(B_t\) is a standard Brownian motion.
Then, as \(d \to 0\), the processes \(Z_t^P \to Z_t\) in distribution.
Note that if \(\phi_{(s,t]}(x)\) stays off the support of \(h_\nu\), then \(Z_t = 0\) for all \(t \geq s\). Also observe that in the case where \(\nu\) is the uniform measure on the unit circle,
\[(\log \text{cap}(K_0 \cup P)\rho(P))^{1/2}(X_{(s,t]}(x) - x)_{t \geq s}\] converges to standard Brownian motion, starting from 0 at time \(s\).
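As a sanity check on this scaling, the linear SDE can be simulated by Euler–Maruyama. The sketch below is illustrative (the function names are ours): it takes the uniform-measure case $h_\nu \equiv 1$, $b \equiv 0$, in which $Z$ is simply a standard Brownian motion, so the sample variance of $Z_1$ should be close to $1$.

```python
import math, random

def fluctuation_path(h, b_prime, phi, s, t, dt, rng):
    """One Euler-Maruyama path of the linear SDE
    dZ = sqrt(h(phi_t)) dB + b'(phi_t) Z dt,  Z_s = 0."""
    z = 0.0
    steps = int(round((t - s) / dt))
    for i in range(steps):
        p = phi(s + i * dt)
        z += b_prime(p) * z * dt + math.sqrt(h(p) * dt) * rng.gauss(0.0, 1.0)
    return z

# Uniform measure: h = 1 and b = 0, so Z is a standard Brownian motion
# and the sample variance of Z_1 over many paths should be close to 1.
rng = random.Random(42)
samples = [fluctuation_path(lambda x: 1.0, lambda x: 0.0, lambda u: 0.0,
                            0.0, 1.0, 0.01, rng) for _ in range(20000)]
var = sum(z * z for z in samples) / len(samples)
```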
Angles chosen in an interval
For
\[ d\nu(e^{2\pi ix}) = \frac{\chi_{[0,\eta]}(x)dx}{\eta}, \]
the boundary flow converges to the solution to the ordinary differential equation
\[ \dot{\phi}_{(s,t]}(x) = \frac{1}{2\pi^2\eta} \log \left| \frac{\sin(\pi \phi_{(s,t]}(x))}{\sin(\pi (\phi_{(s,t]}(x) - \eta))} \right| \]
with \( \phi_{(s,s]}(x) = x \). In the special case \( \eta = 1/2 \), we obtain the equation
\[ \dot{\phi}_{(s,t]}(x) = \frac{1}{\pi^2} \log |\tan(\pi \phi_{(s,t]}(x))|. \]
The slit model on the half circle
Simulation of evolution of harmonic measure on the boundary of AHL($\nu$) and limiting ODE, for $d = 0.02$ after 25000 arrivals, corresponding to $d\nu(e^{2\pi ix}) = 2\chi_{[0,1/2]}(x)dx$.
Note the absence of random fluctuations in the region $(1/2, 1)$.
Angles chosen from a density with $m$-fold symmetry
For fixed $m \in \mathbb{N}$, and
$$d\nu(e^{2\pi ix}) = 2\sin^2(m\pi x)dx$$
the boundary flow converges to the solution to the ordinary differential equation
$$\dot{\phi}_{(s,t]}(x) = -\frac{1}{2\pi} \sin(2\pi m\phi_{(s,t]}(x)), \quad \phi_{(s,s]}(x) = x.$$
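The geometric picture described earlier (fingers rooted at stable equilibria) can be checked numerically: integrating this ODE from many starting points should produce exactly $m$ attracting values $k/m$. A minimal Euler sketch (ours, not from the paper) for $m = 3$:

```python
import math

def limit_flow(x0, m, t=30.0, dt=0.01):
    """Euler integration of the limiting boundary flow
    phi' = -(1/2pi) sin(2 pi m phi) on the circle R/Z."""
    x = x0
    for _ in range(int(t / dt)):
        x -= dt * math.sin(2 * math.pi * m * x) / (2 * math.pi)
    return x % 1.0

# Every starting point should be attracted to one of the m stable
# equilibria k/m, matching the m infinite fingers of the limit cluster.
basins = {round(limit_flow(j / 60 + 0.005, 3) * 6) / 6 % 1.0 for j in range(60)}
```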
The slit model for a measure with 3-fold symmetry
Simulation of evolution of harmonic measure on the boundary of AHL($\nu$) and limiting ODE, for $d = 0.02$ after 25000 arrivals, corresponding to $d\nu(e^{2\pi ix}) = 2\sin^2(3\pi x)dx$.
Identification of weeping crabapple cultivars by microsatellite DNA markers and morphological traits
Leena Lindén\textsuperscript{a,*}, Mattias Iwarsson\textsuperscript{b}
\textsuperscript{a} Department of Agricultural Sciences, Box 27, FI-00014 University of Helsinki, Finland
\textsuperscript{b} Swedish Biodiversity Centre, Box 7007, SE-750 07 Uppsala, Sweden
\textbf{Article history:}
Received 25 June 2014
Received in revised form 12 September 2014
Accepted 15 September 2014
\textbf{Keywords:}
Cultivar identification
Fingerprinting
Landscape plant
Malus
Morphological trait
SSR
\textbf{Abstract}
Ornamental crabapples are small landscape trees with charming flowers, colourful fruits and many growth forms. The first weeping crabapple cultivars, \textit{Malus prunifolia ‘Pendula’} and ‘Pendula Nova’, were described in Sweden around 150 years ago. Our study was aimed at identification and characterization of weeping crabapple clones by microsatellite markers and morphological traits. We analysed 13 Swedish and Finnish trees and 8 reference accessions including \textit{M. prunifolia ‘Pendula’} and three international cultivars belonging to its progeny. The 21 trees represented 13 distinct genotypes. Five local trees were identified as the historical ‘Pendula’, assumed to be extinct from the nursery trade. On the basis of morphological traits and historical records, two old Swedish trees were concluded to represent ‘Pendula Nova’. The authenticity of the trees could not be confirmed by DNA markers because no known plant of the old cultivar was found in botanical collections. The Finnish clone ‘Hyvingiensis’ proved unique among the crabapple accessions studied. ‘Hyvingiensis’ was probably raised from seed at the Finnish State Railways Nurseries about 110 years ago. Several mislabellings were revealed among both the local and the reference samples. A novel identification key was created to aid discrimination between the clones by their morphological traits. A combination of DNA fingerprints, comparison of morphological traits and tracing information in relevant archives and old garden literature proved useful for solving the origin and identity of weeping crabapples. The results contribute to conservation of garden plants and stabilization of horticultural nomenclature.
© 2014 Elsevier B.V. All rights reserved.
\section*{1. Introduction}
Ornamental crabapples (\textit{Malus spp.}) are small trees and shrubs valued for their charming flowers, handsome summer and autumn foliage, and colourful fruits that often remain on the tree long after leaf fall. Crabapples provide a wide range of growth forms as well as flower and fruiting characteristics. Many crabapples are more winter hardy than the common apple (\textit{Malus domestica}) and the fruits of several cultivars are suitable for economic uses, facts that have certainly raised the interest in growing crabapples at high latitudes.
Crabapples are distinguished from apple trees grown for fruit production by fruit size: members of the genus \textit{Malus} that have fruits 2 in. (ca. 5 cm) or less in diameter are considered as crabapples (Wyman, 1955). Some wild \textit{Malus} species and varieties are grown as ornamental crabapples, but most of the cultivated forms are hybrids between different \textit{Malus} taxa. In the twentieth century, breeding of ornamental crabapples was carried out mainly in Canada and in the USA, where approximately 400–600 different forms and cultivars are grown (Dirr, 2009).
A special group of ornamental crabapples is those with pendulous branches, the weeping or hanging cultivars. The first two weeping \textit{Malus} varieties described in horticultural literature arose in 1860 and 1873 at the experimental station of the Royal Swedish Academy of Agriculture in Stockholm (Lindgren, 1878a, 1878b). The clones were named as \textit{Pyrus prunifolia pendula} and \textit{P. prunifolia pendula nova}. The earlier of them, \textit{Malus prunifolia ‘Pendula’} according to current nomenclature, proved to be very winter hardy (up to 66°N) and was soon introduced to tree nurseries in Europe and North America (Lindgren, 1878a, 1878b).
The classic weeping crabapple cultivars, ‘Excellenz Thiel’ (introduced by the German nursery L. Späth in 1909), ‘Oekonomierat Echtermeyer’ (introduced by Späth in 1914) and ‘Red Jade’ (introduced by the Brooklyn Botanic Garden in 1953), belong to the first- and second-generation progeny of \textit{M. prunifolia ‘Pendula’}. ‘Excellenz Thiel’ is a seedling of \textit{M. p. ‘Pendula’}, with \textit{M. floribunda} as the putative pollen parent (Späth, 1909), while ‘Red Jade’ is an open-pollinated seedling of ‘Excellenz Thiel’ (Jefferson, 1970). The pink-flowering ‘Oekonomierat Echtermeyer’ is described as a hybrid between ‘Excellenz Thiel’ and *Malus pumila var. niedzwetzkiana* (Jefferson, 1970). *Malus ‘Elise Rathke’* (first recorded in 1884 and in our opinion correctly named as *M. domestica* ‘Pendula’) is another old weeper that is not a crab but rather a common apple on grounds of fruit diameter, 5.9 cm on average, according to the database of UK National Fruit Collections (http://www.nationalfruitcollection.org.uk).
In the second half of the twentieth century, some 50 new weeping crabapple cultivars were released (Fiala, 1994). Nearly all the new weepers were bred in the USA and only a few of them are marketed in Europe, to judge from the Plant Finder of the Royal Horticultural Society (http://www.rhs.org.uk/plants/) and the name list of the European Nurserystock Association (http://www.internationalplantnames.com/).
The two oldest weepers, *M. prunifolia* ‘Pendula’ and ‘Pendula Nova’, have not been available in Sweden for a long time. Andréasson and Wedelsbäck Bladh (2011) noted both cultivars in the catalogues of two Swedish nurseries between 1864 and 1900, but not thereafter. From an historical inventory list (Anon., 1879), we know that both were imported to Finland before the end of the nineteenth century. *M. prunifolia* ‘Pendula’ was cited in Finnish garden literature until the 1940s (e.g. Sorma, 1932; Schalin, 1935) and marketed by local nurseries until the early 1960s (e.g. Harviala Plantskolor Och Växthus, 1939; Olsson, 1962). Most Finnish and a few Swedish tree nurseries currently offer a white-flowering crabapple with strongly weeping branches, named as *Malus ‘Hyvingiensis’*. The cultivar was first mentioned in garden literature in 1953 (Lehtonen and Jokela, 1953) and it was originally spread by the Finnish State Railways Nurseries (active 1873–1990 at Hyvinkää), but the origin and uniqueness of ‘Hyvingiensis’ remain uncertain.
The aim of our research was to identify and characterize weeping crabapple clones cultivated in old parks in Sweden and Finland. Our specific objectives were (1) to find out if extant plants of the Swedish cultivars, ‘Pendula’ and ‘Pendula Nova’, can still be found, (2) to determine the cultivar identity of the Finnish clone ‘Hyvingiensis’ and (3) to confirm cultivar names of a few weeping crabapples grown in Swedish and Finnish botanical collections. For cultivar identification, we used microsatellite (SSR)-based DNA markers and morphological traits.
### 2. Materials and methods
#### 2.1. Plant material
The 13 local accessions studied (Table 1) comprised un-named crabapple trees recorded in connection with park inventories or through observations of plant experts, as well as a few doubtfully named trees in botanical gardens or arboreta. For ‘Hyvingiensis’, a roughly 65-year-old tree of documented origin at the Finnish State Railways Nurseries was used as the authentic accession, and a tree propagated by Ahonen’s Nursery represented plant material currently sold as ‘Hyvingiensis’. Reference samples of the weeping cultivars ‘Elise Rathke’, ‘Excellenz Thiel’, ‘Oekonomierat Echtermeyer’, *M. prunifolia* ‘Pendula’ and ‘Red Jade’ were obtained from botanical gardens in Europe and North America (Table 1). *M. prunifolia* ‘Pendula Nova’ could not be traced in any of the 25 plant collections.
| Accession name | Location | Age and origin |
|----------------|----------|---------------|
| 1 *Malus* sp. | Sunnersta Manor, Uppsala, Sweden | Ca. 100-year-old tree in the park of the Manor House, origin unknown |
| 2 *Malus* sp. | Ulleråker Hospital, Uppsala, Sweden | Ca. 130-year-old tree in the hospital park, origin unknown |
| 3 *Malus* sp. | Mattias Iwarsson, Uppsala, Sweden | A young graft on A2 from a ca. 50-year-old tree of unknown origin in the Botanical Garden, Uppsala University, Sweden |
| 4 *Malus* sp. | Mattias Iwarsson, Uppsala, Sweden | A young graft on A2 from a ca. 130-year-old tree of unknown origin at the Park Maria Kronbergs Minne in Falun, Sweden |
| 5 *Malus* sp. | Lindesberg Church Square, Sweden | Ca. 100-year-old tree at the square close to the church, origin unknown |
| 6 *Malus* ‘Hyvingiensis’ | Hyvinkää Railway Park, Finland | Planted in 1953, originating from the Finnish State Railways Nurseries |
| 7 *Malus* ‘Hyvingiensis’<sup>a</sup> | Ahonen’s Nursery, Karttula, Finland | A young tree grafted on seedling rootstock, scion originating from a mother plant of Ahonen’s own |
| 8 *Malus* sp. | Kauppilanaukio Park, Hyvinkää, Finland | Planted in 1997, purchased from an unknown Finnish nursery |
| 9 *Malus* sp. | Esplanadi Park, Helsinki, Finland | Planted in 1998, purchased from Sundberg Nurseries, Lohja, Finland |
| 10 *Malus* sp. | Annanpuisto Park, Tuusula, Finland | Planted in 1998, purchased from an unknown Finnish nursery |
| 11 *Malus* hybr.1952–3893 | Gothenburg Botanical Garden, Sweden | Received in 1952 from Magnus Johnson’s Nursery, Södertälje, Sweden, a putative hybrid between *M. × zumi* ‘Calocarpa’ and *M.* ‘Oekonomierat Echtermeyer’ |
| 12 *Malus* “Elise Rathke”<sup>a</sup> | Botanic Garden of the University of Turku, Finland | Planted in 1990, purchased from Viksten’s Nursery, Tammela, Finland |
| 13 *Malus* “Red Jade”<sup>a</sup> | Kellokoski-Ohkola Arboretum, Tuusula, Finland | Planted in 1998, purchased from Lalla’s Nursery, Järvenpää, Finland |
Reference cultivars with accession numbers
| Accession name | Location | Age and origin |
|----------------|----------|---------------|
| 14 *M. prunifolia* ‘Pendula’ 1982.0273<sup>A</sup> | Sir Harold Hillier Gardens, UK | Age and origin unknown |
| 15 *M. prunifolia* ‘Pendula’<sup>b</sup> | Dubrava Arboretum, Lithuania | Planted in 2001 or 2002, origin unknown |
| 16 *Malus* ‘Excellenz Thiel’ H901501 | RHS Garden, Hyde Hall, UK | Planted between 1955 and 1993, origin unknown |
| 17 *Malus* ‘Oekonomierat Echtermeyer’ 9069-1937A | Montreal Botanical Garden, Canada | Received from Herm. A. Hesse Nursery, Weener-Ems, Germany |
| 18 *Malus* ‘Red Jade’ 532-75 | Arnold Arboretum, USA | Received from Ruth Birkhoff, Cambridge, USA |
| 19 *Malus* ‘Elise Rathke’ 56-99 | Arnold Arboretum, USA | Received from Jac. Jurissen & Sons Nursery, Naarden, the Netherlands |
| 20 *Malus* ‘Elise Rathke’ 1977.3941<sup>Z</sup> | Sir Harold Hillier Gardens, UK | Age and origin unknown |
| 21 *Malus baccata* ‘Pendula’<sup>b</sup> | Botanical Garden of Tartu University, Estonia | Age and origin unknown |
<sup>a</sup> Cultivar names to be confirmed are enclosed in double quotation marks to set them apart from the accessions of the reference panel.
<sup>b</sup> Accession number not available.
Instead, we received a sixth reference sample of *M. baccata* ‘Pendula’ from the Botanical Garden of Tartu University.
Young leaf samples of the Swedish and Finnish trees were collected in 2009 and 2012. Reference samples were acquired in 2009, 2010 and 2012. All samples were shipped to the Department of Agricultural Sciences, University of Helsinki, sealed in plastic bags and stored at $-20^\circ$C until DNA analysis, performed in 2012.
### 2.2. DNA isolation and SSR analysis
For DNA extraction, 1 cm$^2$ of leaf tissue per genotype was ground to a fine powder with a Retsch Mixer Mill MM400 (Retsch GmbH, Germany). Genomic DNA was extracted using E.Z.N.A.® plant DNA kit (Omega Bio–Tek Inc., USA) following the supplier’s instructions. The DNA isolates were kept at $-20^\circ$C before polymerase chain reactions (PCR).
Seven SSR primer pairs, originally developed for the domestic apple (Gianfranceschi et al., 1998; Liebhard et al., 2002), were used: CH01d03, CH01h02, CH02c06, CH02c09, CH02c11, CH02d08 and COL. All chosen primers have been utilized for varietal identification of apple genotypes (e.g. Laurens et al., 2004; Guarino et al., 2006; Garkava-Gustavsson et al., 2013). The forward primers were fluorescently labelled with HEX®, FAM® or TET® dyes.
For PCR amplifications, 2 µl of genomic DNA was used in an 8 µl reaction containing 10× reaction buffer (KCl), 1.5 mM MgCl$_2$, 2.0 mM dNTP mixture, 0.3/0.6 U Taq DNA polymerase (Finnzymes, Thermo Fisher Scientific, USA) and 0.3 µM of both primers. The PCR reactions were performed using a Mastercycler gradient (Eppendorf) with the following thermal program: (1) 95 °C for 2 min, (2) 28 cycles of 94 °C for 20 s, 55 or 60 °C (depending on the primer) for 40 s and 65 °C for 45 s and (3) 65 °C for 5 min.
The PCR products were separated in the Sequencing Laboratory of the Institute of Biotechnology, University of Helsinki, by a capillary electrophoresis system (3730 DNA Analyzer; Applied Biosystems Inc., USA) and fragment sizes were determined by Peak Scanner Software 1.0 (Applied Biosystems, Inc., USA). In cases where only one peak was visible, its size was recorded twice since the locus was considered homozygous.
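The homozygote-scoring rule above amounts to a one-line convention. The helper below is an illustrative sketch (the function name and data layout are ours, not part of the laboratory pipeline):

```python
def score_locus(peaks):
    """Score a diploid SSR locus from detected fragment sizes (bp):
    a single visible peak is recorded twice, i.e. as a homozygote."""
    sizes = sorted(peaks)
    if len(sizes) == 1:
        sizes = sizes * 2
    return tuple(sizes)
```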
### 2.3. Morphological traits
The local accessions were examined for their morphological characteristics, adopted from the descriptor list for ornamental crabapples published by the International Union for the Protection of New Varieties of Plants (UPOV, 2003). Measurements were taken on 3–10 well-developed leaves, flowers and fruits. To distinguish between the accessions in a key, observations on a set of additional (non-UPOV) characters were used. No morphological traits were recorded for tree no. 7.
The plants were documented by taking digital images of both the general appearance and morphological details. Dried voucher specimens of the Swedish trees are preserved at the Museum of Evolution, Uppsala University, and those of the Finnish trees are kept at the Herbarium of the Finnish Museum of Natural History, University of Helsinki.
### 2.4. Data analyses
The number of alleles per locus, the effective number of alleles and the observed and expected heterozygosity were computed from the allelic profiles. For calculation of genetic distances between the accessions, the allele sizes were transformed into binary scores because two of the accessions appeared to be triploid. The measures of genetic variation and the genetic distances were calculated with GenAIEx 6.5 (Peakall and Smouse, 2012). For visualization of the similarity relationships, a dendrogram was constructed by the unweighted pair-group method with arithmetic average (UPGMA) in the MEGA software version 5.10 (Tamura et al., 2011).
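The binary transformation applied before distance calculation can be sketched as follows. The allele data are taken from Table 3 (two loci only, for brevity); the helper names and the simple mismatch distance are illustrative — the actual analyses were run in GenAlEx and MEGA.

```python
def binary_profile(profile, allele_keys):
    """0/1 presence scores for a multilocus profile, so that diploid
    and triploid accessions can be compared on an equal footing."""
    present = {(locus, a) for locus, alleles in profile.items() for a in alleles}
    return [1 if key in present else 0 for key in allele_keys]

def mismatch_distance(p, q):
    """Proportion of mismatching binary scores between two profiles."""
    return sum(a != b for a, b in zip(p, q)) / len(p)

# Two accessions from Table 3, restricted to two loci:
acc1 = {"CH01d03": {141, 150}, "COL": {215, 231}}            # diploid no. 1
acc6 = {"CH01d03": {137, 141, 150}, "COL": {229, 231, 233}}  # triploid no. 6
keys = sorted({(l, a) for p in (acc1, acc6) for l, al in p.items() for a in al})
d = mismatch_distance(binary_profile(acc1, keys), binary_profile(acc6, keys))
```

The resulting distance matrix can then be fed to any average-linkage (UPGMA) clustering routine.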
The cultivar identity of the 13 local accessions was investigated by comparing their DNA profiles with those of the 8 reference samples. The assumed parent–offspring relationships were checked by comparing the multilocus allelic profiles of the putative relatives. The morphological traits were used to confirm the cultivar identifications indicated by allelic data. The phenotypic differences between the local accessions were summarized in an identification key.
### 3. Results and discussion
#### 3.1. SSR polymorphism
The primer pairs CH01h02 and CH02c11 each amplified two different loci, one of which was monomorphic in all accessions. The two monomorphic loci were discarded from the data analyses. The seven polymorphic loci amplified 62 alleles in the entire set of accessions studied. The two triploid accessions displayed four unique alleles not found in the diploid genotypes. In the 19 diploid genotypes, the average number of alleles per locus was 8.3 (Table 2), somewhat less than the average allele number (9.3) in the 85 traditional Swedish and Finnish domestic apple cultivars studied by Garkava-Gustavsson et al. (2013). Other studies on domestic apple germplasm collections revealed averages of 9.2 alleles (Guarino et al., 2006) and 9.7 alleles (Gasi et al., 2010) per locus. The numbers of both accessions and SSR loci were smallest in our study, which may explain the differences.
In the present set of accessions, the expected heterozygosity varied from 0.72 to 0.85 per locus (Table 2) reflecting the high frequency of cross-pollination and self-incompatibility in the genus *Malus*. The mean expected heterozygosity values reported for traditional domestic apple cultivars range from 0.74 (Garkava-Gustavsson et al., 2013) to 0.81 (Guarino et al., 2006).
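The per-locus parameters reported in Table 2 follow directly from allele frequencies. A minimal sketch on toy genotypes, assuming the standard definitions (effective number of alleles $N_e = 1/\sum p_i^2$, expected heterozygosity $H_e = 1 - \sum p_i^2$, without small-sample correction):

```python
from collections import Counter

def locus_diversity(genotypes):
    """Diversity at one locus from diploid genotypes (allele pairs):
    number of alleles, effective number Ne = 1 / sum(p_i^2),
    observed heterozygosity Ho, expected heterozygosity He = 1 - sum(p_i^2)."""
    counts = Counter(a for g in genotypes for a in g)
    n = sum(counts.values())
    ssq = sum((c / n) ** 2 for c in counts.values())
    ho = sum(1 for g in genotypes if g[0] != g[1]) / len(genotypes)
    return len(counts), 1.0 / ssq, ho, 1.0 - ssq
```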
#### 3.2. Morphological traits
All 13 local accessions had single, white flowers often with a shade of pink on unopened flower buds and on the outer side of petals. ‘Hyvingiensis’ (no. 6) and ‘Red Jade’ (no. 13) had large flowers with an average diameter more than 6 cm, while the flowers of the other accessions were of medium size, approx. 5 cm in diameter.
The leaves of the local accessions were green and unlobed. The leaf margin was crenate to serrate in all accessions except for ‘Hyvingiensis’ that had doubly serrate leaf margins. Accessions no. 11 and 13 had small fruits with a deciduous calyx. The other trees had medium-sized fruits with a persistent calyx, apart from tree no. 8, some fruits of which shed their calyx at an early stage. For all local accessions, the predominant colour of the mature fruit skin was yellow with a varying amount of orange or red blush on the exposed side.
The morphological characteristics distinguishing the local accessions were summarized as an identification key (Fig. 1). Four accessions were relatively easy to discriminate: in accession no. 2, the fruit stalk was remarkably short, no. 6 was distinguished by its large, pure white flowers and thick leaves with doubly serrate margins, and nos. 11 and 13 had small berry-like fruit. The remaining accessions were distinguishable only by the minor differences in their vegetative parts (Fig. 1).
#### 3.3. Cultivar identification
The 13 local accessions represented six distinct genotypes. One Swedish (no. 5) and four Finnish trees (nos. 8, 9, 10 and 12)
1. Fruit with deciduous calyx…………………Malus hybrid, no. 11 and Malus “Red Jade”, no. 13
Fruit with persistent calyx …………………………………………………………………………………2
2. Fruit stalk short, 2–16 mm …………………………………………………………………………………Malus sp., no. 2
Fruit stalk long 16–50 mm, often with two glands present at the base ……………………………3
3. Leaf margin doubly serrate, flower buds white …………………Malus ‘Hyvingiensis’, no. 6
Leaf margin crenate to serrate, flower buds pinkish …………………………………………………4
4. Young shoots and buds grey or brown…………Malus prunifolia ‘Pendula’, nos. 5, 8, 9, 10 and 12
Young shoots and buds brownish-red to red ……………………………………………………………5
5. Blade length of the 4th to 6th leaf < 6 cm……………………………………………………………………Malus sp., no. 1
Blade length of the 4th to 6th leaf > 6 cm ………………………………………………………………Malus sp., no. 3
Fig. 1. Identification key to the weeping ornamental crabapples studied for their morphological traits.
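Read as a decision procedure, the key of Fig. 1 can be encoded directly. The sketch below is illustrative: the trait names, encodings and the 16 mm cut-off for "short" stalks are our choices, not part of the published key.

```python
def identify(traits):
    """Walk the dichotomous key of Fig. 1 on a dict of observed traits."""
    if traits["calyx"] == "deciduous":
        return "Malus hybrid no. 11 or Malus 'Red Jade' no. 13"
    if traits["fruit_stalk_mm"] < 16:
        return "Malus sp. no. 2"
    if traits["leaf_margin"] == "doubly serrate":
        return "Malus 'Hyvingiensis' no. 6"
    if traits["shoot_colour"] in ("grey", "brown"):
        return "Malus prunifolia 'Pendula' nos. 5, 8, 9, 10 and 12"
    return "Malus sp. no. 1" if traits["blade_cm"] < 6 else "Malus sp. no. 3"
```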
Table 2
Observed variation at 7 microsatellite loci tested in 19 diploid weeping crabapple accessions.
| SSR locus | Size range (in bp) | Number of alleles | Effective number of alleles | Observed heterozygosity | Expected heterozygosity |
|-----------|--------------------|-------------------|----------------------------|-------------------------|------------------------|
| CH01d03 | 127–189 | 10 | 5.65 | 1.00 | 0.82 |
| CH01h02 | 227–273 | 10 | 6.86 | 1.00 | 0.85 |
| CH02c06 | 216–252 | 7 | 6.00 | 0.92 | 0.83 |
| CH02c09 | 231–259 | 7 | 4.97 | 0.75 | 0.80 |
| CH02c11 | 201–243 | 9 | 5.24 | 1.00 | 0.81 |
| CH02d08 | 204–257 | 9 | 5.88 | 0.92 | 0.83 |
| COL | 215–241 | 6 | 3.51 | 0.50 | 0.72 |
| Mean | | 8.3 | 5.44 | 0.87 | 0.81 |
Accessions showing an identical or triploid allelic profile were excluded from calculation of the diversity parameters.
displayed an SSR profile identical with the reference cultivar no. 14, M. prunifolia ‘Pendula’ from the Sir Harold Hillier Gardens. The morphological traits of the five local accessions were similar, with fruit size and colour matching those of M. prunifolia ‘Pendula’ as described by Lindgren (1878a, 1878b). However, our reference panel included another accession (no. 15) named as M. prunifolia ‘Pendula’, but its allele profile differed from that of the previous accessions (Table 3). Because fruits of accession no. 15 from the Dubrava Arboretum proved to be dissimilar (small and always without calyx) from Lindgren’s description of M. prunifolia ‘Pendula’, the Dubrava tree was regarded as mislabelled.
Unfortunately, the origin of M. prunifolia ‘Pendula’ from the Sir Harold Hillier Gardens is unknown. Nevertheless, the authenticity of the six ‘Pendula’ trees is supported by an additional Swedish M. prunifolia ‘Pendula’ accession that has grown at the Bergius Botanic Garden, Stockholm since 1925 at the latest, and was discovered late in our study. The morphological characteristics of this accession were consistent with those of the five previous ‘Pendula’ trees. The Bergius Botanic Garden is located next to the grounds once occupied by the experimental station of the Royal Swedish Academy of Agriculture (Lange, 2000), where M. prunifolia ‘Pendula’ originated.
While looking for M. prunifolia ‘Pendula’ in local plantings, we quite unexpectedly found four relatively young Finnish trees representing this 150-year-old cultivar, which has not been listed in domestic nursery catalogues for 50 years. Practically all apple trees planted in Finland are produced by local nurseries, as the cold climate restricts cultivation of most international cultivars. It seems probable that some Finnish nurseries hold mother stock of M. prunifolia ‘Pendula’, mislabelled and sold most likely as Malus ‘Hyvingiensis’, the only white-flowering weeper common in present-day Finnish nursery trade.
The two ‘Hyvingiensis’ accessions (nos. 6 and 7) had identical DNA profiles, distinct from those of all the other accessions analysed (Table 3). Three distinguishable alleles were determined at four loci in both ‘Hyvingiensis’ samples, indicating triploidy of the clone. The field observations were also consistent with polyploidy: ‘Hyvingiensis’ has thick, large leaves, large flowers and big fruits with poor seed set.
Table 3
Allele sizes (in bp) at 7 loci for 14 weeping crabapple accessions analysed in this study.
| Accession name | SSR primer pairs | CH01d03 | CH01h02 | CH02c06 | CH02c09 | CH02c11 | CH02d08 | COL |
|-------------------------|------------------|---------|---------|---------|---------|---------|---------|-----|
| Local accessions | | | | | | | | |
| 1 Malus sp | | 141:150 | 252:254 | 250:252 | 244:244 | 217:229 | 212:250 | 215:231 |
| 3 Malus sp | | 142:150 | 248:250 | 250:252 | 244:244 | 217:229 | 212:250 | 231:231 |
| 4 Malus sp | | 142:158 | 237:250 | 242:252 | 233:250 | 205:229 | 212:218 | 231:231 |
| 6 Malus ‘Hyvingiensis’ | | 137:141:150 | 235:237:239 | 242:246 | 233:250 | 225:229:233 | 212:218 | 229:231:233 |
| 11 Malus hybr. | | 129:150 | 250:273 | 216:250 | 231:246 | 201:215 | 212:226 | 215:241 |
| 13 Malus “Red Jade” | | 139:150 | 227:239 | 238:246 | 231:250 | 215:229 | 214:214 | 231:231 |
| Reference accessions | | | | | | | | |
| 14 M. prunifolia ‘Pendula’ | | 141:150 | 239:254 | 238:246 | 233:250 | 215:233 | 204:214 | 231:241 |
| 15 M. prunifolia ‘Pendula’ | | 127:150 | 250:252 | 216:238 | 231:233 | 215:229 | 214:216 | 241:241 |
| 16 Malus × scheideckeri ‘Excellenz Thiel’ | | 137:141 | 237:254 | 216:250 | 231:254 | 205:229 | 216:226 | 231:231 |
| 17 M. × gloriosa ‘Oekonomierat Echtermeyer’ | | 150:189 | 237:254 | 216:216 | 231:246 | 215:243 | 212:214 | 241:241 |
| 18 Malus ‘Red Jade’ | | 127:160 | 231:254 | 238:248 | 231:231 | 223:229 | 214:216 | 237:241 |
| 19 Malus ‘Elise Rathke’ | | 137:141 | 237:242 | 248:252 | 246:259 | 207:217 | 227:257 | 221:237 |
| 20 Malus ‘Elise Rathke’ | | 135:139 | 237:242 | – | 246:259 | 207:217 | 227:257 | 221:237 |
| 21 Malus baccata ‘Pendula’ | | 127:150 | 227:254 | 238:246 | 231:246 | 207:229 | 204:214 | 215:235 |
Allelic profiles are not shown for the remaining seven accessions that were identical to those of other local or reference trees.
The results of this study proved the authenticity of ‘Hyvingiensis’ plants currently grown by one local nursery. In an earlier SSR analysis, four ‘Hyvingiensis’ plants from four different nurseries were shown to be identical with the 65-year-old tree no. 6 of known origin (Vuorinen, 2012). The name *P. prunifolia pendula* ‘Hyvingiensis’ seems to have been recorded for the first time in an old inventory list, preserved in the archive of the Finnish State Railways Nurseries, that was written between 1893 and 1903 by the then head gardener. The cultivar name is quoted in the same form in a contemporary article on the Finnish State Railways Nurseries (Kornman, 1904). Another report of the same year refers to “a weeping crabapple grown from local seed at the State Railways Nurseries” (Karsten, 1904). ‘Hyvingiensis’ seems thus to have arisen around the turn of the previous century, though it was not described until the early 1950s (Lehtonen and Jokela, 1953).
Two of the un-named Swedish accessions (nos. 2 and 4) gave the same band profile for all but one marker (CH01h02), at which no fragment was amplified from accession no. 2. The two accessions were assumed to be identical. Another two of the Swedish accessions (nos. 1 and 3) had identical allele sizes at four loci, but slightly different bands at the remaining three loci (Table 3). The accessions could not be properly compared by their morphological traits because tree no. 3 was a young graft with few flowers and fruits. Judging from the vegetative characteristics and the origin of the trees, accessions no. 1 and 3 seem to represent different genotypes.
One of our original targets was to determine whether *M. prunifolia* ‘Pendula Nova’ can still be found in old plantings. The fruit size, colour and taste of accession no. 2 seemed to match Lindgren’s short description of ‘Pendula Nova’ (Lindgren, 1878a, 1878b). The original growing sites of accessions no. 2 and no. 4 are both parks constructed in the 1880s. The Ulleråker hospital tree no. 2 was described 60 years ago in a dendrological publication as “an old, unidentified weeping crabapple” (Hylander, 1955). In its early days, the hospital ran a plant nursery that sold seed and living plants to the public. In 1888, the nursery’s catalogue offered both old Swedish weepers (“*P. prunifolia pendula* and *P. prunifolia pendula nova*”) for sale (Anon., 1888). From these morphological and historical data, we suggest that accessions no. 2 and 4 may very likely represent the historical ‘Pendula Nova’.
The hybrid accession no. 11 from Gothenburg Botanical Garden shared at least one allele with its presumed parent cultivar ‘Oekonomierat Echtermeyer’ at all but one locus (Table 3). The small difference (four base pairs) might have originated from a mutation in either the parental or our reference accession of *Malus* ‘Oekonomierat Echtermeyer’ or have arisen from an experimental error. Further parent–offspring relationships were sought for reference sample no. 16, supposed to represent the cultivar ‘Excellenz Thiel’ that is reported to be both a seedling of *M. prunifolia* ‘Pendula’ and a parent of both ‘Red Jade’ and ‘Oekonomierat Echtermeyer’. Nevertheless, the microsatellite allele profile of the reference accession no. 16 did not lend support to any of these relationships (Table 3). The small discrepancies between the allele profiles of the two ‘Elise Rathke’ accessions (Table 3) were probably due to an error in fragment sizing (for marker CH01d03) and a failure in PCR amplification (for marker CH02c06).
In our data, three local trees (nos. 1, 3 and 13) did not match any of the reference cultivars, and two reference accessions (nos. 15 and 16) seemed not to be true to their names. These five samples represented five unique genotypes. The 100-year-old tree no. 1 could be a local seedling, and the same may hold true for tree no. 3 that was originally discovered as a graft in an apple hedge. The remaining three trees may belong to cultivars that were not included in the reference panel, or may be seedlings. As pointed out by Jefferson (1970) and Fiala (1994), misnamed crabapples are
common in the nursery trade, and one reason for mistaken naming lies in the propagation of cultivars by seed.
In the dendrogram (Fig. 2), the two accessions representing the domestic apple cultivar ‘Elise Rathke’ form a group of their own, and the proper ornamental crabapples are arranged in two clusters. *M. prunifolia* ‘Pendula’, its synonyms and descendants comprise one group that also includes *M. baccata* ‘Pendula’ and the two small-fruited, mislabelled accessions no. 13 and 15 (Fig. 2). The second crabapple group consists of the four old Swedish accessions, the two ‘Hyvingiensis’ trees and the presumably misnamed ‘Excellenz Thiel’ from the RHS Garden Hyde Hall (Fig. 2).
The genetic relationships displayed by the dendrogram (Fig. 2) match the known parentage relations of the cultivars. The morphological traits of fruits are also in agreement with the clustering in Fig. 2: the large-fruited domestic apple accessions form a distinct group, while the first crabapple group is composed of accessions with relatively small fruit and either deciduous or persistent calyx and the second group includes those accessions with larger fruit and persistent calyx. However, as the dendrogram is based on only seven SSR loci per genotype, it could be unstable, and a small change in fingerprint data could alter the topology of the tree.
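As a rough illustration of how such a dendrogram is grounded in the fingerprint data (the study's own tree was computed from the seven SSR loci with standard software), the sketch below derives a simple proportion-of-shared-alleles distance from two invented diploid genotypes. It is not the study's distance measure or data, only the general idea of turning band profiles into pairwise distances.

```python
# A minimal sketch, with invented genotypes, of deriving a pairwise
# distance from SSR fingerprints: one minus the proportion of shared
# allele copies across diploid loci. Each allele copy in one genotype
# can match at most one copy in the other.

def allele_sharing_distance(a, b):
    shared = total = 0
    for locus in a:
        alleles_a = sorted(a[locus])
        remaining_b = sorted(b[locus])
        total += len(alleles_a)
        for allele in alleles_a:
            if allele in remaining_b:
                shared += 1
                remaining_b.remove(allele)  # each copy matches only once
    return 1 - shared / total

x = {"CH01b02": (134, 140), "CH01d03": (110, 118)}
y = {"CH01b02": (134, 140), "CH01d03": (110, 126)}
print(allele_sharing_distance(x, y))  # 0.25: 3 of 4 allele copies shared
```

With only seven loci, a single genotyping change shifts these distances appreciably, which is why the tree topology should be read with caution.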
4. Conclusions
In this study, we were able to identify several old weeping crabapple clones by a combination of microsatellite genotyping, morphological observations, old garden literature and archival research. Living plants of two valuable historical weepers, one of which is the ancestor of three classic crabapple cultivars, were discovered; the origin and authenticity of one domestic cultivar were resolved; and the three clones will be preserved in the national gene banks. As indicated by our results, parks and plant collections may hold historically valuable, though unidentified or misnamed, garden plant genotypes. The methods presented here could be applied to other ornamental plant groups for the stabilization of horticultural nomenclature and for the conservation of garden plant genetic resources.
Acknowledgements
This research was financially supported by the Maiju and Yrjö Rikala Horticultural Foundation, the Swedish-Finnish Cultural Foundation and the Municipality of Hyvinkää. We thank Arnold Arboretum, the Bergius Botanical Garden, the Botanical Garden of Tartu University, the Dubrava Arboretum, the Gothenburg Botanical Garden, the Montreal Botanical Garden, the RHS Garden Hyde Hall and the Sir Harold Hillier Gardens for kindly providing us with the reference samples. Lastly, we thank Sini Kerminen for excellent technical assistance.
References
Andréasson, A., Wedelsbäck-Bladh, K., 2011. *Prydnadsträd och prydnadsbuskar hos två svenska plantskolor 1836–1946*, Lantbruksakademiens Experimentalfält (1836–1900) och Vassbo trädskola i Dalarna (1895–1946). POM, Cent. biol. mångfald. CBM:s skrifter, 58. SLU, Alnarp.
Anon., 1879. *Katalog öfver träd och buskar i Statsjernvägarnes trädskolor vid Hyvinge*. Unpubl. Inventory List Preserv. at the Finn. Railw. Mus.
Anon., 1888. *Förteckning öfver Upsala Central-Hospitals Trädgårds Växtsamlingar*. Edv. Berlings Boktryckeri, Uppsala.
Dirr, M., 2009. *Manual of Woody Landscape Plants*. Stipes Publishing Co., Champaign, IL.
European Nurserystock Association, List of Names of Woody Plants and Perennials. http://www.internationalplantnames.com/ (visited 26.11.13).
Fiala, J.L.F., 1994. *Flowering Crabapples, the Genus Malus*. Timber Press, Portland, OR.
Garkava-Gustavsson, L., Mujaju, C., Sehic, J., Zborowska, A., Backes, G.M., Hietaranta, T., Antonius, K., 2013. Genetic diversity in Swedish and Finnish heirloom apple cultivars revealed with SSR markers. Sci. Hortic., http://dx.doi.org/10.1016/j.scienta.2013.07.040.
Gasi, F., Simon, S., Pojskic, N., Kurtovic, M., Pejic, I., 2010. Genetic assessment of apple germplasm in Bosnia and Herzegovina using microsatellite and morphological markers. Sci. Hortic., http://dx.doi.org/10.1016/j.scienta.2010.07.002.
Gianfranceschi, L., Seglias, N., Tarchini, R., Komjanc, M., Gessler, C., 1998. Simple sequence repeats for the genetic analysis of apple. Theor. Appl. Genet. 96, 1069–1076.
Guarino, C., Santoro, S., De Simone, L., Cipriani, G., Testolin, R., 2006. Genetic diversity in a collection of ancient cultivars of apple (Malus × domestica Borkh.) as revealed by SSR-based fingerprinting. J. Hortic. Sci. Biotechnol. 81, 39–44.
Harviala, Plantskontor Oy och Växthus, 1939. *Katalog*. A.A. Karisto Oy:n kirjap., Hämeenlinna.
Hylander, N., 1955. *Några ord om Uppsala stads parker och planteringar*. Lustgård 35–36, 10–101.
Jefferson, R.M., 1970. History, progeny, and locations of crabapples of documented authentic origin. In: Natl. Arboretum Contrib. No. 2. Agric. Res. Serv. U.S. Dep. Agric., Washington, DC.
Karsten, O., 1904. Muistelmia eräältä opinto- ja huviretkeltä. Puutarha 7, 68–69.
Kormann, J.K., 1905. Suomen valtion rautateiden taimistot Hyvinkäällä. Puutarha 8, 36–37.
Lange, U., 2000. *Experimentalfältet. Kungliga Lantbruksakademiens experiment- och försöksverksamhet på Norra Djurgården i Stockholm 1816–1907*. Skogs-och Lantbruksförl., Meddelanden, pp. 2–10.
Laurens, F., Durel, C.E., Laroche, M., 2004. Molecular characterization of French local apple cultivars. Acta Hortic. 663, 639–641.
Lehtonen, V., Jokela, K., 1957. *Taimistotärjä*. Werner Söderström Oy, Helsinki.
Liebhard, R., Gianfranceschi, L., Koller, B., Ryder, C., Tarchini, R., Van de Weg, E., Gessler, C., 2002. Development and characterisation of 140 new microsatellites in apple (Malus × domestica Borkh.). Mol. Breed. 10, 217–241.
Lindgren, E., 1878a. Om uppkomsten af nya arter bland de odlade växterna samt om tvenne å Kongl. Landtbruks-Akademiens Experimentalfält uppkomna äppel-varieteter med hängande grenar. Tidn. Trädgårdsodl. 17, 65–66.
Lindgren, E., 1878b. Pyrus prunifolia pendula und Pyrus prunifolia pendula nova. Dtsch. Gärt. 11, 152.
National Fruit Collection. NFC Database. Available at: http://www.nationalfruitcollection.org.uk/ (visited 26.11.13).
Olsson, P., 1962. *Hinnasto. Tilgman*, Helsinki.
Peakall, R., Smouse, P.E., 2012. GenAlEx 6.5: genetic analysis in Excel. Population genetic software for teaching and research—an update. Bioinformatics 28, 2537–2539, http://dx.doi.org/10.1093/bioinformatics/bts460.
RHS. Find a Plant. Available at: http://www.rhs.org.uk/plants/ (visited 26.11.13).
Schalin, B., 1935. *Appelträdet närmaste släktningen*. In: Dahlberg, R., Liljelund, V. (Eds.), *Fruktträdgården III*. Nylands Fruktodl.förening, Helsingfors, pp. 59–65.
Sorma, V., 1932. *Avomaan koristepuut ja -pensaat*. A.A. Karisto, Hämeenlinna.
Späth, L., 1909. Zwei neue Gehölze. Mitt. Dtsch. Dendrol. Ges. 18–20, 326.
Tamura, K., Peterson, D., Peterson, N., Stecher, G., Nei, M., Kumar, S., 2011. MEGA5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol. Biol. Evol. 28, 2731–2739, http://dx.doi.org/10.1093/molbev/msr121.
UPOV, 2003. Ornamental Apple (*Malus* Mill.). Guidelines for the Conduct of Tests for Distinctness, Uniformity and Stability. International Union for the Protection of New Varieties of Plants (UPOV), Geneva http://www.upov.int/en/publications/tg-rom/tg192/tg_192.1.pdf
Vuorinen, K., 2012. *Suomalaisilla taimitarhoilla lisättävien koristeomenapuiden lajikeaitous* (Master's thesis). Dept. of Agric. Sci., University of Helsinki.
Wyman, D., 1955. *Trees for American Gardens*. Macmillan, New York.
Relationship between seasonal cold acclimatization and mtDNA haplogroup in Japanese
Takayuki Nishimura¹*, Midori Motoi², Yousuke Niri², Yoshikazu Hoshi³, Ryuichiro Kondo⁴ and Shigeki Watanuki⁵
Abstract
Background: The purpose of this study was to elucidate the interaction between mtDNA haplogroup and seasonal variation that contributes to cold adaptation.
Methods: There were 15 subjects (seven haplotype D subjects and eight haplotype non-D subjects). In summer and winter, the subjects were placed in an environment where the ambient temperature dropped from 27 °C to 10 °C in 30 minutes. After that, they were exposed to cold for 60 minutes.
Results: In summer, the decrease in rectal temperature and increase in oxygen consumption was smaller and cold tolerance was higher in the haplotype non-D group than in the haplotype D group. In winter, no significant differences were seen in rectal temperature or oxygen consumption, but the respiratory exchange ratio decreased in the haplotype D group.
Conclusions: The results of the present study suggest that energy metabolism changes to a greater degree in haplogroup D subjects, and there appears to be a relationship between differences in cold adaptability and mtDNA polymorphism within the population. Moreover, group differences in cold adaptability seen in summer may decrease in winter due to supplementation by seasonal cold acclimatization.
Keywords: Cold adaptation, Seasonal cold acclimatization, mtDNA haplogroup, Cold exposure, Oxygen consumption
Background
Cold adaptation in humans has long been debated. Modern humans (*Homo sapiens*) spread from Africa to all parts of the world, and it is thought that, in the process, they adapted to various environments, particularly to cold climates, using various strategies. Among these strategies there are different types of cold adaptation that vary with factors such as strength of cold stimulation, food situation, and cultural background. These types of adaptation include: isolative adaptation, in which a person exposed to a constant level of cold exhibits a decrease in skin temperature without a change in core body temperature [1-3]; hypothermic adaptation, in which a person exhibits lower core body temperature [4,5]; isolative hypothermic adaptation, in which a person exhibits both a decrease in skin temperature and lower core body temperature [6,7]; and metabolic adaptation, in which thermogenesis is strengthened [8,9]. These adaptation types are often interpreted as regional characteristics of a population that has been in a certain environment for a long time and may also include genetic adaptations.
When humans are exposed to cold, they exhibit physiological responses to maintain their body temperature. The first response is for blood vessels to constrict and suppress heat loss from the skin surface. In thermoregulation by vasoconstriction, the range of controllable temperature (thermoneutral zone) is small. The response to even colder stimulation is thermogenesis through energy metabolism, including shivering thermogenesis (ST) and nonshivering thermogenesis (NST). Differences in physiological responses arise depending on the type of adaptation. Examples of this difference are individuals with strong heat-insulating capacity due to vasoconstriction and individuals who exhibit an early metabolic response. These variations are not considered statistical errors, but rather physiological differences, or physiological polymorphisms, resulting from individual differences in adaptation strategy. In addition to sex and age, these differences are influenced by environmental factors such as season and lifestyle habits and by underlying genetic factors such as genotype; both sets of factors are also associated with morphological characteristics [10-12]. The type of physiological response to cold adaptation is thought to be built largely on the interaction between environmental and genetic factors. However, few studies have focused on the genetic factors.
In short-term cold acclimatization studies during which subjects were exposed to cold stimulation in an artificial climate chamber with a room temperature of 5 °C or were immersed in cold water, the heat insulation capacity was reportedly enhanced by a reduction in blood flow following acclimatization associated with a decrease in thermogenesis, a delay in shivering, and a decrease in distal skin temperature [13,14]. In contrast, no changes were seen in core body temperature and mean skin temperature from cold acclimation after cold exposure in a 5 °C room for 2 hours per day for 11 days [15]. According to different studies on relatively long seasonal acclimatization, after acclimatization in winter the shivering and thermogenesis were reported to decrease [2], show no change [16-18], or increase [19-22]. Skin temperature has also been reported to decrease [15], show no change [20], or increase [21,22]. Cold acclimatization in humans is thus reported to have a seasonal component, but there is currently no consensus on the nature of this component. In addition to individual differences, this component may vary with the level of cold stimulation depending on factors such as exposure conditions and amount of clothing. Cold water immersion and other types of strong cold stimulation are often reported to result in a decrease in skin temperature and a decrease in thermogenesis [13,14], suggesting that the strength of the temperature environment may also affect the population in studies on seasonality because the strength of cold stimulation depends on their area of habitation.
Few studies have explained genetic factors related to the cold tolerance response, adaptation, and acclimatization. Generally speaking, morphological differences in mammals among different populations of the same species are based on genetic factors, as specified by Allen’s rule and Bergmann’s rule. From a physiological perspective, metabolic adaptations that rely on a higher basal metabolic rate or thermogenesis, such as those seen in Inuit people, are thought to be genetic adaptations [23], but the underlying mechanism is not yet clear. According to recent reports, mitochondria and their genomes that may influence genetic factors for cold tolerance may partially elucidate this mechanism [24-26]. Mitochondria are the basis for energy metabolism, which serves an important role in the cold tolerance response. The thermogenic response to cold stimulation includes ST by skeletal muscles and NST by internal organs and brown adipose cells, and thermogenesis is performed in these tissues and cells through mitochondria. In many previous studies, changes in the amount of thermogenesis (metabolic rate, oxygen consumption) have been argued to depend on the presence or absence of shivering [2,13,14]. However, recent studies have reported an increase in thermogenesis even in conditions when shivering does not occur [21,22], and they have suggested that NST is involved in cold tolerance [27-29]. The latest studies have shown that brown adipose cells also become activated by cold stimulation in adults, and they may play a part in thermoregulation [30]. According to another report, brown adipose cells are highly active in winter and their activity declines as humans age [31]. Uncoupling protein is also present in mitochondria in skeletal muscles, suggesting that NST takes place there [32,33]. Thoughts on ST and NST have thus changed, and variation within single individuals has been suggested.
As mitochondria have their own genome, this genome may influence functional differences in mitochondria. Because of their evolutionary neutrality, the mitochondrial genome is an important means of understanding human migrations with accompanying age estimates [34]. In recent reports, other researchers have claimed that adaptations to climate have been made by mtDNA regulating the balance of ATP generation and thermogenesis in oxidative phosphorylation of mitochondria [24-26]. Thermogenesis here is not that produced by cold stimulation, but rather by cellular-level thermogenesis released during ATP generation. The principle is similar to that of an engine, in which the process of generating energy from raw materials is never 100% efficient, with some heat always escaping. While the efficiency of ATP generation varies depending on the conditions [25], it has been suggested that mitochondria of populations that have adapted to cold have a basal metabolism that generates heat more easily, so the influence of thermogenesis is greater, even with the same amount of oxygen consumption. This means that a large volume of heat may be obtained in states such as ST and NST and basal metabolism where oxygen is consumed. In relation to this hypothesis, it has often been reported that mtDNA polymorphism influences physiology in modern humans, mostly via energy metabolism systems.
For example, maximum oxygen consumption is small in haplogroup J people [35], there are differences between haplotypes in the risk of developing diabetes or other lifestyle-related diseases that are closely related to energy metabolism [36], there is an association between basal metabolism and mtDNA polymorphism [37], and there is an association between acute altitude sickness and mtDNA polymorphism [38]. In cold adaptation studies carried out by our research group in summer [39], people with haplogroup D – the most common haplotype in Japanese people – showed the same amount of oxygen consumption during cold exposure but a smaller decrease in rectal temperature compared with haplogroup non-D people. The results of this study suggested that mtDNA polymorphism is one factor that causes variation in cold tolerance.
While the above findings suggest that genetic factors have some sort of influence on seasonal acclimatization and acclimatization to repetitive exposure, this has yet to be examined directly. In particular, there is a need to investigate how genetic factors affect differences in physiological responses between summer, during which there is no cold acclimatization, and winter, during which there is cold acclimatization. There are different arguments about how to define cold tolerance. In the present study, high cold tolerance was defined as a small decrease in core body temperature in response to an increase in energy metabolism. This definition was used because, during cold stimulation that exceeds the zone for insulative thermoregulation by skin vasoconstriction, an increase in energy metabolism is the only means for thermoregulation, and the relationship between energy metabolism and core body temperature becomes very important. Theoretically, a cold tolerance response that relies on inherent genetics is predicted to play a large role in maintaining body temperature in summer and a smaller role in winter due to cold acclimatization. The reason for this difference may be the adaptive seasonal variation from summer to winter that has been reported in many previous studies. More specifically, if mtDNA polymorphism is involved in energy metabolism, it may play some part in determining whether metabolic or isolative adaptation is exhibited in response to an equal amount of stimulation. The present study thus focused on mtDNA polymorphism and aimed to elucidate how cold tolerance changes with season through the interaction between seasonal cold acclimatization and haplogroup. While the sample size may be too small to determine genetic effects, the same methods were used as in a study conducted in the summer, and the same subjects were examined to control for body mass index and body surface area as much as possible.
**Methods**
**DNA analysis**
Total DNA was extracted from hair shafts by digestion in extraction buffer using ISOHAIR (Code No. 319–03401; Nippon Gene, Tokyo, Japan).
The noncoding mtDNA D-loop region was amplified by PCR using primers M13RV-L15996 and M13(−21)-H408. The D-loop primer sequences were as follows: mtDNA L15996, 5′-CTCCACCATTTAGCACCCAAAGC-3′; and mtDNA H408, 5′-CTGTTAAAAGTGCAATACCGCCA-3′.
The thermocycling profile consisted of an initial denaturation step at 94 °C for 1 minute, followed by 32 cycles of 30 seconds at 94 °C, 30 seconds at 56 °C, and 75 seconds at 72 °C. Purified DNA was sequenced in both directions using the ABI PRISM 310 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA) with a BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems).
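For reference, the length of this thermocycling programme follows directly from the figures quoted above. A small sketch, ignoring ramp times between temperature steps (only the temperatures and durations stated in the text are used):

```python
# The thermocycling profile written out as data; ignoring ramp times,
# the total run time is simple arithmetic (all temperatures and
# durations taken from the text).

initial_denaturation_s = 60          # 94 degC for 1 minute
cycle_steps_s = [30, 30, 75]         # 94 degC, 56 degC, 72 degC
n_cycles = 32

total_s = initial_denaturation_s + n_cycles * sum(cycle_steps_s)
print(total_s, total_s / 60)  # 4380 73.0 -> about a 73-minute programme
```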
**Participants**
To detect genetic effects, variation in morphological characteristics among subjects was minimized, since cold adaptability depends on morphological characteristics. Because the number of participants was limited in this experiment, participants were divided into haplogroup D (D4) and haplogroup non-D (haplogroups not common in the northern zone). A total of 15 subjects who participated in the cold exposure experiment, including seven haplotype D (D4) students and eight haplotype non-D students, were selected so that there were no significant differences in morphological characteristics (height, weight, body mass index, body surface area, body fat), as shown in Table 1. The haplogroups of non-D subjects were M7a (four subjects), M7c (one subject), F2a (one subject), and B4 (two subjects). Body surface area was calculated by Kurazumi’s formula [40], and body fat was calculated by Brozek’s formula [41]. The subjects were born in Fukuoka Prefecture or neighboring prefectures,
| Season | Haplotype | Height (cm) | Weight (kg) | Body mass index | Body surface area (m²) | Body fat (%) |
|--------|-----------|-------------|-------------|-----------------|------------------------|--------------|
| Summer | D (n = 7) | 172.7 ± 5.8 | 62.4 ± 5.6 | 20.8 ± 1.8 | 1.73 ± 0.09 | 13.6 ± 2.2 |
| | Non-D (n = 8) | 171.1 ± 4.1 | 59.1 ± 4.0 | 20.2 ± 1.1 | 1.69 ± 0.06 | 14.1 ± 2.0 |
| Winter | D (n = 7) | 173.1 ± 5.6 | 61.2 ± 5.7 | 20.4 ± 1.8 | 1.72 ± 0.08 | 13.3 ± 2.4 |
| | Non-D (n = 8) | 171.8 ± 3.9 | 59.3 ± 4.6 | 20.1 ± 1.1 | 1.70 ± 0.07 | 14.4 ± 2.3 |
and they did not include any individuals who participated regularly in vigorous sports. The mtDNA analysis was performed with approval from the Ethics Committee for Genome-gene Analysis of the Graduate School of Medicine, Kyushu University. In addition, mtDNA information obtained in our study was anonymously treated and managed by the Gene Therapeutic Information Center of the university.
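Table 1's body surface area and body fat values come from Kurazumi's [40] and Brozek's [41] formulas. Since neither formula's coefficients are reproduced in the text, the sketch below substitutes the classic DuBois and DuBois body surface area formula as a stand-in for Kurazumi's, and shows Brozek's standard density-to-fat conversion with an assumed body density. It illustrates the kind of calculation involved, not the study's exact one:

```python
# Anthropometric sketch. NOTE: DuBois & DuBois BSA is used here as a
# stand-in for Kurazumi's formula (whose coefficients are not given in
# the text), and the body density fed to Brozek's equation is assumed.

def bsa_dubois(weight_kg, height_cm):
    """Body surface area (m^2) by DuBois & DuBois."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def body_fat_brozek(density_g_cm3):
    """Percent body fat from body density via Brozek's equation."""
    return 457.0 / density_g_cm3 - 414.2

# Mean height/weight of the summer haplotype D group from Table 1.
print(round(bsa_dubois(62.4, 172.7), 2))  # ~1.74 m^2, near Table 1's 1.73
print(round(body_fat_brozek(1.068), 1))   # assumed density -> ~13.7 %
```

The closeness of the DuBois estimate to the tabulated value is expected, since most adult BSA formulas agree to within a few percent.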
**Measurements**
The experiments were conducted in Fukuoka during summer (August to September) and winter (February to March), and the mean temperature in summer was 29.0 °C and in winter was 8.5 °C. Measurement sensors were attached to subjects in an environment with a temperature of 27 °C in preparation for the experiment. The subjects rested quietly for 15 minutes in an artificial climate chamber, and then the cold exposure commenced. The artificial climate chamber was programmed so that the ambient temperature dropped to 10 °C in approximately 30 minutes, after which there was exposure to cold (10 °C) for 60 minutes.
The parameters recorded were rectal temperature, skin temperature (seven places), oxygen consumption, blood pressure, electrocardiogram, and a subjective evaluation. The rectal temperature probe was inserted to a depth of 13 cm beyond the anal sphincter. The skin temperature sensors were attached with surgical tape to measurement sites on the forehead, shoulder, chest, forearm, back of the hand, thigh, and dorsal side of the foot. Measurements were made continuously at intervals of 2 seconds using a data logger (LT-8A; Gram Corporation, Saitama, Japan). Mean skin temperature was calculated with the seven-point method of Hardy and DuBois [42]. Distal skin temperature was derived using the following equation involving the arm, hand, foot and leg temperatures:
\[
\text{Distal skin temperature} = \left(0.14 \times T_{\text{arm}} + 0.05 \times T_{\text{hand}} + 0.07 \times T_{\text{feet}} + 0.13 \times T_{\text{leg}}\right)/0.39
\]
The weighting factor for each body segment was based on Hardy and DuBois; 0.39 is the summed relative surface area of the four distal segments. Oxygen consumption was measured with a respiratory gas analyzer (AE-300S; Minato Medical Science, Osaka, Japan) through a breathing tube using a mask to measure expired gas (Rudolph mask; Nihon Kohden, Tokyo, Japan).
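The distal skin temperature equation can be implemented directly as a surface-area-weighted mean normalised by the summed weight 0.39. A small sketch with invented site temperatures (the weights are those given in the equation above):

```python
# Distal skin temperature as defined in the text: a surface-area-weighted
# mean of the four distal sites, normalised by the summed weight (0.39).
# The temperature values are invented example readings in degC.

WEIGHTS = {"arm": 0.14, "hand": 0.05, "foot": 0.07, "leg": 0.13}

def distal_skin_temperature(temps):
    total_weight = sum(WEIGHTS.values())  # = 0.39
    weighted = sum(WEIGHTS[site] * temps[site] for site in WEIGHTS)
    return weighted / total_weight

temps = {"arm": 30.2, "hand": 27.5, "foot": 26.8, "leg": 30.9}
print(round(distal_skin_temperature(temps), 2))  # ~29.48
```

Normalising by 0.39 makes the result a proper mean: if all four sites read the same temperature, the function returns that temperature exactly.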
**Statistical analysis**
Morphological data were compared by unpaired \( t \) test. Physiological data were compared using three-way (haplogroup and season and time) analysis of variance. All data are expressed as the mean ± standard error, and \( P < 0.05 \) was considered significant.
**Figure 1 Rectal temperature.** Changes in rectal temperature (means ± standard error) in summer D (\( \bullet \); \( n = 7 \)), summer non-D (\( \bigcirc \); \( n = 8 \)), winter D (\( \blacksquare \); \( n = 7 \)), and winter non-D (\( \square \); \( n = 8 \)) groups. *\( P < 0.05 \) compared with summer D (\( \bullet \)) and summer non-D (\( \bigcirc \)), *\( P < 0.05 \) compared with summer D (\( \bullet \)) and winter D (\( \blacksquare \)), and *\( P < 0.05 \) compared with summer non-D (\( \bigcirc \)) and winter non-D (\( \square \)). In a post hoc test, rectal temperatures at rest (0 min) were lower in winter than in summer for both groups. Rectal temperatures of haplogroup D subjects when exposed to cold during winter were significantly lower at all time points than exposure during summer (\( P < 0.05 \)). In contrast, non-D subjects had significantly lower rectal temperatures 0 to 40 minutes after the start of exposure compared with summer (\( P < 0.05 \)), but no seasonal difference was seen after 50 minutes. In summer, haplogroup D subjects had higher rectal temperatures than haplogroup non-D subjects 70, 80, and 90 minutes after the start of cold exposure (\( P < 0.05 \)).
Results
Rectal temperature
Analysis of variance revealed significant differences in the main effect of season \((F_{(1,13)} = 14.50, P < 0.005)\) and the main effect of time \((F_{(9,117)} = 36.63, P < 0.001)\) for rectal temperature (Figure 1). The interaction of season and time was significant \((F_{(9,117)} = 2.383, P < 0.05)\). The interaction of season and haplogroup and time was also significant \((F_{(9,117)} = 5.168, P < 0.001)\).
In a post hoc test, rectal temperatures at rest (0 minutes) in a 27 °C room were lower in winter than in summer for both groups. Rectal temperatures of haplogroup D subjects when exposed to cold during winter were significantly lower at all time points than when exposed during summer \((P < 0.05)\). In contrast, non-D subjects had significantly lower rectal temperatures 0 to 40 minutes after the start of exposure compared with summer \((P < 0.05)\), but no significant seasonal difference was seen after 50 minutes. In summer, haplogroup D subjects had higher rectal temperatures than non-D subjects 70, 80, and 90 minutes after the start of cold exposure \((P < 0.05)\).
Changes in rectal temperature
The main effects of season \((F_{(1,13)} = 6.236, P < 0.05)\) and time \((F_{(9,117)} = 36.609, P < 0.001)\) were significant for changes in rectal temperature (Figure 2). There was a significant interaction between group and season \((F_{(1,13)} = 7.106, P < 0.005)\) and between season and time \((F_{(9,117)} = 2.376, P < 0.05)\). The interaction among group, season, and time was also significant \((F_{(9,117)} = 5.170, P < 0.001)\).
In a post hoc test, rectal temperatures were significantly lower from 40 minutes after the start of cold exposure only in the summer non-D subjects \((P < 0.05)\).
Oxygen consumption
The main effect of time was significant \((F_{(9,117)} = 44.815, P < 0.005)\) for oxygen consumption (Figure 3). There was also a significant interaction between group, season, and time \((F_{(9,117)} = 2.57, P < 0.005)\).
In a post hoc test, oxygen consumption of haplogroup D subjects tended to be lower in winter than in summer 20 and 30 minutes after the start of exposure \((P < 0.1)\). In addition, in summer haplogroup D subjects had significantly lower oxygen consumption than non-D subjects at 80 and 90 minutes \((P < 0.05)\). In the non-D subjects, oxygen consumption was significantly lower in winter than in summer 90 minutes after the start of exposure \((P < 0.05)\).
Mean skin temperature
The main effect of time was significant \((F_{(9,117)} = 1424.95, P < 0.001)\) for mean skin temperature (Figure 4). There was a significant interaction between season and time \((F_{(9,117)} = 8.65, P < 0.001)\). There was no interaction among group, season, and time \((F_{(9,117)} = 1.81, p = 0.074)\). In a post hoc test, skin temperature at rest prior to cold exposure was lower in winter than in summer in both haplogroups.
Rectal temperature and oxygen consumption during cold exposure
The haplogroup D subjects showed a similar pattern of changes in summer and winter for rectal temperature and oxygen consumption (Figure 5). In the non-D subjects, rectal temperature decreased irrespective of increasing oxygen consumption in summer, but rectal temperature change showed a similar pattern to that shown by haplogroup D subjects in winter.
**Figure 2** Change in rectal temperature. Changes in rectal temperature \((\Delta T_{re};\) mean ± standard error) in summer D (\(\bullet; n = 7\)), summer non-D (\(\bigcirc; n = 8\)), winter D (\(\blacksquare; n = 7\)), and winter non-D (\(\square; n = 8\)) groups. In a post hoc test, rectal temperatures were significantly lower from 40 minutes after the start of cold exposure only in the summer non-D group \((P < 0.05)\).
**Respiratory exchange ratio**
The main effects of time ($F_{(9,117)} = 9.666$, $P < 0.001$) and season ($F_{(1,13)} = 6.694$, $P < 0.05$) were significant for the respiratory exchange ratio (Figure 6). There was a significant interaction between season and time ($F_{(9,117)} = 2.512$, $P < 0.05$). More specifically, the respiratory exchange ratio was lower in winter than in summer. In a *post hoc* test, the respiratory exchange ratio in the haplogroup D subjects was significantly lower in winter than in summer 10 and 20 minutes after the start of exposure and tended to be lower at 0, 30, 40, and 50 minutes ($P < 0.1$).
**Distal skin temperature**
The main effect of time was significant ($F_{(9,117)} = 3381.677$, $P < 0.001$) for distal skin temperature (mean...
temperature of the forearm, back of the hand, dorsal side of the foot, and thigh corrected for body surface area; Figure 7). There was a significant interaction between season and time ($F_{(9,117)} = 11.714$, $P < 0.001$), and a tendency towards an interaction among haplogroup, season, and time ($F_{(9,117)} = 3.427$, $P < 0.1$). Mean distal skin temperature in both haplotype groups was significantly lower in winter than in summer 0 and 10 minutes after the start of cold exposure, but no differences were seen after that.
No significant differences were seen in any other index (blood pressure, subjective evaluation).
**Discussion**
The present study aimed to elucidate the relationship between seasonal acclimatization and haplogroup difference, with better cold tolerance defined as a smaller increase in energy metabolism in response to a decrease in rectal temperature. Statistical analyses suggested that the haplotype, season, and duration of exposure influenced both changes in rectal temperature and increases in oxygen consumption due to cold stimulation. More specifically, the results suggested that genetic factors play a part in changes in rectal temperature and the increase in oxygen consumption, and these may be important associations.
**Figure 5 Rectal temperature and oxygen consumption.** Rectal temperature and oxygen consumption during cold exposure in summer D (●; $n = 7$), summer non-D (○; $n = 8$), winter D (■; $n = 7$), and winter non-D (□; $n = 8$) groups. Similar results were seen in the summer and winter D groups and the winter non-D group. Only in the summer non-D group did metabolism increase as rectal temperature dropped, but rectal temperature continued to drop.
**Figure 6 Respiratory exchange ratio.** Changes in respiratory exchange ratio (mean ± standard error) in summer D (●; $n = 7$), summer non-D (○; $n = 8$), winter D (■; $n = 7$), and winter non-D (□; $n = 8$) groups. *$P < 0.1$. **$P < 0.05$ compared with summer D (●) and winter D (■). In a post hoc test, respiratory exchange ratio in the haplogroup D was significantly lower in winter than in summer 10 and 20 minutes after the start of exposure and tended to be lower at 0, 30, 40, and 50 minutes ($P < 0.1$).
The decrease in rectal temperature in summer was smaller in the haplogroup D group than in the non-D group, but no differences were seen between groups in winter (Figures 1 and 2). The thermoregulation response possibly changed due to seasonal cold acclimatization.
In a previous study, we reported that haplogroup D subjects showed strong tolerance to cold exposure in summer, with a small increase in energy, a small decrease in rectal temperature, and high NST in the body core [39]. In winter, haplogroup D subjects showed the same degree of decrease in rectal temperatures as in summer, but a tendency towards smaller energy consumption 20 and 30 minutes after the start of exposure compared with summer (Figure 3). In haplogroup non-D subjects, the decrease in rectal temperature in winter was the same as in haplotype D subjects and was smaller than the decrease in summer, and oxygen consumption was lower in winter than in summer 90 minutes after the start of exposure (Figure 3). Although the time period varied between groups, oxygen consumption was found to decrease in winter. These results provide an explanation for the type of seasonal cold acclimatization shown by haplogroup D subjects. The lack of difference in oxygen consumption 90 minutes after the start of exposure agrees with reports of no change in thermogenesis [16-18], and the decrease in oxygen consumption 20 to 30 minutes after the start of cold stimulation agrees with other studies that report a delay in shivering and a decrease in thermogenesis [13,14]. In contrast, oxygen consumption in non-D subjects was lower in winter than in summer 90 minutes after the start of exposure, which agrees with reports of a decrease in thermogenesis [2].
Cold acclimatization has been explained as enhanced insulation function and a change in metabolism. However, results for distal skin temperature did not show a seasonal difference in the heat loss suppression response (Figure 7). Furthermore, rectal temperature decreased in winter, suggestive of acclimatization similar to isolative hypothermic adaptation in both groups. However, a characteristic of haplogroup D subjects is a significantly lower respiratory exchange ratio in winter than in summer throughout the transition (Figure 6). Haplogroup D subjects may metabolize more lipids in winter than in summer, suggesting involvement of NST through lipid metabolism. Japanese people have been reported to metabolize lipids better in winter as a feature of cold acclimatization, and also to have a higher basal metabolic rate in winter [43]. In addition, thyroid hormone levels increase in winter, enhancing NST [44]. In association with these observations, although individual differences exist, brown adipose cells are generally more active in winter than in summer [31]. As seen in the decrease in the respiratory exchange ratio in haplogroup D subjects during the first half of cold exposure in winter, it is possible that brown adipose cell activity begins increasing immediately after exposure, leading to an increase in NST from brown adipose cells that precedes ST. This hypothesis suggests that mitochondrial metabolism may come to depend on lipids, irrespective of ST and NST. In either case, lipid metabolism yields more heat, indicating that efficient energy consumption occurred. The characteristic of haplogroup D individuals being good at metabolizing lipids may be indirectly related to a resistance to obesity and lifestyle-related diseases in this
haplotype [45]. In other words, these results suggest that the type of seasonal cold acclimatization shown by haplogroup D is based on suppressing heat loss and is more dependent on lipids. Rather than ST, increased activity of brown adipose cells leads to more efficient NST. As a result, oxygen consumption decreases during the initial stage following the onset of cold stimulation (20 to 30 minutes), more energy is saved overall compared with summer, and rectal temperature is maintained. Haplogroup D people may have a type of seasonal cold acclimatization that relies on more efficient metabolism.
Acclimatization in haplogroup non-D may rely more on insulation compared with haplogroup D. If NST in brown adipose cells increased, the decrease in the respiratory exchange ratio would also be large in haplogroup non-D subjects. However, the decrease in haplogroup non-D was smaller than that observed in haplogroup D subjects, and there was no significant seasonal difference (Figure 6). In contrast to haplogroup D, there is little variation in the respiratory exchange ratio, so variation in lipid metabolism would also be expected to be small. Haplogroup non-D subjects reduce their core body temperature and suppress the loss of heat from the body surface, thereby suppressing the decrease in rectal temperature as well as the rise in thermogenesis. This is suggestive of hypothermic/isolative cold acclimatization. These results agree with previous studies reporting lower skin and core body temperatures [13,16] and a decrease in thermogenesis in winter [2,13-15]. This is the most frequently reported type of cold acclimatization.
The above results suggest that the hypothesis for cold tolerance in summer is correct from the perspective of maintaining rectal temperature. That is, genetic effects become apparent as differences in latent thermoregulatory capability in response to cold stimulation, which is a comparatively novel stressor in summer. The difference in cold tolerance in winter may be reduced because the effects of seasonal cold acclimatization are comparatively larger in the non-D group than in the D group. This is the first study to demonstrate different forms of acclimatization in different haplogroups. More specifically, it is possible that haplogroup D people use a more metabolic form of cold acclimatization, while haplogroup non-D people use cold acclimatization that relies on suppression of heat loss, resulting in a reduction in the difference in cold tolerance in winter. This suggests that variations in cold tolerance responses and types of seasonal cold acclimatization are related to mtDNA polymorphism and are influenced by genetics. In addition to differences in experimental conditions and seasonal factors, the lack of consistency with results from previous studies may be due to the existence of physiological polytypism among populations, with some individuals relying more on metabolism and some relying more on insulation.
In future studies, a genome-wide analysis approach is needed to quantify brown adipose cells involving basal metabolic rate or thyroid hormone and examine other genetic factors in addition to mtDNA polymorphism, as well as a larger sample size to enable statistical analysis. In particular, it is necessary to continue accumulating physiological and anthropological data in order to determine whether the group differences observed in the present study were the result of functional differences due to mtDNA polymorphism or due to another factor, such as population structure or the hitchhiking effect. These findings are probably important for discussions of physiological anthropology.
**Conclusions**
Inter-group differences in rectal temperature and oxygen consumption are seen in summer but not in winter. This may be because cold tolerance is supplemented by seasonal acclimatization. The change in the respiratory exchange ratio suggests that haplogroup D subjects change their metabolism more, while haplogroup non-D subjects rely more on insulation. This may mean that there is a relationship between mtDNA polymorphism and physiological polytypism, with a tendency towards either metabolic adaptation or isolative adaptation within a population.
**Abbreviations**
mtDNA: mitochondrial DNA; PCR: polymerase chain reaction; ST: shivering thermogenesis; NST: nonshivering thermogenesis.
**Competing interests**
The authors declare that they have no competing interests.
**Authors' contributions**
TN and SW contributed to the design of the experiments. MM and YN contributed to data collection and analysis. YH and RK contributed to the genetic analysis. All authors read and approved the final manuscript.
**Acknowledgements**
We thank Prof. Hiroki Oota for helpful comments and discussion of the genetic analysis in this study. This work was funded by a Grant-in-Aid for Scientific Research (A) (23247044).
**Author details**
1Graduate School of Design, JSPS Research Fellow DC, Kyushu University, Fukuoka 815-8540, Japan. 2Graduate School of Integrated Frontier Sciences, Kyushu University, Fukuoka 812-8581, Japan. 3School of Agriculture, Tokai University, Kumamoto 869-1404, Japan. 4Department of Forest and Forest Products Sciences, Faculty of Agriculture, Kyushu University, Fukuoka 812-8581, Japan. 5Department of Human Science, Faculty of Design, Kyushu University, Fukuoka 815-8540, Japan.
Received: 10 April 2012 Accepted: 13 August 2012 Published: 28 August 2012
**References**
1. Hammel HT, Elsner RW, Le Messurier DH, Andersen HT, Milan FA: Thermal and metabolic responses of the Australian Aborigines to moderate cold in summer. *J Appl Physiol* 1959, **14**:605–615.
2. Davis TR, Johnston DR: Seasonal acclimatization to cold in man. *J Appl Physiol* 1961, **16**:231–234.
3. Bittel J: The different types of general cold adaptation in man. *Int J Sports Med* 1992, **13**:172–176.
4. Carlson LD, Burns HL, Holmes TH, Webb PP: Adaptive changes during exposure to cold. *J Appl Physiol* 1953, **5**:672–676.
5. Keatinge WR: The effect of repeated daily exposure to cold and of improved physical fitness on the metabolic and vascular response to cold air. *J Physiol* 1961, **157**:209–220.
6. Andersen KL, Løyning Y, Nelms JD, Wilson O, Fox RH, Bolstad A: Metabolic and thermal response to a moderate cold exposure in nomadic Lapps. *J Appl Physiol* 1960, **15**:649–653.
7. Scholander PF, Hammel HT, Hart JS, Lemessurier DH, Steen J: Cold adaptation in Australian aborigines. *J Appl Physiol* 1958, **13**:211–218.
8. Crile GW, Quiring DP: Indian and Eskimo metabolisms. *J Nutr* 1939, **18**:361–368.
9. Hart JS, Sabean HB, Hildes JA, Depocas F, Hammel HT, Andersen KL, Irving L, Foy G: Thermal and metabolic responses of coastal Eskimos during a cold night. *J Appl Physiol* 1962, **17**:953–960.
10. Castellani JW, Young AJ, Kain JE, Sawka MN: Thermoregulatory responses to coldwater at different times of day. *J Appl Physiol* 1999, **87**:243–246.
11. Van Someren EJ, Raymann RJ, Scherder EJ, Daanen HA, Swaab DF: Circadian and age-related modulation of thermoreception and temperature regulation: mechanisms and functional implications. *Ageing Res Rev* 2002, **1**:721–778.
12. Maeda T, Sugawara A, Fukushima T, Higuchi S, Ishibashi K: Effects of lifestyle, body composition, and physical fitness on cold tolerance in humans. *J Physiol Anthropol Appl Human Sci* 2005, **24**:439–443.
13. Budd GM, Brotherhood JR, Beasley FA, Hendrie AL, Jeffrey SE, Lincoln GJ, Solaga AT: Effects of acclimatization to cold baths on men’s responses to whole-body cooling in air. *Eur J Appl Physiol Occup Physiol* 1993, **67**:438–449.
14. Hesslink RL Jr, D’Alesandro MM, Armstrong DW 3rd, Reed HL: Human cold air habituation is independent of thyroxine and thyrotropin. *J Appl Physiol* 1992, **72**:2134–2139.
15. Leppäluoto J, Korhonen I, Hassi J: Habituation of thermal sensations, skin temperatures, and norepinephrine in men exposed to cold air. *J Appl Physiol* 2001, **90**:1211–1218.
16. Araki T, Toda Y, Inoue Y, Tsujino A: The effect of physical training on cold tolerance. *Jpn J Phys Fitness Sports Med* 1978, **27**:149–156.
17. Lee YH, Tokura H: Seasonal adaptation of thermal and metabolic responses in men wearing different clothing at 10 degrees C. *Int J Biometeorol* 1993, **37**:36–41.
18. Inoue Y, Nakao M, Ueda H, Araki T: Seasonal variation in physiological responses to mild cold air in young and older men. *Int J Biometeorol* 1995, **38**:131–136.
19. Yasukouchi A, Yamasaki K, Iwanaga K, Fujiwara M, Sato H: Seasonal effects on the relationships between morphological characteristics and decrement of rectal temperature in a cold environment. *Ann Physiol Anthrop* 1983, **2**:39–44 (in Japanese with English abstract).
20. Sato H, Yamasaki K, Yasukouchi A, Watanuki S, Iwanaga K: Sex differences in human thermoregulatory response to cold. *J Hum Ergol* 1988, **17**:57–65.
21. Mäkinen TM, Pääkkönen T, Palinkas LA, Rintamäki H, Leppäluoto J, Hassi J: Seasonal changes in thermal responses of urban residents to cold exposure. *Comp Biochem Physiol A Mol Integr Physiol* 2004, **139**:229–238.
22. van Ooijen AMJ, van Marken Lichtenbelt WD, van Steenhoven AA, Westerterp KR: Seasonal changes in metabolic and temperature responses to cold air in humans. *Physiol Behav* 2004, **82**:545–553.
23. Leonard WR, Snodgrass JJ, Sorensen MV: Metabolic adaptation in indigenous Siberian populations. *Annu Rev Anthropol* 2005, **34**:451–471.
24. Wallace DC: A mitochondrial paradigm of metabolic and degenerative diseases, aging, and cancer: a dawn for evolutionary medicine. *Annu Rev Genet* 2005, **39**:359–407.
25. Mishmar D, Ruiz-Pesini E, Golik P, Macaulay V, Clark AG, Hosseini S, Brandon M, Easley K, Chen E, Brown MD, Sukernik RI, Olckers A, Wallace DC: Natural selection shaped regional mtDNA variation in humans. *Proc Natl Acad Sci USA* 2003, **100**:171–176.
26. Balloux F, Handley LJ, Jombart T, Liu H, Manica A: Climate shaped the worldwide distribution of human mitochondrial DNA sequence variation. *Proc Biol Sci* 2009, **276**:3447–3455.
27. Lesná I, Vybíral S, Janský L, Zeman V: Human nonshivering thermogenesis. *J Therm Biol* 1999, **24**:63–69.
28. Vybíral S, Lesná I, Janský L, Zeman V: Thermoregulation in winter swimmers and physiological significance of human catecholamine thermogenesis. *Exp Physiol* 2000, **85**:321–326.
29. Astrup A, Bulow J, Madsen I, Christensen NJ: Contribution of BAT and skeletal muscle to thermogenesis induced by ephedrine in man. *Am J Physiol Endocrinol Metab* 1985, **248**:507–515.
30. van Marken Lichtenbelt WD, Vanhommerig JW, Smulders NM, Drossaerts JM, Kemerink GJ, Bouvy ND, Schrauwen P, Teule GJ: Cold-activated brown adipose tissue in healthy men. *N Engl J Med* 2009, **360**:1500–1508.
31. Saito M, Okamatsu-Ogura Y, Matsushita M, Watanabe K, Yoneshiro T, Nio-Kobayashi J, Iwanaga T, Miyagawa M, Kameya T, Nakada K, Kawai Y, Tsujisaki M: High incidence of metabolically active brown adipose tissue in healthy adult humans: effects of cold exposure and adiposity. *Diabetes* 2009, **58**:1526–1531.
32. Wijers SLJ, Smit E, Saris WHM, Mariman ECM, van Marken Lichtenbelt WD: Cold- and overfeeding-induced changes in the human skeletal muscle proteome. *J Proteome Res* 2010, **9**:2226–2235.
33. van Marken Lichtenbelt WD, Schrauwen P: Implications of nonshivering thermogenesis for energy balance regulation in humans. *Am J Physiol Regul Integr Comp Physiol* 2011, **301**:285–295.
34. Shinoda K: Analysis of DNA variations reveals the origins and dispersal of modern humans. *J Geogr* 2009, **118**:311–319.
35. Marcuello A, Martinez-Redondo D, Dahmani Y, Casajús JA, Ruiz-Pesini E, Montoya J, Lopez-Pérez MJ, Diez-Sánchez C: Human mitochondrial variants influence on oxygen consumption. *Mitochondrion* 2009, **9**:27–30.
36. Fuku N, Park KS, Yamada Y, Nishigaki Y, Cho YM, Matsuo H, Segawa T, Watanabe S, Kato K, Yokoi K, Nozawa Y, Lee HK, Tanaka M: Mitochondrial haplogroup N9a confers resistance against type 2 diabetes in Asians. *Am J Hum Genet* 2007, **80**:407–415.
37. Tranah GJ, Manini TM, Lohman KK, Nalls MA, Kritchevsky S, Newman AB, Harris TB, Miljkovic I, Biffi A, Cummings SR, Liu Y: Mitochondrial DNA variation in human metabolic rate and energy expenditure. *Mitochondrion* 2011, **1**:1–7.
38. Li FX, Ji FY, Zheng SZ, Yao W, Xiao ZL, Qian GS: MtDNA haplogroups M7 and B in southwestern Han Chinese at risk for acute mountain sickness. *Mitochondrion* 2011, **11**:553–558.
39. Nishimura M, Motoi M, Hoshi Y, Kondo R, Watanuki S: Relationship between mitochondrial haplogroup and psychophysiological responses during cold exposure in a Japanese population. *Anthropol Sci* 2011, **119**:265–271.
40. Kurazumi Y, Tsuchikawa T, Kakutani K, Torii T, Matsubara N, Horikoshi T: Evaluation of the conformability of the calculation formula for the body surface area of the human body. *Jpn J Biometeorol* 2009, **39**:101–106.
41. Brozek J, Grande F, Anderson JT, Keys A: A densitometric analysis of body composition: revision of some quantitative assumptions. *Ann N Y Acad Sci* 1963, **110**:113–140.
42. Hardy JD, Dubois EF: The technique of measuring radiation and convection. *J Nutr* 1938, **5**:461–475.
43. Sasaki J, Kumagae G, Sata T, Ikeda M, Tsutsumi S, Arakawa K: Seasonal variation of serum high density lipoprotein cholesterol levels in men. *Atherosclerosis* 1983, **48**:167–172.
44. Klingenspor M: Cold-induced recruitment of brown adipose tissue thermogenesis. *Exp Physiol* 2003, **88**:141–148.
45. Tanaka M, Takeyasu T, Fuku N, Li-Jun G, Kurata M: Mitochondrial genome single nucleotide polymorphisms and their phenotypes in the Japanese. *Ann N Y Acad Sci* 2004, **1011**:7–20.
doi:10.1186/1880-6805-31-22
Cite this article as: Nishimura et al.: Relationship between seasonal cold acclimatization and mtDNA haplogroup in Japanese. *Journal of Physiological Anthropology* 2012 31:22.
A special meeting of the Session of The Church of the Covenant was held in the dining room on Wednesday, October 14, 2015. The Rev. Dr. Stuart D. Broberg called the meeting to order with a prayer at approximately 7:05 p.m. Dr. Broberg reviewed a letter he wrote to the Session (see Attachment 1). In his review of the letter, he recommended that the Session adopt a resolution confirming that this congregation can decide to nominate whomever it wants so long as those decisions conform with the current Book of Order.
Dr. Broberg then introduced Mr. Sam Foreman, an attorney, whom the Session has asked to assist us in evaluating the ongoing nominating issue. Dr. Broberg thanked Mr. Foreman for his assistance to the Session to help us work through our decision-making process, including providing the Session with the memo he will be reviewing tonight.
Mr. Foreman addressed the Session. He provided his thoughts on the August 27, 2001 Policy (see Attachment 2). He said the resolution was fine until challenged by a higher body, at which point he said it would very likely be overturned. He felt that, specifically, the first sentence in the fourth bullet on that page would now be considered inappropriate. He advised that the Session should reconsider that bullet, not the whole page. Mr. Foreman provided a memo to the Session, in which he summarized his conclusions on this matter (see Attachment 3).
Mr. Foreman said that, in his opinion based on his review of the current Book of Order and The Confessions, it is still up to the Nominating Committee and the congregation to decide whether or not any person is qualified to be an officer. He agreed with Dr. Broberg’s above-stated idea that this congregation (through the Nominating Committee) can decide whom to nominate. He recommended that the Session change the 2001 Policy to prevent someone from taking a Presbyterian judicial action.
Dr. Broberg then summarized Mr. Foreman’s main points: 1) The August 27, 2001 Policy is all right until something happens to make it not all right, such as being challenged and overturned by a higher body; 2) That Policy would lose in such a challenge because it picks out a specific provision of The Book of Order, and that type of restating of The Confessions and/or the Book of Order is frowned upon; and, 3) the Nominating Committee (which reports to the congregation, not to the Session) must consider all situations when selecting nominees. All agreed that, based on the information provided by Mr. Foreman and Dr. Broberg, our congregation can
nominate and elect whomever it wants to so long as those procedures comply with the current Book of Order and The Confessions.
There was discussion about the information and documents provided by Dr. Broberg and Mr. Foreman. There was discussion about how we could address the 2001 Policy. There was discussion about what would happen to the current Nominating Committee’s slate if that Policy was rescinded. There was discussion about what the 2001 Policy meant and what legal weight it carries.
All agreed that the Session did not want to pick any sides in this issue, or to even give the appearance of picking a side. All agreed after much discussion that it is very important to let each member of the congregation voice their opinion through the standard congregational voting process. Dr. Broberg then advised the congregation about how the voting process would be handled if there were nominations from the floor. He noted that he has gone through “nominations from the floor” scenarios many times in his career, and feels very comfortable with how to deal with that if it occurs at the congregational meeting on November 22.
After much discussion, it was agreed and noted that the August 27, 2001 document represented only a resolution and Statement of Belief by the Session in place at that time. That document represented the views and votes of the Session at that time, and was consistent with the Book of Order then in effect. But it was not a Policy of The Church of the Covenant. It does not comply with the current Book of Order, and it can be rescinded.
Gordon Core then moved and April Betzner seconded the following motion: *The first sentence in the fourth bullet of the Statement of Belief presented in the August 27, 2001 minutes of the Session conflicts with the current Book of Order, based on legal advice received by the current Session. Therefore, that Statement is not binding on the Nominating Committee. As stated in the current Book of Order, G-2.0105, “Freedom of Conscience with respect to the interpretation of scripture is to be maintained.”* This motion was unanimously approved by the Session.
Next, Sue Denmead moved and Linda Grimm seconded a motion noting the Session’s gratitude to Mr. Sam Foreman for his outstanding assistance to us in this matter. This motion passed unanimously.
Dr. Broberg then reviewed a memo he wrote that outlined the procedures for dealing with nominations from the floor during congregational elections. This memo is presented as Attachment 4. After reviewing this memo, the Session had many questions and discussed a motion by Jim Little to adopt the memo as a guide for the next congregational meeting in case nominations were made from the floor. After this discussion, Jim Little withdrew his motion.
Dr. Broberg asked the people who had previously met with: 1) Denise Gibson; 2) the person who brought the August 27, 2001 document to the Nominating Committee; and, 3) Rev. Craig Kephart (Washington Presbytery Executive Presbyter), to meet with these three people again. The purpose of this second meeting is to keep those people informed about the Session’s activities and intentions.
Dennis Myers moved and Sue Denmead seconded a motion to adjourn this special Session meeting. There was no discussion and the Session unanimously approved the motion.
Dr. Broberg closed the meeting with prayer at 8:37 p.m.
Respectfully submitted,
Jonathan M. Pachter
Jonathan M. Pachter, Clerk of Session
Letter to Session
October 14, 2015
Dear Session Members:
Thank you for being part of the Lord’s process in seeking to do the right thing, not to harm people in our church family, and being obedient to God’s Word. It is not an easy task but I believe we are rising to the occasion and balancing all of these good things along the way.
We have sought to receive wise counsel from an attorney related to our policies and my hope is that we will listen carefully and take the advice we have been given and do the right thing by seeking to follow the law and The Book of Order.
Then, my hope is that we will also review and approve the Nominations from the Floor Memo, prepared with the advice of the Stated Clerk of Washington Presbytery, John Rodgers. He has given us guidance related to The Book of Order, Roberts Rules of Order, Revised, and our own church By-laws. I would hope we would continue to endorse processes that are fair, transparent and seeking to be faithful to the rules.
I would hope we would also seek to follow up with the folks we sent teams to speak with in another round of conversation; good communication and people not being surprised by our decisions will be helpful in allowing us all to respond in a good way to what we are hoping to achieve. Again, I believe that we can find a way that is pleasing in the sight of God and our best options under all the many and varying opinions related to these important matters.
Then, I would hope we would plan a process how to inform our congregation, through some sort of information meeting, so they can best understand how we have tried to take steps that are reasonable, representative of the entire congregation, as well as being according to the rules. We must attempt to keep both sides of this debate over these important matters in focus, losing sight of neither side in the process, and endeavoring to anticipate and address those concerns. Good, faithful and transparent communication will be important to retaining the trust of our congregation.
Then, I would hope that we would be certain to draft language in policies that assure The Church of the Covenant lifts up our God-given right to elect officers and pastors whom we alone choose, without the interference of the civil authorities or higher councils of the church, through representative processes controlled by the members themselves. Our church polity, our system of governance, the ways we nominate and elect our own leaders, are not just haphazard ways of governing ourselves. These are heartfelt Christian beliefs that come from the Bible as the Word of God, The Confessions of Faith of the PC (USA), as well as the Book of Order (church constitution) of our church. The way we govern ourselves in a representative system of church governance is as much an expression of faith as is The Apostles Creed.
Lastly, I would reiterate our need to pray together as a session in seeking God’s will collectively, which will be better than any one of our ideas individually. I am praying for you and I believe in your ability to walk with God through this important time. I would ask that this letter would be included in the minutes of this called Session Meeting, October 14, 2015 and as Moderator I do so stipulate.
Faithfully,
Dr. Stu Broberg
The Session of The Church of the Covenant (PCUSA) of Washington, PA responds to the 213th General Assembly’s actions with the following convictions and intentions:
- The recent statement on the Lordship of Jesus Christ and his saving work is woefully weak in comparison with existing statements already recorded in our Book of Confessions. They are based on such passages as John 14:6 where Jesus says: “I am the way, and the truth and the life. No one comes to the Father except through me.”; and Acts 4:12, where scripture indicates the conviction of the apostles: “There is salvation in no one else, for there is no other name under heaven given among mortals by which we must be saved.”
- The 213th General Assembly’s statement allows for the continued persistence of an increasing universalism in our denomination, which denies the need for repentance of all sin for salvation through Jesus Christ, and violates the God-given free will of those who foolishly choose to reject Jesus as Savior and Lord. Repentance of sin and confession of faith are necessary for salvation.
- The standard revealed for salvation is clear in Romans 10:9, 10. “If you confess with your lips that Jesus is Lord and believe in your heart that God raised him from the dead, you will be saved. For one believes with the heart and so is justified, and one confesses with the mouth and so is saved.”
- We also affirm the standard for ordination found in Amendment B (1996) as reflecting God’s divine intentions for humankind, and congruency with Biblical principles for those who seek the call of ordained leadership in the church. Furthermore, we believe that all professing Christians are called by God to exhibit faithfulness between a man and a woman in marriage, and to uphold the standard of chastity in singleness.
- Our intention is to continue to proclaim Jesus Christ as the only Savior for the world and the only and pre-eminent Lord of Life.
We intend to stand firm on these basic beliefs regardless of denominational actions. If necessary, we will respond appropriately in peace and love, yet with the firmness of our convictions in a positive way. It is our intention to let the community and the world know who we are and where we stand on these matters in a proactive way, without impugning those who would take another position.
It was moved and seconded that the above statement of belief be adopted and shared with the congregation, the churches and pastors of Washington Presbytery, the Presbytery staff, and the General Assembly Council. Motion passed. Rev. Meyer announced that a copy of this statement will be placed in the *Messenger* and a couple of the upcoming bulletins.
Rev. Meyer also shared an article from *Presbyterians Today*, which responded to a question regarding the standing of homosexuals as Christians. The author, James Ayers, stated that Christian tradition holds that homosexual practice is a sin. Whether that traditional stand is a mistake must be
MEMORANDUM
TO: The Session of the Church of the Covenant
267 E. Beau St.
Washington, PA 15301
FROM: Samuel H. Foreman
DATE: October 12, 2015
Question Presented
Whether the resolution adopted by the Church of the Covenant, dated Aug. 27, 2001, is valid and binding upon the Church, and whether it acts as a *per se* bar to consideration of the candidacy for ordination in the office of Deacon of a non-chaste single person.
Brief Answer
The Church’s resolution of August 27, 2001 does not appear to be valid in light of judicial precedent within the denomination, which has struck down similar resolutions which purport to select a single aspect of the Confessions to serve as a bar to candidacy for ordination. However, even in the absence of such a resolution, it is entirely within the right and power of the Session or council considering any applicant’s candidacy for ordination to deny their application on the basis that they do not live in faithful heterosexual marriage or chaste singleness.
Answer
The resolution of August 27, 2001, reads, in pertinent part:
*We also affirm the standard for ordination found in Amendment B (1996) as reflecting God’s divine intentions for humankind, and congruency with Biblical principles for those who seek the call of ordained leadership in the church. Furthermore, we believe that all professing Christians are called by God to exhibit faithfulness between a man and a woman in marriage, and to uphold the standard of chastity in singleness.*
This paragraph relates to standards for ordination, and sets forth a prohibition against adultery and non-chastity, and requires commitment to monogamy or celibacy for someone called to a position involving ordination.\(^1\)
First, it has been suggested that the case of *Session of the First Presbyterian Church of Washington, 1793, v. Presbytery of Washington*, 218-15, is controlling on this issue. I do not think the decision of the GAPJC in the Washington Presbytery matter is binding, and I think it is distinguishable from the resolution under consideration here. The resolution at issue in Remedial Case 218-15 read, in pertinent part:
“Therefore, any departure from ordination standards mandated in the Book of Order, unless repented of, shall bar a candidate from ordination and/or installation. . .”
The salient point of the GAPJC holding, affirming the Synod’s opinion, is in the third specification of error, in the Synod’s holding that the Washington Presbytery’s Resolution A “provided an unqualified bar to ordination or installation in Washington Presbytery of anyone who is unrepentant of any act or thought that the Confessions call sin.” Essentially, Resolution A was found invalid because it was overbroad, and might bar a candidate for failure to observe the sabbath in some fashion on Sunday, marriage to a Catholic, or commission of a similar lesser transgression which the Confessions deem sin.
This does not provide much guidance as to the Aug. 27, 2001 resolution, however. While the Washington Presbytery resolution in Case 218-15 was struck down because it broadly barred anyone from ordination who committed any sin unrepentantly, and could therefore be enforced arbitrarily, one might argue that the Church of the Covenant’s Resolution of
---
\(^1\) Amendment B was a successful effort to amend the Book of Order by adding the following language:
“Those who are called to office in the church are to lead a life in obedience to Scripture and in conformity to the historic confessional standards of the church. Among these standards is the requirement to live either in fidelity within the covenant of marriage between a man and a woman, or chastity in singleness. Persons refusing to repent of any self-acknowledged practice which the confessions call sin shall not be ordained and/or installed as deacons, elders, or ministers of the Word and Sacrament. (G-6.0106.b).”
Aug. 27, 2001 is valid, because it is quite *specific* in the standard which it requires of applicants for ordination. Instead of broadly barring anyone from serving in a position of leadership for any transgression of the principles of the Confessions, it specifies that no one may be ordained who fails to abide by the Confessions’ standards pertaining to marriage and chastity.
However, there is another case decided by the General Assembly Permanent Judicial Commission which is more directly on point: *Bush v. Pittsburgh Presbytery*, 218-10. In that case, the Pittsburgh Presbytery adopted a resolution which read, in pertinent part:
Resolves that no exceptions to the requirement that all Ministers of Word and Sacrament must “live either in fidelity within the covenant of marriage between a man and a woman or in chastity in singleness” (Book of Order, G-6.0106b) will be allowed within the jurisdiction of this Presbytery;
and
Resolves that Ministers of Word and Sacrament shall be prohibited from conducting same-sex marriages within the jurisdiction of this Presbytery. [*The GAPJC decision does not address this portion of the resolution*]
The GAPJC held that the Presbytery could not adopt an ordination standard which purported to be different than that required in the Book of Order (which then contained the express “fidelity and chastity” requirement), and that to allow a Presbytery to state that “in this Presbytery, no one can be ordained who violates X,” wrongly implied that this might be permissible in some other Presbytery, when, in fact, all Presbyteries must abide by the same rules. It wrote:
“Adopting statements about mandatory provisions of the Book of Order for ordination and installation of officers falsely implies that other governing bodies might not be similarly bound; that is, that they might choose to restate or interpret the provisions differently, fail to adopt such statements, or possess
some flexibility with respect to such provisions. Restatements of the Book of Order, in whatever form they are adopted, are themselves an obstruction to the same standard of constitutional governance no less than attempts to depart from mandatory provisions.”
The GAPJC reached the following specific holdings:
2. Examinations of Candidates: Ordaining and installing bodies must examine candidates for ordination and/or installation individually. The examining body is best suited to make decisions about the candidate’s fitness for office, and factual determinations by examining bodies are entitled to deference by higher governing bodies in any review process.
3. Statements of “Essentials of Reformed Faith and Polity”: Attempts by governing bodies that ordain and install officers to adopt resolutions, statements or policies that paraphrase or restate provisions of the Book of Order and/or declare them as “essentials of Reformed faith and polity” are confusing and unnecessary; and are themselves an obstruction to constitutional governance in violation of G-6.0108a.
In the body of its decision, the GAPJC further wrote: “G-6.0108a sets forth standards that apply to the whole church. These standards are binding on and must be followed by all governing bodies, church officers and candidates for church office.” I interpret this to mean that if a Presbytery cannot adopt such a resolution, then neither can an individual church.
Based upon these holdings, I believe the Resolution of August 27, 2001 is not valid, as it is factually analogous to the Pittsburgh Presbytery’s resolution which was declared invalid in *Bush*.
The next question is: In the absence of the resolution, what standards govern the Church’s decision whether to accept an applicant for ordination? The Book of Order 2013-2015 provides the following in place of the prior G-6.0108, and the “Fidelity and Chastity” requirement which was fought over for many years:
G-2.0105 Freedom of Conscience
It is necessary to the integrity and health of the church that the persons who serve it in ordered ministries shall adhere to the essentials of the Reformed faith and polity as expressed in this Constitution. So far as may be possible without serious departure from these standards, without infringing on the rights and views of others, and without obstructing the constitutional governance of the church, freedom of conscience with respect to the interpretation of Scripture is to be maintained. It is to be recognized, however, that in entering the ordered ministries of the Presbyterian Church (U.S.A.), one chooses to exercise freedom of conscience within certain bounds. His or her conscience is captive to the Word of God as interpreted in the standards of the church so long as he or she continues to seek, or serve in, ordered ministry. The decision as to whether a person has departed from essentials of Reformed faith and polity is made initially by the individual concerned but ultimately becomes the responsibility of the council in which he or she is a member.
(emphasis added). This would appear to me to leave discretion in the local church or council as to whether any particular candidate’s moral behavior – whether it pertain to sexuality or any other sin – serves to prohibit them from candidacy for ordination.
Accordingly, while the Resolution of Aug. 27, 2001 may not be valid under these judicial decisions, this does not mean that the Session must ordain a deacon who has an open or adulterous marriage, or someone who is homosexual. To the contrary, if the Session is of the opinion that a candidate’s exhibition of such behavior “departs from the essentials of the Reformed Faith,” then it is a valid basis upon which to refuse ordination. The judicial decisions merely state that no resolution is permissible which isolates or elevates one part of the Confessions, all of which must be considered in any particular candidate’s application.
S. H. Foreman
Memo on Nominations from the Floor
Annual Meeting to Elect Officers – November 22, 2015
The Church of the Covenant
Note: this procedure drafted with the advice of Stated Clerk John Rodgers, 10/13/15
Note: The procedure comes from the Book of Order, Roberts Rules of Order, Revised and our Bylaws
The Nominating Committee, through its Chair, presents its report and its slates seriatim: for Elders, then Deacons, then the Nominating Committee, and then the Trust Fund Management and Auditor positions. The Moderator begins by stating to the Chair of Nominating, “Nominations are now in order for the office of _____ (elder, deacon, nominating committee, trust fund management, auditor).” The Committee presents a slate with the exact number of names corresponding to the openings to be elected and filled by the congregational meeting (exception from Roberts Rules for more than the maximum, p. 433). Once the slate has been moved by the Chair of Nominating, the Moderator asks: “Are there any nominations from the floor?” If there are none, the Moderator then calls for a motion that nominations be closed; moved, seconded and approved (requires a 2/3 vote). The vote on the slate is then taken by voice vote (simple majority of those members present and voting). No unanimous ballot may then be cast.
If there is a nomination from the floor, the Moderator again asks: “Are there any further nominations from the floor?” If there are none, a motion is then made that nominations be closed (2/3 vote). The Moderator then calls for a motion to approve the use of paper ballots (majority vote). The tellers of the election are then instructed to pass out previously prepared ballots to members of the congregation only.
These previously prepared ballots list the names of all of the people nominated by the Nominating Committee, with a box to check next to each name. Above each category (elder, deacon, nominating committee member, trust fund management, auditor) it states: “Vote for a maximum of ----” (if there are 6 openings for elder, then the maximum number to vote for is 6). A verbal instruction is given that, in the event someone marks a ballot with more than the maximum, the tellers are instructed to declare that ballot null and void. There are also open lines with no name printed next to them, likewise with a box to check, where the name of someone nominated from the floor can be written in. The Moderator makes a verbal announcement noting where the name is to be written in and the spelling of the name, with instructions that clear, legible writing is important; to fold the ballot in half only; that placing more than one ballot folded together invalidates both ballots; and about write-ins (but not more than 6 for any office). The Moderator inquires: “Have all those voted who wish to do so?” and then declares that the polls are closed.
As has been our practice, there will be no speeches or discussion from any of the candidates for office. Procedural questions are always in order.
The tellers for the election will be appointed in advance. They will consist of the ushers for the 11am service for that day and also a person appointed by the Moderator who knows the congregation and its rolls, to be able to determine when the ballots are passed out who is and who is not a current member of the congregation. Once the election is set and ready to proceed, the Moderator will call for the distribution of ballots by the tellers. The person appointed as head teller, if there is some question about whether someone is or is not a church member, will have the current roll books present in order to double check someone’s status. The tellers hand the ballots to individuals whom they know and
believe are members of The Church of the Covenant. While non-member visitors may be present, they do not have the right to vote. The Moderator will appoint the head teller in advance.
The tellers then take the written ballots back to the office and count them carefully three times. The totals must agree three times in a row to be final. If someone mismarks a ballot by voting for more than the maximum in that category, then that ballot is declared “illegal”. If a ballot is folded together with another ballot, likewise both ballots are declared “illegal”. The head teller then records the votes, signs the record, and hands that record to the Moderator to announce to the congregation and for the totals to be entered into the minutes of the meeting. The “Tellers Report” is included in its entirety in the minutes of the meeting by the Clerk. The paper ballots are given by the head teller to the Clerk, where they are kept under lock and key for one year following the election and then destroyed. The names of those elected are then published in the following week’s bulletin.
**The Tellers Report**
Number of Votes Cast ______.
Necessary for Election (a majority of the total number cast) ________.
Person X received ________. (list name and total votes received)
Person Y received ________.
Person Z received ________.
Etc. ________.
Illegal Votes ________.
Ballots folded together (illegal) ________.
List names of Tellers _____________________________________________.
List name of Head Teller ________________________________________.
10/14/15
Montagehandleiding klimwand speeltoestel – V2.1 Assembly manual for climbing-wall play set – V2.1
Dit product voldoet aan standaard EN 71 en richtlijn 2009/48/EC. The product complies with standard EN 71 & Directive 2009/48/EC.
Onderdelen in het pakket: Parts in the package:
Warnings
* Only for outdoor domestic use.
* Not suitable for children under 3 years. Risk of crushing.
* To be used under the direct supervision of an adult. Risk of falling.
* Keep away from fire.
* The product is supplied disassembled. Adult assembly required.
* Wooden parts are not necessarily free from splinters, sharp edges, or sharp ends.
* The product is meant to be used by children from the age of 3 and weighing no more than 50 kg.
Lees voorafgaand aan de montage zorgvuldig deze handleiding en controleer of alle onderdelen aanwezig en in goede staat zijn.
Montage
Het speeltoestel moet in elkaar worden gezet overeenkomstig de tekeningen die in de handleiding staan afgedrukt. Er zijn ten minste twee personen nodig om een veilige montage te waarborgen. Benodigd gereedschap: waterpas, zaag, handschoenen, veiligheidsbril, snoerloze boormachine met boor- en schroevendraaierbits, rolmaat, potlood, trapladder, sleutelset en vijl. Boor schroefgaten voor om splijten van het hout te voorkomen.
Installatie
De schommelcombinatie moet worden gemonteerd op een vlakke ondergrond die bedekt is met een geschikte bovenlaag: zand, boomschorssnippers, of een goed onderhouden grasmat. De montage moet gebeuren zoals aangegeven in het plaatje met bijschrift: "Veilige montage van de schommelcombinatie, bovenaanzicht." Er dient een afstand van ten minste twee meter te worden aangehouden tussen de schommelcombinatie en andere voorwerpen (inclusief dingen die boven de grond hangen zoals boomtakken). De schommelstaanders moeten op de in beton gegoten ankers worden bevestigd (zoals afgebeeld). De poten mogen niet in zand of ander zacht materiaal worden geplaatst, omdat dan onvoldoende stabiliteit kan worden gegarandeerd.
Verankering (6x)
Om voldoende stabiliteit te garanderen moet de schommel verankerd worden in de grond. Graaf gaten van 60 bij 60 cm en 40 cm diep (zie tekening). Leg een laag grind op de bodem van het gat om de drainage te verbeteren. Stort beton in het gat tot 5 cm onder het maaiveld. Vul zodra het beton hard is het gat met een laag zand, zodat het gelijk is met de grond eromheen.
Gebruik
Alle uitstekende schroeven en scherpe randen moeten onmiddellijk na montage glad geschuurd worden om verwondingen te voorkomen. De hoogte van het zitje van de schommel (350 mm) en de algehele staat van het speeltoestel moeten dagelijks worden gecontroleerd. Minimaal elke drie maanden moet de stabiliteit, slijtage van de bewegende onderdelen en de sterkte van de verbindingen worden gecontroleerd. De staat van het basismateriaal, roest van metalen onderdelen of verrotting van het hout moet één keer per jaar worden gecontroleerd. Bewegende delen moeten worden geolied en versleten of defecte onderdelen moeten worden vervangen. Bouten moeten worden aangetrokken, touwen worden vastgesnoerd.
Nota bene! De II* constructie van het speeltoestel kan alleen maar als veilig worden beschouwd wanneer het wordt gebruikt met een geschikte glijbaan (1500 ± 50 mm van de grond, maximale breedte 500 mm). Wanneer u de glijbaan monteert, laat dan ten minste 25 mm (30 mm is beter) ruimte aan beide zijden om te voorkomen dat kleding bij het glijden vast komt te zitten.
De glijbaan die wordt gebruikt moet voldoen aan de Europese standaard EN 71 en passen bij de betreffende schommelcombinatie.
Speeltoestel zonder glijbaan.
Speeltoestel met glijbaan. Om veiligheidsredenen moeten de tweede sport vanaf boven en de twee schommelzittingen met touwen worden verwijderd voorafgaand aan de montage.
Wij adviseren om 's winters alle toebehoren te verwijderen en op te slaan omdat de grond, als deze bevroren is, niet geschikt is om veilig op te spelen. Plaats de schommel, om te voorkomen dat deze opwarmt, niet in de richting van de zon. Controleer bij warm weer of het oppervlak van de zitting niet te heet is.
Waarschuwingen:
* Niet geschikt voor kinderen jonger dan 3 jaar. Risico op vallen.
* Gebruiken onder direct toezicht van een volwassene. Touwen, verstikkingsgevaar.
* Uitsluitend voor huishoudelijk gebruik en voor buiten.
* Te monteren door een volwassene.
* Bedoeld voor kinderen van 3 tot 14 jaar (max 50 kg/kind).
* Het product is ontworpen voor gelijktijdig gebruik van maximaal 5 personen.
Voorzichtig zijn met:
* Onderdelen van kledingstukken die een verhoogd risico veroorzaken, zoals capuchons, knopen etc., kunnen blijven haken bij gebruik van het product en kunnen daardoor bij de spelende kinderen verstikkingsgevaar veroorzaken.
* Houten details kunnen aan de uiteinden gebreken vertonen evenals scherpe randen.
* Controleer voor gebruik of de verankering nog stevig is.
Nota bene! De constructie mag niet worden gewijzigd zonder toestemming van de fabrikant, omdat daardoor het gevaar ontstaat van ernstige verwondingen. Deze handleiding dient bewaard te worden om toekomstige discussies te vermijden.
Voor uw informatie: Hout is een natuurlijk product. Het is gevoelig voor omgevingsveranderingen. Fluctuaties in temperatuur en luchtvochtigheid kunnen veroorzaken dat het hout splijt of vervormt, met name wanneer het gaat om machinaal bewerkte houtproducten. Dit heeft echter doorgaans geen invloed op de sterkte van het hout. De schommel mag slechts één keer worden geïnstalleerd.
BEHANDELD HOUT DAT EEN BIOCIDE BEVAT. Bescherming tegen hout-aantastende organismen. Actieve bestanddelen: kopercarbonaat/koperhydroxide (1:1) en boorzuur.
Vermijd het inademen van zaagsel. Zorg dat het hout niet in contact komt met drinkwater of voedsel. Gebruik snippers van dit hout niet als bodem in dierverblijven, gebruik het hout niet in een visvijver. Verduurzaamd hout kan niet bij het gewone afval. Industrieel afval moet worden afgevoerd via een erkend afvalverwerkingsbedrijf.
Read these instructions carefully prior to assembly and check the number and condition of the parts.
Assembly
The children's playground must be installed in accordance with the drawings included in the manual. At least two people are
required to ensure safety of assembly. Spirit level, saw, protective gloves, safety glasses, cordless drill with drill bits and screwdrivers, tape measure, pencil, stair ladder, wrench set, file are required for assembly. Before screwing, please drill hole in order to avoid cracks in the wood.
Installation
Children's swing set must be installed on a smooth surface covered with an appropriate material: sand, bark or well-maintained grass. Installation must take place in accordance with the figure labelled 'Safe installation of combined swing, aerial view'. The distance between the swing set and other objects (including those situated above the ground, such as tree branches) should be at least 2 meters. The swing legs must be fixed to the concreted anchors (as indicated in the drawing). The legs must not be set in sand or another soft surface material, as this may not guarantee sufficient stability.
Anchoring (6x)
To ensure stability, the swing must be anchored in the ground. Dig holes in the soil at the given intervals (see the drawings) that are 400 mm deep with a cross-section of 600 x 600 mm. Place a layer of gravel at the bottom of each hole to improve rainwater absorption into the soil. Then pour concrete into the hole, stopping 50 mm below ground level. When the concrete has set, place a layer of soil on it up to ground level.
Use
All projecting screws and sharp edges must be planed smooth immediately after installation to avoid potential injuries. The height of the swing seat (350 mm) as well as the general condition of the set must be checked on a daily basis. Stability, wear on the moving parts, and the strength of connections should be checked at least every one to three months. The suitability of the base material, corrosion of accessories, and rotting of wooden parts should be checked once a year. Moving parts must be oiled, and worn and/or defective parts should be replaced with parts provided by the manufacturer. Bolts and ropes must be tightened.
NB! The children's playground construction II* can only be considered safe when it is used with the appropriate slide (height from ground surface 1500±50 mm, maximum width 500 mm). When installing the slide leave at least 25 mm (30 mm recommended) clearances on both sides, in order to prevent clothing entrapments during gliding. Providing the product with a suitable slide is the responsibility of the end-seller of the product. The slide that is used must meet the criteria of the EN 71 of the European Safety Standard and go with the given swing set.
Playground without a slide.
Playground with slide. To ensure safety, the second rung from the top and two swing seats with ropes must be removed during the installation process.
We advise the removal and storage of all accessories during the winter because the characteristics of the soil (when frozen) are not suitable for safe play. To avoid that the swing is heating up, do not place it facing the sun. In warm weather, check that the seating surface is not too hot.
Warnings
* Not suitable for children under 3 years. Risk of falling.
* To be used under the direct supervision of an adult. Ropes. Suffocation risk.
* Only for outdoor domestic use.
* Adult assembly required.
* This product is designed to be used by children from the ages of 3 to 14 (max 50kg/user).
* The product is designed for simultaneous use by up to 5 persons.
Caution
* Items of clothing causing increased danger such as hoods, buttons, etc., may become caught during use of the product and may therefore present a risk of suffocation to the wearer.
* Wooden details can contain slight torn grain or sharp ends and edges.
* Every time before using the playground you should make sure that the anchoring remains reliable.
NB! The structure MUST NOT be altered without the consent of the manufacturer, otherwise risk of serious injury. These instructions should be kept in order to avoid later disputes.
For your information:
Wood, as a natural material, is sensitive to changes in the environment. Fluctuations in temperature and weather conditions may cause the wood to split or warp – particularly in the case of machine-cut wood products. However, this generally does not affect the strength of the wood. This swing design is intended for one-time installation only.
TREATED TIMBER CONTAINING A BIOCIDAL PRODUCT. Control of wood destroying organisms.
Active Ingredients: copper carbonate / copper hydroxide (1:1) and boric acid. Avoid inhalation of sawdust. Do not use in contact with drinking water or for direct food contact. Do not use for animal bedding or in fish ponds. Dispose of treated wood responsibly. Industrial waste should be disposed of through an authorised waste contractor.
Chapter 6
Believing God
There is a big difference between the way Jesus lived and the way we live. As we draw closer to Him this becomes more and more clear. The fact is that, while He is holy, we are not. So His holiness makes our sinfulness stand out. In His presence sin appears as it really is – ugly and undesirable. It’s only when we know Jesus that we can truly learn to hate our sins.
As you begin to see that your sins really are sins, and that this is not just something we say, you may feel that Christ can’t forgive you. You may even feel that the sins you brought to Him before are not completely forgiven, that some of your guilt remains. This is simply not true.
"You need peace – Heaven's forgiveness and peace and love. Money cannot buy that peace. Study will not give it. The mind cannot find it. Being wise will not provide it. You can never hope to receive this peace by your own work."
Wólta'ii 6
Diyin God Joodlâago
Jesus be'iina' éí nihí dahinii'nánígíí doo yił aheełt'ée da. T'áadoo bahat'aadí Jesus hoł yi'ashgo díí baa ákoznízín doo. Nihí bạqahági ádaniit'é, bí éí diyin. Diyin nilínígíí biniinaa Jesus éí nihí bạqahági ádaniit'ehígíí nihí úshjáni iíł'í. T'áá ákót'éego bạqahági át'éii éí danichxó'ígíí dóó doo yá'ádaat'ehígíí danilíigo hoł bééhoozijih. Christ bit'áá ákwii át'ehígíí baa ákoniidzijihgo nihibạqahági át'éii jiiniidlá yileeh.
Bạqahági át'éii ts'ídá nichxó'ígíí át'é. Díí doo t'óó ádii'nii da. Kót'éego baa ákoznízingo, shí doo shaa núdidoot'áał da jinízin shújí łeh. T'áá úidáq' bạqahági át'éii yéę haa nídeet'ąą nidi doo t'áá ałtsojí' shaa nídeet'ąą da sha'shin jinízin łeh. Shibạqahági át'éii yéę t'ah nidi hóló, jinízin daats'í. Díí éí ts'ídá doo ákót'ée da!
"Ach'í' hózhó bídin nílíį lá – yá'qashdéę' aa náhidit'aahígíí dóó ach'í' hózhó dóó ayóó'ó'ó'ní nijéí bii' hólóogo bídin nílí. Díí doo béeso bee nahidoonihígíí át'ée da. Azhá hojiyáago yéego nitsídzíkees nidi doo bik'izhdoogáał da. Binijilnishgo nidi doo bik'izhdoolkah át'ée da lá. Diyin God
and power. God offers His peace to you as a gift. 'It will cost you nothing!' (Isaiah 55:1). It is yours if you will reach out your hands and take it" (*Steps to Jesus*, pp. 46-47).
"You have confessed your sins and chosen to put them out of your life. You have decided to give yourself to God. Now go to Him and ask Him to wash away your sins. Ask Him to give you a new heart, a new mind. Then believe that He does this, *because He has promised*. Jesus taught this lesson when He was on the earth. You must believe that you receive the gift God promises and that it is yours" (*Steps to Jesus*, p. 47).
To have peace in your heart you must believe that the sins you've asked Jesus to forgive are really forgiven. Jesus asks us to believe this, not because believing it will make us feel good, but because it's true. Notice, He doesn't ask us to believe this because we feel it is true. We might *not* feel it is true, but this doesn't change the fact. You asked God for forgiveness. Believe that He has answered your prayer – not because you feel it, but because He said so. It makes no difference how we feel. His Word is still true. If you have confessed your sins and in your heart truly put them away, then God has forgiven those sins completely.
t'áá jíík'e neilé – 'béeso t'áá gééd índa doo báháh ilínígóó' (Isaiah 55:1). T'áá jíík'e nee hodooleet, Diyin bich'í' dah diníyá dóó bíká díníchidgo índá" (*Steps to Jesus*, pp. 46-47).
"Hádáá' shíí nibąąhági át'ěii Diyin bich'í' áada nahosínílne', dóó nijéít'yi'di doo baa nichí' da sínílíí'. Diyin baa ánídíínít'á bich'í' háínídzíí'. Áko k'ad bich'í' náánídá, ákóbidiniígo: Shibaąąhági át'ěii shii'i'déé' hanílé dóó ajéé ániiidígíí shaa nilé. Áádóó índá bini'dii há ákwííléhígíí baa ákonínízin doo – Jesus ts'ídá há ádeeshlááí nidííniidígíí biniinaa. Kót'ěego Jesus yee na'neeztááí', nahasdzááán bikáá'gi nihitaagháhádadáá' " (*Steps to Jesus*, p. 47).
Nibąąhági át'ěii yęę Jesus beíníláago, t'áá altsojí' naa nídeet'ánígíí yidíídląął, áko ach'í' hózhó nee hodooleet, nidi ach'í' hózhónígíí éí doo éí át'ée da. Jó doo ákót'ée da jinízín nidi, t'ahdii t'áá aaníí łeh. Diyin God shaa nídin'aaah bidííníniid, áko t'áá bí Bizaadígíí niłdzilgo yinídląą doo – doo t'óó baa nitsíníkeesígíí biniinaa da, Bizaad biyi'dóó yee ánínígíí biniinaago ląą. Ni éí, daats'í dóó sha'shingo shíí baa nitsíníkees. Azhą́ ákót'ée nidi Diyin Bizaad ts'ídá t'áá aaníí át'ę́. Nibąąhági át'ěii yęę Diyin bich'í' áada nahosínílne' ládadáá' dóó nijéít'áahdi nahji' kwiiínilaago, áko t'áá altsojí' naa nídeet'áá lá.
"Read the Bible stories about Jesus healing the sick. From them you can learn something of how to believe in Him for the forgiveness of sins. Turn to the story of the sick man at the pool of Bethesda. The poor man was helpless. He had not walked for 38 years. Yet Jesus said to him, 'Get up, pick up your bed, and go home!' The sick man did not say, 'Lord, if you make me well, I will obey Your word.' No, he believed Christ's word. He believed he was made well, and that very moment he tried to walk. He chose to walk. And he did walk. He acted on the word of Christ, and God gave the power. The man was healed" (Steps to Jesus, p. 48).
"You can do nothing to take away your past sins. You cannot change your heart or make yourself holy. But God promises to do all this for you through Christ. Believe that promise. Confess your sins and give yourself to God. Choose to serve Him. God will surely keep His promise to you if you
"Diyin Bizaad biyi'dóó Jesus éí kanidaakaiígíí hadaal'té ánídayiídlaago yaa nahasne'. Díí hane'ígíí éí bąąhági át'éii haa náhidit'aah haz'áagi bee bééhodoozijíí lá. Diné la' na'níthodí jilífigo, Bethésda hoolyéegi dzizdáago, éí baa nitsíníkees. Dinéhéę́ háká análwo'ii ádingo dzizdá. Tádiin dóó ba'ąą tseebíí nááhai biįghahgo hajáád doo choyoo'įjíid da nít'ée' lá. Azhą́ ákót'ee nidi Jesus nídiidááh hałní; bik'i nítéhí nídiiltsóosgo dah diinááh hałní. Diné kanijijigháhą́, ShiBóhólnüihii, náshidinisáago índa nik'eh honish'įjgo dah dideeshááł éí doo núí da. Christ hach'į́ hadzíí'go áhodííniidéé joosdląqą́. Shich'į́ há'oodzi'ígíí yinishdlâago bee bíneesh'á jiní, áko nídií'na'go dah diiyá. Yídeesijíí dóó dah dideeshááł t'éiyá baa nitsídzíkeesgo, nízhdíi'na' dóó dashdiiyá. Christ kónínééh hałnígo bízneeztą́ą́', áko Diyin éí diné bízhneel'ą́ áhoolaa. Hadaalt'é ánáho'diilyaa" (Steps to Jesus, p. 48).
"T'áá ni t'áá sáhígó doo Diyin bił k'é ná'ahidíí'niil át'ée da. Nijéé doo t'áá ni łahgo át'ęego ánídíídlííįgo át'ee da; doo t'áá ni yá'át'ęéh ánídidíilniił da. Nidi Christ beego díí t'áá ałtso Diyin ná ákwúiidooliił. Bizaadígíí yinídlą́. Nibąąhági át'éii yęę Christ bich'į́ ádaa hodíílnihgo t'áá sinízįį nít'ée' baa ánídinít'aah doo. T'áá k'ad nik'eh honish'įj doo, bidiní. Ákohgo Diyin God t'áá bí éí bíninil'ąągo nidziilgo
do this" (*Steps to Jesus*, p. 48). "Do not wait to *feel* that you are made whole. Say, 'I believe it. It *is* so, not because I feel it, but because God has promised'" (*Steps to Jesus*, p. 48).
God says, "As far as the east is from the west, so far has he removed our transgressions from us" (Psalm 103:12). This means *your* sins, not just other people's. Do you believe this? Which is more sure – your feelings, or God's Word? Jesus once said: "So if the Son sets you free, you will be free indeed" (John 8:36).
If you've asked Him to do this, then Christ has set you free. If you're free, be glad! Take God at His word.
ánidoolíí" (Steps to Jesus, p. 48). "T'áadoo t'óó hasht'e-shi'diilyaago índa dooleel nínízini. Diyin Bizaad yinishdlá diní – doo t'óó úinisinígíí biniinaa da, nidi Diyin God yee haadzí'ígíí biniinaago láą" (Steps to Jesus, p. 48).
Diyin ání, "Bee haz'áanii dahiiiti'ii ha'a'aahdóó e'e'aah ánizáágóó nihits'éediní'á" (Psalm 103:12). T'áá ni nibaąahági át'éii éí ááhyilní kwe'é, doo diné náánála' t'éiyáa da. Da' díish yinídlá? Háidísha' aláahgo ba'į́nlíigo yinídláą doo – baa nitsíníkeesígíí daats'i? Diyin God bizaad niłdzilgo yee hahasdzí'ígíí daats'i? Łahdáá' Jesus ání: 'Éí bąą aYe' nihéédideideezhchidgo, t'áá aanúí nihéédadeeshchid doo" (John 8:36).
Christ bííníkeed ládáá', t'áá űídáá' néédoochidgo át'é. Néédoochid ládáá', baa nił hózhqo doo! Bíni'dii Diyin God yee hahasdzí'ígíí yéego yinídláą dooleel!!
|
Chapter 3
Being Sorry for Sin: Repentance
On the day of Pentecost Peter preached a powerful sermon to the Jewish people in Jerusalem. They saw that they had sinned by crucifying Christ, and asked, "What shall we do?" (Acts 2:37). Peter's answer was simple. He said, "Repent and be baptized, every one of you, in the name of Jesus Christ for the forgiveness of your sins" (Acts 2:38).
Repent. But what does it mean to repent? "Many people do not really understand true repentance. Millions are sorry that they have sinned. They even change their ways, because they are afraid that their wrongdoing will cause them suffering. But this is not true repentance; it is not the kind the Bible tells about. These people are sorry that sin may make them suffer, but they are not sorry for the sin itself" (Steps to Jesus, pp. 18-19).
Wólta'ii 3
Łahgo Át'éego Diyin God Bich'i'
Tsínáhodiikeesgo
Alk'idáá' Pentecost beinílkáago Peter éí Jew dine'é Jerusalemgi nidaakaiígíí ná'ilnáago bidziilgo yił hoolne'. Jesus éí tsin ałnáoszid bikáa'jí' yił ada'askaal, áko doo ákóne' adááát'jidéę yaa ákodaniizíjí'go yaqah dabíni'. "Ákoshá' haa dadii'núl?" na'ídééłkid (Acts 2:37). Peter ákóníigo t'áá k'ehézdonígo áyidííniid: "T'áálá'í nootíníigo łahgo át'éego Diyin God bich'i' tsíndahidohkeesgo Jesus Christ bízhi' bee tó bee danihí'dólzíjíh, áko Diyin God bich'i' ádił da'soohsí'ígíí nihá yóó'iididoo'áál" (Acts 2:38).
"Lahgo át'éego Diyin God bich'i' tsíndahidohkees," nihíłní kwe'éé. Nidi kónígíí ha'át'íí lá ááhyilní? "Diné t'óó ahayóí, łahgo át'éego Diyin God bich'i' nitsinízdoookosigíí doo bił béédahózín da. Ajiisiih bił béédahózingo yaqah dabíni', áádóó kodóó nizhónígo nijigháa dooleeł, daanii łeh. Shik'éí yá'át'éehgo shaa nitsídaakees doo, danízingo ákódaaníí shíí. Doo ákót'éego Diyin God bich'i' łahgo át'éego tsínázdiikees da, Saad Diyinií bik'ehgo. T'óó shíí ti'dahwiizhdoonihiğíí bee dadzildzidgogo, áko Diyin bich'i' asésiih ni'dajinnígíí éí t'óó ádajiní" (Steps to Jesus, pp. 18-19).
After Judas, who was one of the twelve disciples, handed Jesus over and saw that he would suffer for what he had done, he seemed to repent. "He was afraid that he might have to suffer for what he had done, but he felt no deep, heart-breaking sorrow for selling the perfect Son of God to die. He was not sorry that he had turned away from Jesus, the Holy One of Israel" (*Steps to Jesus*, p. 19).
On the other hand, when David sinned by taking a woman named Bathsheba away from her husband and making her his own wife, he knew that what he had done was wrong. Afterward he was truly sorry for his sin. His repentance was sincere and deep. He didn't try to hide his mistake from God, but admitted it freely. He prayed:
"Be merciful to me, O God, because of your constant love. Because of your great mercy wipe away my sins! Wash away all my evil, and make me clean from my sin! I recognize my faults; I am always conscious of my sins. . . ."
Judas, éí Jesus naakits’áadah bikéé' naakaiígíí atah jilínígíí éí Jesus ninázhdim’á. Áádóó bik’iji’ bik’ee ti’hwii-deeshnihígíí baa ákozniizíí’, áko łahgo át’éego baa tsínízdeezkééz nahalin. Nidi doo t’áá aaníí ájíníí da. "Ti’hoo’nííh hodooleeëii t’éí yaqah bini’ nít’ée’. Doo t’áá iiyisí bijéítl’ááhdéé’ yee yaa yínííł tsínídeezkéez da, azhá Diyin God biYe’ – Ízrel dine’é biDiyin nilínígíí – ninádiní’áą nidi" (Steps to Jesus, p. 19).
Łahdä́’ shú́ David, aláahgo naat’áanii nilínígíí, ałdó’ bạqhági ádzaa. Asdzání bahastiin t’áá iídä́’ hólóogo lá’, éí Bathshú́ba wolyé. Asdzánígíí nizhónígo biniinaa bidázh-noosnii’, bahastiinéé baazht’íí lá, áádóó hwe’asdzáá ájiilaa. Áko bạqhági ájiidzaaígíí biniinaa bạqh hání’go baa tsízdeezkééz. Doo t’óó bik’ee ti’hwiiizhdoonihígíí t’éí bạqh hání’ da; t’áá aaníí łahgo át’éego Diyin God bich’i’ tsínízdeezkééz. Doo áköó ájíít’ijdéę doo nízdees’íí’ da; doo neini’ingóó Diyin bee bił hojoolne’. Ákóníígo sozdoolzin lá:
"Diyin God nílíinii, aa a’ááh nínízinií bik’ehgo shaa jiiníbaah; bee aa a’ááh nínízinií t’óó ahayóígíí bik’ehgo bee haz’áanii hétí’ii shá k’ée’íłchxqoh. Doo ákwii áát’ijdii altso shąąh táánígis áádóó ádił ni’ayészíí’ii shąąh dííldah! Háálá bee haz’áanii hétí’ii baa ákoniiizíí”, índa ádił ni’ayészíí’ii t’áá
Create a pure heart in me, O God, and put a new and loyal spirit in me. Do not banish me from your presence; do not take your holy spirit away from me" (Psalm 51:1-3, 10-11, GNT).
"He prayed not only for forgiveness but for a clean heart. He wanted the joy of holiness—to be brought back into harmony with God" (Steps to Jesus, p. 20).
The point is this: True repentance doesn't mean being sorry for the punishment you see coming. Anyone can be sorry for that. We don't need help from God to be sorry for what might happen to us. But we do need help from God to be sorry for our sins. There's a difference. The one means being sorry that *you* might suffer in hell. The other means being sorry that *Jesus* had to suffer for you on the cross.
Jesus gives us repentance, just as He gives us salvation. "The Bible does not teach that the sinner must
álahjí' shi'dii'lá. . . . Diyin God nílíinii, baa'ihii shijéé bii' ádingo ánílééh, áádóó núch'i shii' hólónígíí baa áhódlíígo ániidí ánánídlééh. Honílónígíí bits'áaji' áshólééh lágo, áádóó niNích'i Diyinii shits'áaji' kóole' yíila " (Psalm 51:1-3, 10-11).
"Shaa nídiní'ah jinúigo, doo éí t'éí bíká sozdoolzin da; shijéé yá'át'eehgo shá ánánídlééh jinúigo júkeed. Ts'ídá t'áá ákogi é'ét'ëii bibee ił hózhó hwee náhódleeh laanaa jinízin; Diyin God nizhónígo bił ahił nááhojilne' le' laanaa jinízin" (Steps to Jesus, p. 20).
Aláahdi ááhdii'nínígíí éí díí: T'áá aaníí łahgo át'éeego Diyin God bich'í' tsínáhodookosígíí éí doo ti'hwiidoo'nihígíí t'ëí biniinaago bąąh háni' da. Ti'hoo'nííh éí báhádzid; Diyin t'áá gééd nidi díí éí bąąh háni' ñeh. T'áá háiída ákózhdoonííł. Áko nidi Diyin God éí hajéé bii' hólóogo t'ëiyá bąąhágí át'ëii bąąh háni' dooleelgo bízhneel'á. Diyin t'áá géedgo éiyá t'áá hó ádá ti'hwiizhdoonihígíí ch'íidiitahgóó baa nitsídzíkeesgo bąąh háni' ñeh. Diyin bił lá'í nijigháago, Jesus t'áá bí alk'ídá'á' shá ti'hooznii' tsin ahnáoszid bikáa'ji' baa nitsídzíkees ñeh.
Jesus éí nihí łahgo át'éeego Diyin God bich'í' tsínádadiikosgo úídoolííł, dóó yisdánihidooltí – t'áá álah. "T'áá ákót'ëego Diyin Bizaad yee na'nitin: Áłtsé Jesus baa nijidááh
repent before he can accept Christ's invitation, 'Come to me, all of you who are tired from carrying heavy loads, and I will give you rest' (Matthew 11:28). Christ's grace, His power, leads a person to truly repent" (*Steps to Jesus*, pp. 21-22).
We've said that repenting means being sorry for sin. But it means more than just being sorry. It also means turning away from sin. No one can be truly sorry for something he hopes to do again. He can be sorry for the guilt of a sin he hopes to do again, but not for the sin itself. To repent of your sins you must turn away from them.
It's very important to know this. It's also important to know that we can't do it. We can make people think we've put away a certain sin. We can even fool ourselves. But the desire to do that sin, or another one just as bad, is still in our hearts. Only Jesus can take that away. We are sinful and only Jesus can change our thinking so that we want different things than we once wanted. Come to God and in Jesus'
— t'áá ájít'ehígí át'ëego. Áádóó índá yá'át'ëehgo tsínízdidookos, dóó habaqhági át'ëii bąąh háni' doo. Jesus ání: 'Dziil nihigháanii áádóó nihiyéél dandaazii t'áá ánól'tso hágó, shaa hohkááh, áko hááadałyíihgo ánihideesh-liíí' (Matthew 11:28). Áltse shíí t'áá ni láhgo át'ëego shich'i' tsínídahidoohkees áádóó índá shaa hohkááh, doo níí da lá! Jesus ách'i' ninihíí'ëesh, áádóó índá láhgo át'ëego tsínídahidiikeesgo ádanihiile'" (Steps to Jesus, pp. 21-22).
T'áá aaníí láhgo át'ëego Diyin God bich'i' tsíníz-deezkéezgo doo ákóne' ájiidzaaigíí éí biniinaa bąąh háni' dooleet, dadii'ní. Nidi doo t'óó bąąh háni' da. Ałdó' yóó'azhdi'ááh. Díída, éida ánáádeesh'nííí jinízingo áko doo t'áá aaníí bąąh háni' da. Ha'át'e' doo yóó'azhdi'aahgóogo doo t'áá aaníí láhgo át'ëego tsínízdeezkéez da.
Díí t'áá ałtso baa ákoznízingo éí yá'át'ëeh, nidi t'áá hó t'ëiyá t'áadoo diyin k'ehgo bízhneel'ání da. Bił kééhojit'iinii bąąhági át'ëii yóó'iidíí'áá lá danízingo shaa nitsídaakees doo, jinízin łeh. Shí k'ad yóó'adíí'á ni jiniigo ázhdinót'áahgo shíí haz'áá lá ałdó'. Nidi t'ahdii hajéít'áahdi ánáázhdooníílgo át'ëee łeh. Jesus t'ëí hajéí bii' hólóogo nihíni' láhgo ánéidoodliíígo yíneel'á, áko yá'ádaat'ehígíí daniidzin doo.
name say: Change me so that in my heart I will hate sin, and help me to turn away from it.
You can't do either of these things without Jesus, but if you come to Him in faith and simply ask for these blessings He will give them to you. He can do for you what you can't do for yourself. Ask Him.
Yucca. State flower of New Mexico.
Diyin bináa'go Jesus yízhi' bee ákóbidiní: Shibąahági át'éii yęę t'áá aaníí shijéít'ááhdóó yóó'adish'aah nisingo áshílééh, áko doo ákóne' üishłaaígíí jiinishlá áshidílííł.
Jesus t'áá géedgo łahgo át'ëego nitsídzíkeesgo doo bízhneel'ąą da lá, nidi bíníkeedgo dóó nijéí bii' yinídláągo, díí t'áá ałtsó ná üidoolííł. Doo bíninil'ánígíí Jesus ná üidoolííłgo át'ëé. Bíníkeed.
|
Agenda item: PL 2.5
Document C13/64-E
30 May 2013
Original: English
Report by the Secretary-General
WTPF-13 – REPORT ON OUTCOME
Summary
This Report summarizes the outcome of the ITU World Telecommunication/ICT Policy Forum (WTPF), held at the CICG, Geneva, from 14-16 May 2013.
Action required
The Council is invited to note the report described below and to consider and, as appropriate, to approve the draft Resolution in the Annex.
____________
References
Resolution 101 (Rev. Guadalajara, 2010); Council Decision 562 (2011)
1. The ITU World Telecommunication/ICT Policy Forum (WTPF) was established by the 1994 Kyoto Plenipotentiary Conference and is covered by the provisions of Resolution 2 (Rev. Guadalajara, 2010). The WTPF provides a forum where ITU Member States and Sector Members can discuss and exchange views and information on emerging telecommunication/ICT policy and regulatory matters, especially global and cross-sectoral issues. The WTPF shall not produce prescriptive regulatory outcomes; however, it shall prepare reports and adopt opinions by consensus for consideration by Member States, Sector Members and relevant ITU meetings.
2. Resolution 101 (Rev. Guadalajara, 2010), as reaffirmed by Decision 562 (Council 2011), decided that the WTPF-13 would discuss all the issues raised in: Resolution 101: "Internet Protocol (IP)-based Networks" (Rev. Guadalajara, 2010); Resolution 102: "ITU's role with regard to international public policy issues pertaining to the Internet and the management of Internet resources, including domain names and addresses" (Rev. Guadalajara, 2010); and Resolution 133: "Roles of administrations of Member States in the management of Internationalized (multilingual) domain names" (Rev. Guadalajara, 2010).
3. In accordance with Decision 562, the ITU Secretary-General convened an Informal Experts Group (IEG), whose members were active in preparing for the Policy Forum. With the approval of ITU Council 2012, membership of the IEG was opened to all stakeholders. The IEG met three times under the Chairmanship of Mr Petko Kantchev (Bulgaria) – twice in 2012 (5 June 2012 and 8-10 October 2012) and on 6-8 February 2013. More than 180 experts participated in the work of the expert group.1 Around 75 contributions were received from all stakeholders towards the various drafts of the ITU Secretary-General's Report (five in total) and the Draft Opinions. All documents of the WTPF-13 preparatory process are freely available on the WTPF website without any restrictions.
4. The Policy Forum was preceded by a Strategic Dialogue, Building our Broadband Future, held on 13 May 2013 and moderated by Mr Raffaele Barberio of Key4Biz. Eleven panelists and two Scribes participated in dynamic debates on the status, progress and challenges of rolling out broadband in two Sessions, "Building out Broadband" and "Broadband Driving Development". Session 1 debated whether access to broadband is a human need or a right. Session 2 debated the vital applications of broadband for improving people's lives and achieving the Millennium Development Goals (MDGs).
5. The following day, Mr Ulf Pehrsson and Ms Kathryn Brown presented the outcomes of the Strategic Dialogue to the Opening Plenary of the Forum,2 summarizing the discussions and rich insights from the Strategic Dialogue. For Session 1, participants generally agreed that access to the Internet and access to broadband is indeed a basic human need, although some went further and called it a fundamental right. For Session 2, Ms Kathryn Brown noted a maturing of the conversation, and described the relevance, evolution and risk. Participants called for government and the private sector to work together on both supply-side and demand-side issues relating to broadband, emphasizing that we now stand at a tipping point and that now is the time for action to realize the significant benefits of broadband. The programme can be found at http://www.itu.int/en/wtpf-13/Pages/dialogue.aspx.
6. The Fifth World Telecommunication/ICT Policy Forum (WTPF-13) was held at the CICG in Geneva, Switzerland, from 14-16 May 2013. It was attended by 900 delegates, representing 126 Member States, 49 Sector Members and five United Nations entities, as well as 37 members of the public. Twelve members of the IEG, invited as Special Guests of the ITU Secretary-General, attended WTPF-13. High-level participation by VIPs reached unprecedented levels, including a record attendance of 33 Ministers and eight Deputy Ministers, as well as several heads of regulatory agencies. H.E. Mr Ivo Ivanovski, Minister of Information Society, The Former Yugoslav Republic of Macedonia, was elected Chairman of the Forum.
7. The Policy Forum opened with addresses from: Dr Hamadoun I. Touré, ITU Secretary-General; H.E. Mme Doris Leuthard, Federal Counsellor, Department of Environment, Transport, Energy and Communications, Switzerland; Mr Fadi Chehadé, President and CEO of ICANN; and Dr Robert E. Kahn, President, CNRI, and co-founder of the Internet.
8. In his speech, Dr Touré underlined the need to work together. ITU will continue its bridge-building role and can leverage its unique position as a neutral convenor, where Member States, Sector Members and other stakeholders can come together. The timing of this year's WTPF, with its focus on international Internet-related public policy matters, was particularly appropriate – as we stand at a 'tipping point' between the Internet as a vital enabler of social and economic progress in the industrialized world, and the Internet as a valuable global resource and a basic commodity of human life everywhere. The WTPF can create a shared vision that can be transformed into effective action to bring connectivity to the two-thirds of the world's people who are still offline. He reminded delegates that WTPF is a forum for free-thinking debate and discussion on new and emerging issues.
1 The list of members of the IEG is available at: http://www.itu.int/md/S13-WTPF13IEG3-ADM-0002/en
2 See http://www.itu.int/en/wtpf-13/Documents/statements/wtpf-13-ericsson-en.pdf and http://www.itu.int/en/wtpf-13/Documents/statements/wtpf-13-verizon-en.pdf
9. In her speech, H.E. Mme Doris Leuthard, Federal Counsellor of Switzerland, emphasized how broadband is critical to a modern, global economy and underlined her Government's support for the WSIS Forum, running in parallel to the WTPF. She highlighted how the Internet touches profoundly our society of today, our citizens, our businesses, our administrations, and schools. She stressed the importance of the responsibility of governments and their role in the protection of the rights of citizens and consumers.
10. Mr Fadi Chehadé, President and CEO of ICANN, delivered a strong message of cooperation throughout the Internet community announcing it is a "new season". He noted that "no one organization, no one country, no one person can manage the Internet – we must do this together. And it's our unity that will make this a very strong Internet that is secure and stable for everybody".
11. Dr Robert E. Kahn, one of the founders of the Internet, took the opportunity to take stock of where we are today in terms of the Internet's development and shared his vision of the future Internet as a global resource that will continue to evolve and bring value to the world.
12. Six Vice-Chairs were elected for the Forum:
* Ms Magdalena Gaj (Poland);
* Mr Rashid Ismailov (Russia);
* H.E. Mr Rowland Espinosa Howell (Costa Rica);
* Mr Majed M. Almazyed (Saudi Arabia);
* H.E. Mr Blaise Louembé (Gabon); and
* Mr Rabindra N. Jha (India).
13. In accordance with Resolution 2 (Rev. Guadalajara, 2010), discussions at the WTPF were based on the Report of the Secretary-General,3 which served as the main working document of the Forum. The contributions and comments of members of the IEG were incorporated into this report. Annexed to the Report were six Draft Opinions which were forwarded by the IEG, by consensus, to WTPF-13 for further discussion.
14. The General Secretariat presented the Secretary-General's Report, on behalf of the Secretary-General, giving a broad overview of issues covered by the Report based on the issues raised in the Plenipotentiary Resolutions 101, 102 and 133 (Rev. Guadalajara, 2010).
15. The presentation of the Report was followed by a succession of high-level statements4 by Member States and Sector Members based on the themes raised in the Secretary-General's report. It was noted that the body of the Report serves as an input document to the Policy Forum, and will not be revised during the Forum.
3 http://www.itu.int/md/S13-WTPF13-C-0003/en
4 http://www.itu.int/en/wtpf-13/Pages/speakers.aspx
16. Three Working Groups were established to discuss the six draft Opinions annexed to the Secretary-General's Report and associated contributions from Member States and Sector Members. The following were elected as Chairmen and Vice-Chairmen of the Working Groups.
17. The Working Groups worked constructively over two and a half days. The Chairmen of the three Working Groups presented the results of the work undertaken by the Working Groups to the Plenary for approval – see Annex I of the Chairman's Report.
18. In the discussion that followed the presentation of the report of the Chairman of Working Group 3, many participants stressed the importance of continuing the discussion on the role of governments in various fora in an open, transparent and multistakeholder manner.
19. The Chairman of the Forum presented the Draft Opinions, which had been revised and endorsed by the Working Groups. He invited the Forum to adopt the following Opinions:
Opinion 1: Promoting Internet Exchange Points (IXPs) as a long-term solution to advance connectivity
Opinion 2: Fostering an enabling environment for the greater growth and development of broadband connectivity
Opinion 3: Supporting Capacity Building for the deployment of IPv6
Opinion 4: In Support of IPv6 Adoption and Transition from IPv4
Opinion 5: Supporting Multi-stakeholderism in Internet Governance
Opinion 6: On supporting operationalizing the Enhanced Cooperation Process
20. The forum delegates thanked the Chairmen and Vice-‐Chairmen of the various Working Groups for their outstanding work.
21. The Policy Forum adopted the Opinions as presented in Part II of the Chairman's Report.
22. The concluding remarks of the Forum can be found here: http://www.itu.int/en/wtpf-13/Pages/speakers.aspx.
23. The ITU Secretary-General, in his concluding remarks, said that he would propose to the ITU Council that the composition of the Council Working Group on international Internet-related public policy issues (CWG-Internet) should be extended to all stakeholders in light of the extremely positive experience of WTPF-13 and its preparatory process.
24. While the Secretary-General is fully aware that the enlargement of the group's composition is under the purview of the 2014 ITU Plenipotentiary Conference (PP-14) – since the current rules of composition were determined by PP-10 – it is particularly important, if not essential, for ITU not to wait nearly two more years to open up the group's composition, so that the group effectively fulfills its intended mandate.
25. It is therefore suggested that the ITU Council recommend to PP-14 that the composition of CWG-Internet be broadened and that this expansion, if the Council agrees, be implemented on a provisional and interim basis at the group's next meeting(s), as a test to allow PP-14 to make an informed final decision. In this regard, the Council is invited to consider and, as appropriate, approve the related draft Resolution in the Annex.
ANNEX
DRAFT RESOLUTION
Participation of all Stakeholders in the Council Working Group on international Internet-related Public Policy Issues
The Council,
recalling
a) that Resolution 102 (Rev. Guadalajara, 2010) instructed the Council to revise its appropriate resolutions to make the Dedicated Group on international Internet-related Public Policy Issues into a Council Working Group, limited to Member States, with open consultation to all stakeholders;
b) that Council 2011 Resolution 1336 established the Council Working Group on international Internet-related Public Policy Issues (CWG-Internet), limited to Member States, with open consultation to all stakeholders and with terms of reference as described in the Annex of the Resolution;
c) that the terms of reference for CWG-Internet, specified in Council 2011 Resolution 1336, are to identify, study and develop matters related to international Internet-related public policy issues;
d) that Council 2012 Resolution 1344 defined the modalities of the open consultation for CWG-Internet with all stakeholders,
recognizing
a) that Resolution 102 (Rev. Guadalajara, 2010) instructed the Secretary-General
1. to continue to take a significant role in international discussions and initiatives on the management of Internet domain names and addresses and other Internet resources within the mandate of ITU, taking into account future developments of the Internet, the purposes of the Union and the interests of its membership as expressed in its instruments, resolutions and decisions;
2. to continue to take the necessary steps in ITU's own internal process towards enhanced cooperation on international public policy issues pertaining to the Internet as expressed in § 71 of the Tunis Agenda, involving all stakeholders, in their respective roles and responsibilities;
b) that Opinion 5 of the fifth World Telecommunication/ICT Policy Forum (2013), on Supporting Multi-stakeholderism in Internet Governance, invites Member States and other stakeholders to explore ways and means for greater collaboration and coordination between governments, the private sector, international and intergovernmental organizations, and civil society, as well as greater participation in multistakeholder processes, with a view to ensuring that the governance of the Internet is a multi-stakeholder process that enables all parties to continue to benefit from the Internet.
resolves
1 to recommend to the 2014 ITU Plenipotentiary Conference (PP-14) to make CWG-Internet open to all stakeholders;
2 that pending the decision of PP-14, on a provisional and interim test basis, the intermediate meetings of the CWG-Internet prior to PP-14 will be open to all stakeholders,
instructs the Secretary-General to report to Council 2014 on the results of resolves 2.
|
Bar Mitzvah
Rebecca—one of two Rebeccas, this one Silver—texted the group. Inana was going into labor. Oh my God, Inana was going into labor. There for a while we had had our doubts. Only a few days ago had Carly declared, self-‐assured as always, that Inana was, in fact, not pregnant. Her due date had passed, after all, and of the two goats, Inana and Fiona, Inana was by far the smaller one. It seemed likely that we'd all gotten our hopes up for nothing, that there would be no birth on the farm during our time at Urban Adamah.
I'd been by their pen earlier in the day and saw what I thought were signs—a swollen vulva and some strange behavior—but who was I to judge? Who'd given me any authority on what is or what is not a swollen goat vulva? But, this time, I'd seen what there was to be seen, and oh my God, Inana was going into labor. We, me and two other fellows, were at a salvage yard not far from the house. I had a book of poetry in my hand. Kendra had another. We had to leave—apparently you could already see one of the baby's hooves.
"Do you think it'll be a boy or a girl?" Anna asked me as we left.
"Oh, a girl for sure," I said. Too much feminine energy in the air for it to be otherwise. Inana's kid turned out to be a female, and though informed speculations and nearly 50/50 guesses are no miracles when they turn out to be right, getting both the birth and the sex right in the same day wound up making me wonder if I didn't have some slight talent for prophecy. If so, I hope to use these powers to greater ends than imposing gender on unborn goats, but, you know, ultimately, it turns out that we have very little control over how things manifest in our lives.
We bought our books and Kendra drove fast through Berkeley. By the time we got to the farm, a ring of fellows and staff had formed around the goat pen. Inana was shifting in the hay. Fiona watched over her. The rest of us tried to contain our excitement and give the mother-‐to-‐be some space. Soon you could begin to see a nose emerge.
--
That day began early, but no earlier than the rest. As a fellow, you have to get up before seven on a normal day, or at around five thirty if you have farm chores to take care of before Avodat Lev (morning prayer and meditation). It was Saturday, Shabbos, and Rebecca—the other one, Rebecca Spiro—Sara and I wanted to go to a service. The night before we'd participated in Shabbat on the Rock, a Kabbalat Shabbat (Friday night service) held by a group of linen-clad Jews at Indian Rock in the Berkeley Hills. It was a moving experience, watching the sunset over the hills and trees and water, listening to a chorus of voices and instruments beat out the melodies of traditional Hebrew prayers and even some modern English ones. The world came alive as a piece of immaculate art, feeling magical and spontaneous, created and unfolding. I guess you could say that the three of us wanted more, wanted to scratch a religious itch after this deeply spiritual experience. At first, none of us realized we'd be attending a Bar Mitzvah.
We went to Netivot Shalom, a socially progressive Conservative shul in Berkeley. Sara's mother knew the rabbi. When we got there, there was a boy handing out siddurs (prayer books) with blue pieces of paper informing us as to the order of the service. In black letters at the top it read "Bar Mitzvah of Nathaniel Spiro." It was going to be my first Bar Mitzvah ceremony, I
informed Rebecca Spiro, who was already hard at work scheming up a backstory to pass herself off as an estranged relative of the Bar Mitzvah boy.
Since childhood I'd known about Bar and Bat Mitzvahs, having heard stories about the big parties kids had in order to celebrate their emergence into Jewish adulthood. I thought about them mostly as parties, and was glad I'd never had one, sure that the stress of having to compile a guest list for such an event at the age of thirteen would have killed me. I'd never gone to Hebrew school besides, and as the child of a non-Jewish mother, my status as a Jew always felt undefined. By fourteen I considered myself a hard-line atheist, so having a Bar Mitzvah around then would have been a lot to swallow. Despite that, though, I'd always enjoyed celebrating Hanukkah and Pesach at home, and especially enjoyed when my dad would take my sister and me to temple to observe Rosh Hashanah or Yom Kippur. My house didn't have very many traditions growing up, but those were a few that felt precious to me.
Nathaniel Spiro was excellent. He was tiny, young-looking the way that boys that age can tend to be, and wielded as he spoke a water bottle that seemed almost too big for his hands. But he read his Hebrew and, when the time came, offered a dvar Torah that was rich with humor and insight. His parsha was about a talking donkey and an invisible angel with a flaming sword. Seen through his eyes, the ancient biblical text became an allegory about climate change and the disconnection between humankind and nature. They say there are as many interpretations of scripture as there are those who read it, but I think Nathaniel had us all convinced and reflecting on our own relationship with the Earth. I smiled the entire time he spoke, and as his parents and mentors delivered their own small speeches, and as his friends rushed into the chamber to hear him speak, and as he helped to carry the Torah around the
room so that everyone might touch their siddur or tefillin to it. I saw the Bar Mitzvah as more than a party; I saw it as an act of community, of affirming one's presence before tribe and tradition as both shaper and participant.
If before I had suspected that, as an adult, having a Bar Mitzvah ceremony was something I might want, then attending Nathaniel Spiro's confirmed it. Even then, I did not consider the possibility of having one so soon, of getting bar mitzvahed before leaving Berkeley. Cara, another fellow, was the one who sowed the seeds of that development. Sitting at our kitchen table, she said that she could help me, that she'd tutored Bar and Bat Mitzvahs before. We could throw a ceremony together, could make the whole thing happen. I told her that I wasn't sure. I told her that if we went through with it, I'd want to make sure there was a rabbi there. For some reason, I was under the impression that that was what would make it official. I kept thinking of reasons no, but knew that underneath all of that was a single, persistent yes.
That evening, on Shabbos, before the sun had set and the full moon for which she was named came out in full force, we watched baby Levana be born. In the background, from the main tent, came the sound of corny jazz music. Somebody was having their Bar Mitzvah party.
--
There were a few weeks of hemming and hawing on my part. I'm the type of person who should never be left with time to consider my decisions. I'm greedy with it, asking for more than I can get. Options are bad for me, too—every choice is its own calamity. I'm learning to trust my intuition more, to pick a path and pursue it instead of wavering endlessly at the crossroads, but it's difficult to change decades' worth of routine behavior. When it came to my Bar Mitzvah, there was a lot of indecision over what was necessary. Did I need to make a big show of myself, to ask people to go out of their way to attend some ceremony that, in all likelihood, would wind up being kind of a mess? Would it be necessary to get staff from Urban Adamah involved? Would I have to learn Hebrew? Did I have to pick a date? These questions paralyzed the process, and for the most part I tried to keep things under wraps, not letting on to people too far outside the circle of those who'd expressed interest in helping put the ceremony together that this was even something I was considering doing. But from each person who I did talk to about my Bar Mitzvah, I received nothing but excitement and support. In talking to them about why it was that I wanted to do it and what aspects of the ceremony were most important to me, I became more sure. When I spoke about it, I knew it was something that I needed to do.
Over time, my concept of what a Bar Mitzvah—of what my Bar Mitzvah—should look like shifted dramatically. Originally, my thought was that it should be something formal. I'd decided not to participate in the ceremony a year earlier when my sister and I went to Israel because the large Bnei Mitzvah held for members of our tour group seemed, to me, a little kitschy (and besides, I don't think that I felt that I was ready to make that sort of "commitment" to Judaism at the time). To me, it seemed that an important component of a proper Bar Mitzvah was work—particularly in the forms of scriptural and Hebrew study. I was going to have to learn about the Bible and learn to read Hebrew words and letters. I also thought that a "real" Bar Mitzvah required the presence of a "spiritual authority"; at first, this meant a rabbi, but after a while, I was willing to settle for a "personal spiritual authority," someone from outside of my fellowship class that I looked up to in the community as a spiritual guide. I
wanted to do things by the book, but the funny thing is that I had no idea what the book entailed, or even what the title of the book might have been. The Hebrew Bible doesn't lay out certain requirements for a Bar Mitzvah ceremony, and what such a thing entails changes dramatically based on factors of space and time. Before long, I began to let go of my ideas of what was "proper" and "traditional," and instead chose to focus on what was important about the ceremony to me, on how I could make the experience feel relevant to who I am as a person and the type of Judaism that I want to participate in. What resulted broke a lot of rules, was messy and dysfunctional and joyful and fun. It was creative and communal. It was perfect in its way.
Of course, though, when you start out looking for a book, a book tends to be what you find. The process of preparing for my Bar Mitzvah began with a book, one that had to travel through decades—and from Chicago to my home in Leesburg, VA and then all the way out to Berkeley—to reach me. In English, the book is called Reading for Beginners. It's the revised version from 1968, twelve years younger than the original version published in 1956. It belonged to my grandmother sometime in the 80s, when, already a mother and twice married, she had her own Bat Mitzvah. Previously rare, Bat Mitzvah ceremonies came into vogue around that time as women began to gain more access to Jewish tradition and institutions. At some point before her, the book had belonged to a girl named Wendy. This I know because Wendy's name is scrawled in red crayon at the top of the book, the handwriting distinctly belonging to a twelve- or thirteen-year-old. An early Bat Mitzvah, or else a person who never grew up in a Judaism without it.
I didn't get Reading for Beginners until about the last month of the fellowship. Bubbe called me when I'd been in California for about a week, and we talked about the book then. Most of the other fellows had been to Hebrew school, and so could read Hebrew to various degrees. Sara, who was raised Conservative and spent a significant amount of time in Israel, was more or less fluent. Although I wasn't planning to have a Bar Mitzvah at the time, I thought that I could take the summer as an opportunity to learn a little bit of the language, and when my grandmother mentioned that she had an old primer I could use, I was very excited about it. She said she'd bring it to Leesburg with her when she came to visit in early July, and that she'd have my mother send it on to me. My mother, once she received the book, waited to mail it until she could include a few other things that took her some time to get. It was a circuitous journey, but eventually the book got to me. Over sixty years old, the yellow paperback was in surprisingly good condition, although you can see where it might soon start to fall apart. It arrived with a magazine and a folded-up newspaper clipping tucked between its pages. My first grade teacher was featured in the Washington Post's "Date Lab."
I'd been waiting to "officially" start preparing until the primer arrived. My thought was that I wanted to be ready to read my Torah portion in Hebrew by whatever date I chose for the ceremony, and so having the primer would be important for that. It was late July by the time it arrived, however, and though I love language learning, some twenty-odd days with very full schedules allows for little time to learn how to confidently read in a language that uses a writing system unlike any that you've known or studied before. Hebrew uses an alphabet, sure, but a very different one than the Latin alphabet used in English. Vowels don't get their own distinct letters, but are represented with different marks below the consonants. There are
letters that make no sound. It's a lot to absorb. Still, though, I did learn a little, with help from both the primer and my friends, and even picked up some Hebrew words from the months of prayer and ceremony and conversation with other Jews. By the time that my Bar Mitzvah rolled around, I didn't feel that I was personally ready to deliver the Hebrew with its proper cantillation, but Rebecca Spiro was more than happy to do the honors on my behalf. Language unites communities, and the reading served that purpose as Rebecca and I worked alongside each other in performing parts of the ceremony.
While I might not have learned as much from Reading for Beginners as I could have, it certainly had a hand in moving things along. Its presence by my bed served as a reminder of the ceremony we'd been meaning to plan. We were moving into August and there were only a few weeks left in the fellowship. Little Lev had gone from this tiny creature who could hardly stand to a confident, playful kid that liked to chew on my shoelaces and jump up into Fiona's feed bin to take her afternoon naps. We were all intoxicated by that baby goat, the new Moon that came into our lives on a Shabbos in the middle of the summer. She was part of the inspiration behind the date I chose for my Bar Mitzvah, August 22, 2017. The last Tuesday of our fellowship, our last night off. It was also Rosh Chodesh.
Rosh Chodesh is the Jewish observance of the New Moon. The Jewish calendar is a lunar calendar, and so Rosh Chodesh falls as the first day of every new month in Judaism. We were in Berkeley for three New Moons, the one in August being our last. It was Rosh Chodesh Elul, Elul being the month that leads up to the High Holy Days of Rosh Hashanah and Yom Kippur, the Jewish New Year and the Day of Atonement. Elul is meant to be a month of introspection, of piecing things together before moving on. Beginning just as our fellowship was ending, the
timing of this felt especially appropriate: coming up on the end of three months of intense physical, emotional, and spiritual work, a period of introspection seemed like something that we would all need. My dvar Torah, which discussed the Rosh Chodesh parsha (a portion focused entirely on the sacrifices made at the ancient Temple in Jerusalem), spoke to this. The intention was to find some solace in the End, to prepare myself and others present to step open-eyed into the future beyond Urban Adamah. I believe that I achieved that for myself, and others told me that my words had done the same for them. As a writer, I can't think of a greater gift to receive from people whom I've come to love very much. I'm glad that the words I wrote were able to help my friends, but I can't take credit for the inspiration behind them. That, I chalk up to the Moon.
We all respond to different symbols. The Moon has felt like a powerful presence in my life for a very long time. Some of my nicest memories from childhood are of playing in moonlight in the summer with my sister and the other neighborhood kids. During the colder months, I was still often up late, playing with toys or watching television—usually both. My mind has always been more active at night, particularly in the dark and quiet hours after people have gone to bed.
In college, I often had a hard time starting assignments until nine or ten at night. If I had a big paper to write, the first words might not hit the page until after midnight. If I wasn't working I'd still usually be awake, thinking in the dark. It was better then than in high school, when I only got between two and four hours of sleep most nights. I always felt tired during the day, and sometimes would take naps after coming home from school. But at night I couldn't sleep, or wouldn't try. I spent a lot of time finding ways to distract myself because if I didn't, my thoughts would quickly turn against me.
Although it wasn't always pleasant, I began to feel somewhat beholden to the night, and the Moon as a part of the night. I always found it beautiful and captivating as an object, somewhat mysterious. In college, with a postmodern seriousness, I began to treat the Moon as a Goddess, a practice I adopted from a few of my friends. It is and was a joke, but the reverence we pay has its own authenticity. It was partially through this relationship with the Moon that I began to reframe my way of approaching the Divine, an activity that caused me to reconsider my Judaism and ultimately to wind up at Urban Adamah. In a certain sense, the Moon brought about my Bar Mitzvah, and I can only give credit for what happened there to Her.
The ceremony didn't go perfectly. For starters, I forgot to bring a kippah or a tallis from the house. Luckily, we were holding it in the big tent on the farm, and there was a basket of kippot there for when Urban Adamah hosted their monthly Kabbalat Shabbat. This was just after the eclipse, and the kids that attended the UA summer camp had painted a huge banner of "eclipse art" that featured many depictions of the moon. Rebecca Spiro had the idea of draping that around me as an improvised tallis. My "moon cape," we called it. There's a picture of me wearing it. I had to have Cara choreograph me through all of the prayers. There was no Torah, so Carly picked up a bongo drum and we proceeded around the tent with it. Everyone surrounded me and danced while I tried to follow along. I didn't completely understand what I was doing, but I felt loved. I felt acknowledged. I felt that I had gone before a very special tribe and said to them that we are united, and that, to me, felt like what was important. I understood that that was what I had meant to be doing all along.
PRRI 2010 American Values Survey September 1-14, 2010 N=3,013
ASK ALL:
Q.1 Do you approve or disapprove of the way Barack Obama is handling his job as President?
[IF DK ENTER AS DK. IF DEPENDS PROBE ONCE WITH: OVERALL, do you approve or disapprove of the way Barack Obama is handling his job as President?]
49 Approve
42 Disapprove
9 Don't know/Refused (VOL.)
100 Total
Q.2 Over the last two years do you think the economy has gotten better, gotten worse or stayed about the same? [READ IN ORDER]
19 Better
48 Worse
32 About the same
1 Don't know/Refused (VOL.)
100 Total
REGIST These days, many people are so busy they can't find time to register to vote, or move around so often they don't get a chance to re-register. Are you NOW registered to vote in your precinct or election district or haven't you been able to register so far?
IF RESPONDENT ANSWERED '1' YES IN REGIST ASK:
REGICERT Are you absolutely certain that you are registered to vote, or is there a chance that your registration has lapsed because you moved or for some other reason?
80 Yes, registered
78 Absolutely certain
2 Chance registration has lapsed
* Don't know (VOL.)
20 No, not registered
* Don't know/Refused (VOL.)
100 Total
ASK ALL REGISTERED VOTERS (REGICERT=1):
Q.3 Now, suppose the 2010 congressional election was being held TODAY. If you had to choose between [READ AND ROTATE] in your election district, who would you vote for?
IF OTHER OR DK (Q.3 =4,9), ASK:
Q.4 As of TODAY, do you LEAN more towards [READ, ROTATE SAME ORDER AS Q.3]?
44 The Republican candidate
47 The Democratic candidate
* Tea Party candidate (VOL.)
1 Other/Third Party candidate (VOL.—SPECIFY PARTY)
* Not voting (VOL.)
8 Don't know/Refused (VOL.)
100 Total
ASK ALL DEMOCRATIC CANDIDATE VOTERS (Q3=2 OR Q4=2):
Q.5 Thinking about past elections, do you usually vote for the Democratic candidate or not?
81 Yes
13 No
6 Don't know/Refused (VOL.)
100 Total
ASK ALL REPUBLICAN, TEA PARTY CANDIDATE VOTERS (Q3=1,3 OR Q4=1):
Q.6 Thinking about past elections, do you usually vote for the Republican candidate or not?
70 Yes
23 No
7 Don't know/Refused (VOL.)
100 Total
ASK ALL:
Q.7 Of the following six issues, which one would you say is MOST important to your vote for Congress this year? [READ AND RANDOMIZE]
45 The economy
9 The federal budget deficit
7 Immigration
10 The wars in Iraq and Afghanistan
20 Health care
6 Same-sex marriage and abortion
1 Other (VOL.)
2 Don't know/Refused (VOL.)
100 Total
Q.8 Now I'm going to read you a few pairs of statements. For each pair, please tell me whether the FIRST statement or the SECOND statement comes closer to your own views — even if neither is exactly right. The first pair is... [READ AND RANDOMIZE ITEMS; ROTATE OPTIONS] Next…
a.
44 1—Immigrants today strengthen our country because of their hard work and talents [OR]
48 2—Immigrants today are a burden on our country because they take our jobs, housing and health care
5 Neither/Both equally (VOL.)
3 Don't know/Refused (VOL.)
100 Total
41 1—It is not really that big a problem if some people have more of a chance in life than others [OR]
53 2—One of the big problems in this country is that we don't give everyone an equal chance in life
4 Neither/Both equally (VOL.)
2 Don't know/Refused (VOL.)
100 Total
56 1—Government has become bigger over the years because it has gotten involved in things that people should do for themselves [OR]
41 2—Government has become bigger over the years because the problems we face have become bigger
1 Neither/Both equally (VOL.)
2 Don't know/Refused (VOL.)
100 Total
Q.9 Now, I'd like to get your views on some issues that are being discussed in the country today. All in all, do you strongly favor, favor, oppose, or strongly oppose [INSERT FIRST TIME; RANDOMIZE]? What about [NEXT ITEM]?
REPEAT AS NECESSARY: Do you strongly favor, favor, oppose, or strongly oppose?
a. Allowing undocumented immigrants who have been in the U.S. for several years to earn legal working status and an opportunity for citizenship in the future
18 Strongly favor
40 Favor
22 Oppose
18 Strongly oppose
2 Don't know/Refused (VOL.)
100 Total
b.
c.
Q.9 CONTINUED…
IF R DOES NOT SUPPORT SAME-SEX MARRIAGE (Q10=2,3,9), ASK:
Q.11 Please tell me if you completely agree, mostly agree, mostly disagree, or completely disagree with the following statement. If the law only provided for civil marriages like you get at city hall for gay couples, I would support allowing them to have a civil marriage.
10 Completely agree
25 Mostly agree
19 Mostly disagree
42 Completely disagree
4 Don't know/Refused (VOL.)
100 Total
ASK ALL:
Q.12 Compared to your views five years ago, are your current views about rights for gay and lesbian people generally more supportive, more opposed, or have they not changed?
19 More Supportive
6 More Opposed
73 No Change
2 Don't know/Refused (VOL.)
100 Total
On another subject…
Q.13 Do you think abortion should be [READ IN ORDER]
18 Legal in all cases
37 Legal in most cases
27 Illegal in most cases
15 Illegal in all cases
3 Don't know/Refused (VOL.)
100 Total
Q.14 Compared to your views five years ago, are your current views about the legality of abortion generally more supportive, more opposed, or have they not changed?
7 More Supportive
7 More Opposed
85 No Change
1 Don't know/Refused (VOL.)
100 Total
Public Religion Research Institute
Q.15 Now I'd like your views on some political leaders. As I read some names, please tell me if you have a favorable or unfavorable opinion of each person. The first person is [INSERT; RANDOMIZE]. Would you say your overall opinion is very favorable, mostly favorable, mostly unfavorable, or very unfavorable? How about [INSERT NEXT]?
REPEAT AS NECESSARY: Is your opinion very favorable, mostly favorable, mostly unfavorable or very unfavorable?
a. Mitt Romney
8 Very favorable
30 Mostly favorable
24 Mostly unfavorable
8 Very unfavorable
30 Don't know/Refused (VOL.)
100 Total
b. Sarah Palin
12 Very favorable
28 Mostly favorable
23 Mostly unfavorable
29 Very unfavorable
8 Don't know/Refused (VOL.)
100 Total
c. Mike Huckabee
9 Very favorable
32 Mostly favorable
21 Mostly unfavorable
8 Very unfavorable
30 Don't know/Refused (VOL.)
100 Total
d. Barack Obama
26 Very favorable
32 Mostly favorable
18 Mostly unfavorable
22 Very unfavorable
2 Don't know/Refused (VOL.)
100 Total
ASK FORM 1 ONLY:
ef1. George W. Bush
12 Very favorable
30 Mostly favorable
26 Mostly unfavorable
30 Very unfavorable
2 Don't know/Refused (VOL.)
100 Total
ASK FORM 2 ONLY:
ff2. Bill Clinton
29 Very favorable
40 Mostly favorable
17 Mostly unfavorable
11 Very unfavorable
3 Don't know/Refused (VOL.)
100 Total
ASK ALL:
Turning to a different subject…
Q.16 Do you think [INSERT; RANDOMIZE] is something that should be decided at the national level or is it something that each state should decide for itself? What about [INSERT NEXT]?
REPEAT AS NECESSARY: Is this something that should be decided at the national level or the state level?
a. Laws about same-‐‑sex marriage
51 National level
45 State level
2 Neither/Both (VOL.)
2 Don't know/Refused (VOL.)
100 Total
b. Health care reform policy
55 National level
42 State level
1 Neither/Both (VOL.)
2 Don't know/Refused (VOL.)
100 Total
Q.16 CONTINUED…
c. Immigration policy
67 National level
30 State level
1 Neither/Both (VOL.)
2 Don't know/Refused (VOL.)
100 Total
Q.17 Do you think [INSERT; RANDOMIZE] pay too much, too little or the right amount of taxes? What about [INSERT NEXT]?
a. People who have incomes of more than $1 million a year
9 Too much
56 Too little
24 Right amount
11 Don't know/Refused (VOL.)
100 Total
b. People who have incomes of less than $30,000 a year
53 Too much
6 Too little
33 Right amount
8 Don't know/Refused (VOL.)
100 Total
Q.18 Thinking about the candidates running for Congress in November, please tell me whether the following characteristics would make you more likely to vote for them, less likely or whether it would not make a difference. First, a candidate who [INSERT; RANDOMIZE]. Would this make you more likely to vote for them, less likely, or would it not make a difference? What about a candidate who [INSERT NEXT]?
ASK FORM 1:
af1. Supported health care reform
56 More likely
23 Less likely
19 Would not make a difference
2 Don't know/Refused (VOL.)
100 Total
Q.18 CONTINUED…
ASK FORM 2:
bf2. Supports comprehensive immigration reform
45 More likely
18 Less likely
31 Would not make a difference
6 Don't know/Refused (VOL.)
100 Total
ASK ALL:
c. Supports the Tea Party Movement
22 More likely
28 Less likely
40 Would not make a difference
10 Don't know/Refused (VOL.)
100 Total
d. Holds religious beliefs different than your own
8 More likely
16 Less likely
74 Would not make a difference
2 Don't know/Refused (VOL.)
100 Total
e. Supports abortion rights
29 More likely
35 Less likely
35 Would not make a difference
1 Don't know/Refused (VOL.)
100 Total
Q.19 What worries you more, public officials who don't pay enough attention to religion OR public officials who are too close to religious leaders?
34 Not enough attention
56 Too close
10 Don't know/Refused (VOL.)
100 Total
Q.20 Which of the following statements comes closest to your view? [READ IN ORDER]
42 America has always been and is currently a Christian nation
37 America was a Christian nation in the past, but is not now, [OR]
17 America has never been a Christian nation
4 Don't know/Refused (VOL.)
100 Total
IF NOT ATHEIST (RELIG=1-8, 10-14, 99), ASK:
DUAL Do you follow the teachings or practices of more than one religion?
13 Yes
86 No
1 Don't know/Refused (VOL.)
100 Total
IF CHRISTIAN (RELIG=1-4, 13 OR CHR=1), ASK:
Q.21 Do you consider yourself part of the religious right or conservative Christian movement or not?
29 Yes
66 No
5 Don't know/Refused (VOL.)
100 Total
IF NOT ATHEIST (RELIG=1-8, 10-14, 99), ASK:
Q.22 Do you consider yourself part of the progressive religious movement or a religious group working for social justice or not?
20 Yes
75 No
5 Don't know/Refused (VOL.)
100 Total
ATTEND Aside from weddings and funerals, how often do you attend religious services... more than once a week, once a week, once or twice a month, a few times a year, seldom, or never?
14 More than once a week
22 Once a week
15 Once or twice a month
21 A few times a year
15 Seldom
12 Never
1 Don't know/Refused (VOL.)
100 Total
Q.23 How important is religion in your life – the most important thing, very important, somewhat important, not too important, or not at all important?
20 The most important thing
37 Very important
24 Somewhat important
10 Not too important
8 Not at all important
1 Don't know/Refused (VOL.)
100 Total
Q.24 Which comes closest to your view? [READ, IN ORDER]
[Holy book: If Christian or no religion (RELIG=1-4, 9, 10, 12, 13 OR CHR=1), insert "the Bible"; If Jewish (RELIG=5), insert "the Torah"; If Muslim (RELIG=6), insert "the Koran"; If other non-Christian affiliations (RELIG=7,8,14 OR (RELIG=11 AND CHR=2,9)), insert "the Holy Scripture"; IF DK/REF IN RELIGION (RELIG=99) AND CHR=2,9, insert "the Bible"]
69 [Holy book] is the word of God, OR
26 [Holy book] is a book written by men and is not the word of God
2 Other (VOL.)
3 Don't know/Refused (VOL.)
100 Total
IF BELIEVE HOLY BOOK IS WORD OF GOD (Q.24=1), ASK:
Q.25 And would you say that [READ, IN ORDER]?
33 [Holy book] is to be taken literally, word for word, OR
33 Not everything in [Holy book] should be taken literally, word for word
1 Other (VOL.)
2 Don't know/Refused (VOL.)
Q.26 Which statement comes closest to your view of God: God is a person with whom people can have a relationship, God is an impersonal force, or I do not believe in God?
64 God is a person
26 God is an impersonal force
6 I do not believe in God
2 Both/Neither/Other (VOL.)
2 Don't know/Refused (VOL.)
100 Total
IF R HAS RELIGIOUS AFFILIATION (RELIG=1-8,11,13,14, OR CHR=1), ASK:
Q.27 Now I'm going to read you a few pairs of statements. For each pair, please tell me whether the FIRST statement or the SECOND statement comes closer to your own views — even if neither is exactly right. The first pair is... [READ AND RANDOMIZE QUESTIONS; ROTATE OPTIONS] Next…
a.
57 1—When religious people get involved in politics, they should focus primarily on issues of equality and economic justice [OR]
35 2—When religious people get involved in politics, they should focus primarily on issues of personal morality and responsibility
5 Neither/Both equally (VOL.)
3 Don't know/Refused (VOL.)
100 Total
b.
56 1—In the end, being a religious person is primarily about living a good life and doing the right thing [OR]
38 2—In the end, being a religious person is primarily about having faith and the right beliefs
5 Neither/Both equally (VOL.)
1 Don't know/Refused (VOL.)
100 Total
IF ATTEND AT LEAST ONCE OR TWICE A MONTH (ATTEND<4), ASK:
Thinking about your place of worship…
Q.28 Does the clergy at your place of worship ever speak out about the issue of homosexuality?
41 Yes
57 No
2 Don't know/Refused (VOL.)
100 Total
IF 'YES' SPEAKS ABOUT ISSUE (Q28=1), ASK:
Q.29 When your clergy has spoken about homosexuality, do they say it is something that should be accepted, something that should be discouraged, or don't they take a position on the issue?
12 Accepted
66 Discouraged
19 Don't take a position
3 Don't know/Refused (VOL.)
100 Total
ASK ALL:
Q.30 Now, as I read some statements on a few different topics, please tell me if you completely agree, mostly agree, mostly DISagree or completely disagree with each one. (First/Next) [INSERT; RANDOMIZE].
READ FOR FIRST ITEM, THEN REPEAT AS NECESSARY: Do you completely agree, mostly agree, mostly DISagree or completely disagree?
a. More environmental protection is needed even if it raises prices or costs jobs
21 Completely agree
39 Mostly agree
23 Mostly disagree
15 Completely disagree
2 Don't know/Refused (VOL.)
100 Total
b. We must maintain a strict separation of church and state
36 Completely agree
31 Mostly agree
18 Mostly disagree
13 Completely disagree
2 Don't know/Refused (VOL.)
100 Total
c. Political leaders can stay true to their core beliefs on abortion while working to find common ground
19 Completely agree
49 Mostly agree
18 Mostly disagree
10 Completely disagree
4 Don't know/Refused (VOL.)
100 Total
d. Over the past couple of decades, the government has paid too much attention to the problems of blacks and other minorities.
11 Completely agree
26 Mostly agree
40 Mostly disagree
20 Completely disagree
3 Don't know/Refused (VOL.)
100 Total
IF BELIEVE IN GOD/UNIVERSAL SPIRIT (Q.26=1,2,4,9), ASK:
e. God always rewards those who have faith with good health, financial success and fulfilling personal relationships
11 Completely agree
28 Mostly agree
31 Mostly disagree
27 Completely disagree
3 Don't know/Refused (VOL.)
100 Total
IF CHRISTIAN (RELIG=1-4, OR CHR=1), ASK:
f. On most political issues there is one correct Christian position
9 Completely agree
25 Mostly agree
38 Mostly disagree
24 Completely disagree
4 Don't know/Refused (VOL.)
100 Total
ASK ALL:
MEDIA Which of the following television news sources do you trust the MOST to provide accurate information about politics and current events? [READ AND RANDOMIZE]
23 Broadcast network news, such as NBC, ABC or CBS
20 CNN
23 Fox News
6 MSNBC
4 Comedy Central's Daily Show with Jon Stewart
12 Public television
4 Other (VOL)
5 Do not watch television news (VOL.)
3 Don't know/Refused (VOL.)
100 Total
Q.32 Do you consider yourself part of the Tea Party movement or not?
11 Yes
82 No
7 Don't know/Refused (VOL.)
100 Total
END OF INTERVIEW.
THANK RESPONDENT: Thank you again for your time. Have a nice day/evening.
I HEREBY ATTEST THAT THIS IS A TRUE AND HONEST INTERVIEW.
INTERVIEWER ENTER INITIALS
Survey Methodology
The survey was designed and conducted by Public Religion Research Institute and funded by the Ford Foundation, with additional support from the Nathan Cummings Foundation. Results of the survey were based on bilingual (Spanish and English) telephone interviews conducted between September 1, 2010 and September 14, 2010, by professional interviewers under the direction of Directions in Research. Interviews were conducted by telephone among a random sample of 3,013 adults 18 years of age or older in the continental United States (600 respondents were interviewed on a cell phone). The final sample was weighted to ensure proper representativeness.
The weighting was accomplished in two stages. The first stage of weighting corrected for different probabilities of selection associated with the number of adults in each household and each respondent's telephone usage patterns. 1 In the second stage, sample demographics were balanced by form to match target population parameters for gender, age, education, race and Hispanic ethnicity, region (U.S. Census definitions), population density and telephone usage. The population density parameter was derived from Census 2000 data. The telephone usage parameter came from an analysis of the July-December 2009 National Health Interview Survey. All other weighting parameters were derived from an analysis of the Census Bureau's 2009 Annual Social and Economic Supplement (ASEC) data.
The sample weighting was accomplished using Sample Balancing, a special iterative sample-weighting program that simultaneously balances the distributions of all variables. Weights were trimmed to prevent individual interviews from having too much influence on the final results. The use of these weights in statistical analysis ensures that the demographic characteristics of the sample closely approximate the demographic characteristics of the target populations.
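The iterative balancing described above is commonly implemented as raking (iterative proportional fitting): a table of sample counts is alternately rescaled so that its row and column margins match the target population margins. A minimal sketch follows; the counts and targets are invented for illustration and do not reproduce the survey's actual weighting categories or parameters.

```python
import numpy as np

def rake(counts, row_targets, col_targets, iters=1000, tol=1e-9):
    """Iterative proportional fitting: alternately rescale rows and
    columns until the table's margins match the target margins."""
    t = counts.astype(float).copy()
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]   # match row margins
        t *= (col_targets / t.sum(axis=0))[None, :]   # match column margins
        if np.allclose(t.sum(axis=1), row_targets, atol=tol):
            break
    return t

# Hypothetical 2x2 example: raw interview counts by gender x age group,
# raked to invented census-style targets that also sum to n = 3,013.
sample = np.array([[800, 700],
                   [900, 613]])
weighted = rake(sample,
                row_targets=np.array([1477.0, 1536.0]),
                col_targets=np.array([1600.0, 1413.0]))
```

In practice each respondent's weight is then the ratio of their raked cell value to the raw cell count, after which the weights are trimmed as the methodology describes.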
The margin of error is +/- 2.0% for the general sample at the 95% confidence interval. In addition to sampling error, surveys may also be subject to error or bias due to question wording, context, and order effects.
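For a simple random sample, the quoted margin of error can be sanity-checked against the standard formula z * sqrt(p(1-p)/n) at the worst case p = 0.5; the slightly larger reported figure of +/- 2.0% plausibly reflects a design effect from the weighting. A quick check:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) with the survey's n = 3,013 completed interviews:
moe_pct = 100 * margin_of_error(0.5, 3013)
print(round(moe_pct, 1))  # 1.8
```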
1 Telephone usage refers to whether respondents have only a landline telephone, only a cell phone or both types.
Diversity effects on sustainable group performance
Pause Foundation Scholarship for Future Leaders in Sustainable Leadership
Jesper Bernhardsson September 2014
Abstract:
The implication of research seems to be that some groups gain from heterogeneity while other groups benefit from homogeneity. Taking a long-term view, not only efficiency and effectiveness, but also the ability of groups to enhance their capabilities and to increase the wellbeing of group members are important for group performance. It is difficult to identify optimal group configurations because such modelling depends on a multitude of interrelated diversity factors, but such attempts are nonetheless needed given the great potential benefits that groups can achieve by drawing on dissimilarity.
1. Introduction
1.1 Organization
The hunter-gatherers of the late Pleistocene, some tens of thousands of years ago, represent one of the earliest forms of organization in human history. Given the perils they faced in their everyday lives – tracking down mammoths and other wild beasts ranking among their challenges – banding together vastly increased their odds of survival. What such pre-modern groups demonstrate is the consistent need, throughout human history, to rely on groups to achieve aims that could not be achieved by individual effort alone. When defining organizations we tend to bundle aspects such as goal formulation, priorities and decision hierarchies. Indeed, Ahrne & Brunsson (2011) suggest that a formal organization should (1) decide about membership, (2) include a hierarchy, (3) issue commands, (4) monitor compliance and (5) decide about sanctions. These criteria hold in most organizations, whether found within the public sector as administrative bodies or within the private sector as companies. This report focuses on the latter, but its implications are valid for many organizational constructs.
1.2 Sustainability
So how do companies make the best of their individual talents in achieving their collective aims? Naturally – and much like the hunter-gatherers of ages past – by collaboration; for instance, by increasing the potency of decisions by basing them on syntheses of different viewpoints rather than on single points of view. There are caveats, of course. In much the same way that the venerable foragers of the Pleistocene could distinguish edible berries from poisonous ones without much advice, we can find situations today where one individual's experience suffices to make valid decisions – and where collaboration might in fact harm the efficacy of decision-making.
What does sustainability add to this way of reasoning? It suggests that a long-term perspective should be used to judge how equitable companies' choices are to their stakeholders. This is a problematic point of departure, as companies have several stakeholders, and maximising the value for each of them is a process made complex by ambiguity. For instance, household investors on average have much shorter investment horizons than institutional investors, so the two may perceive a company decision to make a large capital investment as detrimental and beneficial to their interests, respectively. Moreover, company employees on average care less about short-term share-value fluctuations, but care plenty about having a workplace in five to ten years' time. Here we take a holistic approach in holding that companies need to consider how groups affect the performance of the company in the long run. We make the simplifying assumption that the performance of a company, ceteris paribus, is the sum of the performance across all the groups that constitute the company.
1.3 Group performance
But how is group performance determined? Hackman & Katz (2010) write that the purpose of any group varies in the extent to which it is aimed at accomplishing group tasks, strengthening the capabilities of the group itself and fostering the wellbeing of individual group members. Different groups will strive to achieve these goals to varying degrees, but together these aspects make up long-term group performance. Availability heuristics make us think foremost of top management groups when we consider how group performance affects the future of a company, but group performance needs to be evaluated at lower strategic and operational tiers as well, since a company's efforts would inevitably falter if they depended on the performance of executive management alone.
To paraphrase what has just been said, groups need to perform; they must be able to work both effectively and efficiently toward achieving their group task. Strengthening the capabilities of a group means that the ability of the group to do the right things in the right way is improved, for instance by agreeing on common rules of behaviour and ways of sharing information. For a group to continue to work together over a longer period of time, it is also necessary for the group to foster the wellbeing of its members. A group in which members feel at ease with each other can increase the level of information sharing and allow the group to challenge mutually held false truths, which, if not uncovered, may lead the group into making flawed decisions. Much research shows that each of these aspects is impacted by the degree of diversity existing in groups.
1.4 Diversity
Diversity research is a very broad field and findings have had incongruent managerial implications. Indeed, Harrison & Klein (2007) critique much of previous research on diversity for being inconsistent. To make research congruent, they propose that diversity be evaluated by degrees of separation, variety and disparity. Separation as an attribute of diversity regards the degree to which group members share similar values, beliefs and attitudes. Dissimilar attitudes in a group may lead to disagreement and/or opposition and decreased task performance. Variety as an attribute of diversity means that in-group differences in kinds, sources or categories of relevant knowledge exist. Examples include differences in educational or professional background, enabling a group to draw on different sources of knowledge. Disparity as an attribute of diversity means that team members differ scale-wise in terms of their status or power. For instance, seniority can lead some group members to obtain higher social power than peer group members.
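Harrison & Klein's three attributes can be made concrete with simple indices that diversity research commonly uses: the standard deviation of an attitude scale for separation, Blau's index for variety, and the coefficient of variation of a valued resource for disparity. The sketch below applies these to a hypothetical four-person group; the attitude scores, backgrounds and salaries are invented purely for illustration.

```python
import statistics
from collections import Counter

def separation(attitudes):
    # Spread of positions on a single attitude scale;
    # the standard deviation is one common operationalization.
    return statistics.pstdev(attitudes)

def variety(categories):
    # Blau's index: 1 minus the sum of squared category proportions.
    # 0 means everyone shares one category; higher means a more even mix.
    n = len(categories)
    return 1 - sum((c / n) ** 2 for c in Counter(categories).values())

def disparity(resources):
    # Coefficient of variation of a valued resource (pay, status, rank).
    return statistics.pstdev(resources) / statistics.mean(resources)

group_attitudes = [2, 2, 6, 6]                       # hypothetical 1-7 scores
group_backgrounds = ["eng", "eng", "law", "marketing"]  # hypothetical fields
group_pay = [50, 50, 50, 200]                        # hypothetical salaries

print(separation(group_attitudes))   # polarized attitudes -> high spread
print(variety(group_backgrounds))    # mixed backgrounds -> index near 2/3
print(disparity(group_pay))          # one outlier salary -> high disparity
```

Note that the same demographic variable can feed any of the three functions, echoing the point below that attribution depends on what the variable entails for performance.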
Ostensibly, demographic diversity variables – for instance age, gender, ethnicity and native language – do not enter the list of attributes above. This is because they fall into different attributes depending on what their incidence actually entails for team performance. Ethnic diversity in a group, for instance, can be attributed as a separation effect if divergent values and beliefs have a significant impact on group performance. Alternatively, it can be attributed as a variation effect if in-group differences in ethnicity enable group members to draw on different sources of knowledge.
A benefit of considering diversity in this way – in terms of how it impacts group performance – is that it becomes possible to identify the ambiguous effects that it entails. For instance, heterogeneity in terms of, say, ethnic heritage will make a group more likely to draw on a broad knowledge base, which should increase the quality of the decisions made: a positive contribution to group performance. At the same time, the values espoused by different cultural regimes may cause disagreements within culturally diverse groups that postpone agreement: a possible negative contribution to group performance. Moreover, as Harrison & Klein point out, there are interdependencies between the diversity types as a function of time. For instance, separation can engender variety by making group members with technical backgrounds espouse evidence-based decision-making over rivalling means of reaching consensus. It is not possible to lay down rules for how these ambiguities operate; net outcomes will vary between situations, depending among other things on the group task, task evaluation, degree of diversity, and interrelation between different types of diversity, but it is important to be aware of the existence of these effects on group performance.
As we can see, the three kinds of diversity outlined above have very real consequences for group performance. More importantly, they may affect group processes and performance positively as well as negatively. Because of the broad-based efforts that have been made over the years to uncover these positive and negative relationships, it is nearly impossible to give anything but an overview of the research field. Nevertheless, this report can be read as a brief introduction to the causal links that exist between diversity and group performance.
2. Group diversity
Diversity effects differ depending on the tasks that a group is charged with performing. It is therefore worthwhile spending a few moments considering what tasks groups perform. Hackman & Katz (2010) categorise groups as focusing on production, services, decision-making, leadership, change, discovery and learning. The optimal relation between individual inputs and output will be different in each of these cases. If we consider a work group tasked with production, for instance, inputs will tend to be related to group output by addition: the sum of individual inputs constitutes total output. By contrast, estimating the value of a company for an investment decision is a complex operation, and individual analysts are unlikely to reach accurate estimates on their own. If one analyst has a critical piece of insider information, on the other hand, that analyst alone will be likelier to make an estimate close to the true value, irrespective of his or her peers' estimates. This is termed a disjunctive relation between inputs and output. Alternatively, we can think of the situation predicted by the efficient market hypothesis, where no analyst is more likely than another to make a correct estimate due to information advantages. In this situation, group members will estimate values both above and below the true value. Research has shown that disparate results tend to compensate each other if averaged: the mean of estimated values will generally be the closest to the true value. Steiner (1972) found five distinct interdependencies between how individual inputs are related to group output: (1) additive – the sum of inputs equals total output, (2) disjunctive – the best individual judgment equals output, (3) compensatory – the average of individuals' inputs equals output, (4) conjunctive – all individual inputs are required and (5) discretionary – the group decides freely on how to combine inputs, be it through joint or individual effort.
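Steiner's combination rules can be made tangible with a toy sketch. The function below is our own illustrative rendering, not Steiner's notation, and the analyst estimates are hypothetical numbers chosen so the true value is 100.

```python
def combine(inputs, rule):
    """Combine individual member inputs into one group output
    under Steiner's (1972) task types (illustrative sketch)."""
    if rule == "additive":       # production: sum of inputs is the output
        return sum(inputs)
    if rule == "disjunctive":    # the single best judgment carries the group
        return max(inputs)
    if rule == "conjunctive":    # the weakest contribution constrains output
        return min(inputs)
    if rule == "compensatory":   # independent errors average out
        return sum(inputs) / len(inputs)
    raise ValueError("discretionary tasks have no fixed combination rule")

# Hypothetical value estimates scattered around a true value of 100.
estimates = [90, 110, 95, 105]
print(combine(estimates, "compensatory"))  # the mean lands on the true value
print(combine(estimates, "disjunctive"))   # best-guess selection overshoots
```

The compensatory case is the efficient-market scenario from the paragraph above: no single analyst is right, but their average is.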
Contemplating what tasks groups perform and what the optimal relation between group inputs and output is should guide group creation – especially in terms of the kinds of diversity that may be required. For routine tasks, where there is little or no upside to out-of-the-box thinking, homogeneous groups – on average better able to foster agreement and efficiency – perform better than heterogeneous groups. For other tasks, say creating a marketing campaign or finding a slogan, creativity will be needed and we expect the heterogeneous group to outperform the homogeneous group on average. As Hackman & Katz write, "the implication seems to be that homogeneous groups should be created in certain contexts, and diverse groups in others". That said, it is useful to revisit our definition of long-term value to the company – sustainable performance means that groups should accomplish their objectives, increase the group's capabilities and foster the wellbeing of its members. This clearly implies that even in production teams, where efficiency requirements would warrant that group members be carbon copies of each other, there will be a need to detect signals transmitted from the external environment and adapt to these. As might be expected, such detection becomes more successful if group members are not all receiving the same signals.
3. Improving group performance
This section discusses some of the positive and negative consequences that diversity may impart on group performance.
Decision-making
Effective decision-making generally depends on the success of group members in contributing their views and information to group discussions, and on the group as a whole institutionalizing frameworks that facilitate information sharing. A higher degree of variety and separation in the sources and kinds of knowledge is generally portrayed in research as conducive to information sharing. Williams & O'Reilly (1998) suggest that group diversity in terms of functional background, tenure, and range of network ties may enrich the supply of ideas, unique approaches and available knowledge, enhancing unit creativity, quality of decision-making and performance of complex tasks. Clearly, having different sources of relevant knowledge in a decision-making unit will increase the quality of decisions. Conversely, significant diversity in, for instance, tenure, conceptualized as disparity, may create a situation where more junior group members do not contribute for fear of saying the wrong things or otherwise upsetting senior group members. This empowerment of a group's veterans and disempowerment of its newcomers can offset many of the positive effects of diversity on decision-making and therefore deserves attention. At the same time, disparity based on task experience or expertise may serve a group's goals well, as the best and most relevant information will be communicated by group members with high credibility (Wittenbaum, 1998).
External network
Reagans, Zuckerman & McEvily (2004) have found that diversity is positively correlated with external network range – the collective width of ties that link the group to people outside of the group – which has a positive impact on group performance. Groups that can draw on a variety of educational and professional backgrounds can employ the external network effects that such differences entail to their advantage. People that break away from the median group background – because they come from different schools or have had different career paths – can support group performance by linking it to networks untapped by other members of the group. Groups can then use these networks as information channels to feed back and inform the group work. Networks can also be used to increase the set of available opportunities. If a group is looking for potential business partners, for instance, a more diverse network increases the odds that viable options can be found.
Groupthink
Janis (1982) calls groupthink a "mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses of action." The term is used to describe the phenomenon where a group becomes dysfunctional despite its potential to achieve. Groupthink is especially dangerous when group members share the same values and have a strong need for affiliation, something that is more likely to happen in groups with low degrees of diversity. Variation in terms of, for instance, espoused values as a consequence of demographic diversity can help curtail the effects of groupthink. In other words, groups in which individual group members are more dissimilar are less likely to take assumptions and arguments at face value, and therefore better able to create a system of checks and balances to sanity check their decisions.
Reaching consensus
Knight et al. (1999) find that demographic diversity in top management teams has a negative correlation with the ability to reach a strategic consensus. While decision-making can be augmented by diversity, diversifying groups appears also to limit it in certain respects. Disagreement stemming from knowledge differences may help turn over rocks that would otherwise be left unturned, but it also lengthens the time it takes for groups to make decisions. Seemingly, group effectiveness for tasks such as learning or making decisions is impacted positively by diversity, but such processes take longer, i.e. become less efficient, in diverse groups compared to more homogeneous groups. The length of discussions also depends on how decisions are made within the group: the time it takes for a group to reach a decision will be longer if total unanimity rather than a simple majority is required. Furthermore, group members may become more polarized rather than more moderate in their opinions following discussions. Polarization is likely to increase in magnitude the greater the degree of group variety, leading to increased separation in views (Harrison & Klein, 2007). It may thus divide members into camps and become a breeding ground for discontent and ineffective discussion in the long run. Indeed, at the far end of the outcome scale, groups may even break up if individual dissimilarity renders the creation of common ground impossible.
Group cohesion
A group is said to be cohesive when in-group bonds linking members with each other are many in number. The degree of group cohesion, also known as internal density, can be measured as the ratio of existing ties between group members relative to the maximum possible number of linkages (Balkundi & Harrison, 2006). Balkundi & Harrison find that both instrumental (work-related) and expressive (friendship) ties are positively related to group performance. Sparrowe, Liden, Wayne, & Kraimer (2001) found that the positive effects of tie density are much stronger if the tasks completed by the group are complex. This is likely because ties may translate into trust, which facilitates the information sharing needed to solve complex problems. An inference from these findings is that heterogeneous groups are less likely to reach the same level of group cohesion and are therefore less likely to benefit from improved performance. One explanation for this is the natural tendency for people to affiliate more with similar others.
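The internal density measure described above has a direct arithmetic form: the number of existing ties divided by the n(n-1)/2 possible ties among n members. A minimal sketch with a hypothetical four-member group:

```python
from itertools import combinations

def internal_density(members, ties):
    """Group cohesion as the ratio of existing ties to the maximum
    possible number of ties, per Balkundi & Harrison (2006).
    `ties` is a set of frozenset pairs of member names."""
    possible = len(members) * (len(members) - 1) // 2
    existing = sum(1 for pair in combinations(members, 2)
                   if frozenset(pair) in ties)
    return existing / possible

members = ["ann", "bo", "cy", "di"]                 # hypothetical group
ties = {frozenset(p) for p in [("ann", "bo"), ("bo", "cy"), ("ann", "cy")]}
print(internal_density(members, ties))              # 3 of 6 possible ties
```

In practice the instrumental and expressive networks could each be passed in as a separate `ties` set, yielding one density per tie type.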
4. Discussion
This section discusses a number of concerns that are relevant to those considering group formation in organizations.
As explained earlier, the degree of group diversity can be quantified by assessing group composition in terms of separation, variety and disparity. But deciding on an optimal mix of kinds of diversity is difficult to accomplish. First, because research is often contradictory, pointing practitioners in many different directions. Second, because results are sometimes ambiguous, often making the influence of functional or demographic diversity factors a matter of trade-off between efficiency and effectiveness goals or some other reciprocally related factors. Third, because individual contributions to group diversity cannot be reduced to single diversity factors – individuals represent specific sets of variables – netting their combined effects on group performance is very complex. As Hackman & Katz (2010) write, "so many contingencies [have been] identified and documented that conceptual models become inelegant and practical advice impossible". This problem is likely one of the primary barriers against translating diversity research into implementation.
It is not uncommon that companies seek social affirmation by making amendments to rigid board or executive management structures. To illustrate this point, imagine a female executive recruited into a management group dominated by senior male executives. Group performance might improve for a number of reasons. The female executive may have different experiences or values by virtue of her gender. It is also possible that her gender is instrumentally related to her functional background; there are substantiated differences in life and career choices between men and women. At the same time, it is possible that the female executive's contribution is undermined by the uneven gender distribution in the group. Perhaps non-linear effects impact the incremental improvements in diversity? Optimizing group performance should entail finding inflection points for diversifying groups: the point at which the marginal contribution to group performance of increasing or decreasing diversity changes from negative to positive or vice versa. Attempts to remedy gender inequality in male-dominated executive groups by small increments in gender equality may serve to build a symbolic value of diversity, but are probably a far cry from optimizing group performance.
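The idea of locating the point where the marginal contribution of additional diversity flips sign can be sketched numerically. The performance curve below is entirely hypothetical; the point is only to show what "finding the inflection point" would mean operationally.

```python
def sign_change_points(performance):
    """Indices at which the marginal effect of one more 'unit' of
    diversity flips sign, for a hypothetical performance curve
    sampled at increasing diversity levels."""
    marginals = [b - a for a, b in zip(performance, performance[1:])]
    return [i + 1 for i, (m1, m2) in enumerate(zip(marginals, marginals[1:]))
            if (m1 > 0) != (m2 > 0)]

# Hypothetical curve: performance first dips as token-level diversity
# adds friction, then rises once diversity reaches a critical mass.
curve = [10, 8, 7, 9, 12, 14]
print(sign_change_points(curve))   # -> [2]: performance bottoms out there
```

On this toy curve, a small symbolic increment (moving from level 0 to level 1) sits on the negative-marginal side of the turning point, which is the report's argument in miniature.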
There are external conditions that impact the value that diversity brings to groups. In groups where the sum of individual inputs equals total output, time constraints can actually be a source of efficiency. If challenged to perform in a short amount of time, groups will be forced to prioritize goals and cut corners as necessary, leading to more efficient time utilization per unit of output. In some situations, however, time scarcity will tend to have mainly detrimental effects. Even if scarcity of time leads to quicker acknowledgement of a group's priorities, there might not be sufficient time to reach the best decision. A group tasked with making decisions needs to find alternatives by synthesizing inputs from individual group members, exhaust the options that lack viability and decide on optimal solutions. This is rarely a process that gains from time constraints.
As touched upon previously, reaping the rewards offered by diversity requires a proactive approach. Deciding on common sets of rules to moderate group conduct can facilitate more effective utilization of dissimilarities across group members. For instance, rules may be set to decide how social rank is handled: does seniority in a group mean that the weighting of individual contributions to group performance is skewed in favour of those with longer tenure or a higher academic degree? If so, is such differentiation mandated by the complexity of the group task? It is probably not uncommon that seniority overshadows most individual contributions – even in domains where other group members are better able to contribute. Another common pitfall is that more merit is assigned to the inputs of group members who are more capable of communicating their viewpoint. To summarize, inefficient utilization of the group's capacity to absorb and process information can be curtailed by rules that create a framework allowing group members to contribute their unique knowledge.
5. Conclusion
Diversity can have both positive and negative effects on group performance. Sometimes the same diversity attribute can both benefit and harm performance, rendering the net effect ambiguous. By taking a long-term perspective, i.e. considering group viability and capability enhancement potential, some of the attributes that impart negative effects on group performance can be reframed as contributing positively to sustainable group performance. To make such conjectures pertinent, they need to be reinforced by research on how group longevity is affected by diversity. Nonetheless, it is reasonable to assume that the reciprocal relationship existing between, for instance, decision quality and time expenditure may be moderated by considering more carefully the importance that decision quality has for the long-term performance of groups and companies. As illustrated, different groups require both different kinds and different degrees of diversity. For a group tasked with production, one of the main key performance indicators will inevitably be some measure of efficiency. Conversely, a group tasked with learning or idea creation will instead require group members to contribute their individual talents and views. The implication seems to be that some group tasks warrant heterogeneity while other group tasks benefit from homogeneity.
6. References
Ahrne & Brunsson (2011). "Organization outside organizations: The significance of partial organization". Organization, 18, 83-104.
Balkundi & Harrison (2006). "Ties, leaders, and time in teams: Strong inference about network structure's effects on team viability and performance". Academy of Management Journal, Vol. 49, No. 1, 49-68.
Forsyth, D. R. (2009). "Group dynamics" (5th ed.). Pacific Grove, CA: Brooks/Cole.
Hackman & Katz (2010). "Group behavior and performance". Handbook of Social Psychology.
Harrison & Klein (2007). "What's the difference? Diversity constructs as separation, variety, or disparity in organizations". Academy of Management Review, Vol. 32, No. 4, 1199-1228.
Knight et al. (1999). "Top management team diversity, group process, and strategic consensus". Strategic Management Journal, 20, 445-465.
Reagans, Zuckerman & McEvily (2004). "How to make the team? Social networks vs. demography as criteria for designing effective teams". Administrative Science Quarterly, 49, 101-133.
Steiner (1972). "Group Processes and Productivity". Academic Press, New York.
Van Knippenberg & Schippers (2007). "Work group diversity". Annual Review of Psychology, 58, 515-541.
Williams & O'Reilly (1998). "Demography and diversity in organizations". Research in Organizational Behaviour, 20, 77-140.