crop
See the crop image example
The crop plugin takes an IMG element and crops it to the dimensions given. The result is a DIV with a background image set to the given height and width, offset to show the cropped region. The new DIV also carries across the existing style attributes of the image.
Crops the image to the dimensions given. If only width and height are supplied, x and y are selected randomly based on the image's height and width.
$("img").crop(x, y, height, width, transparentURL) /* or */ $("img").crop({ x: x, y: y, height: height, width: width, transparentURL: url })
Unfortunately, this slick little plugin requires that you pass in a transparent gif URL, since IE doesn't support the 'data:' pseudo-protocol (which is what I used to generate a transparent gif on the fly)...making it a little less slick in my eyes.
The only thing to watch out for is that cropped images should not have any padding. Since we're using a background-image style to create the cropped appearance, it will bleed into the padding. You can use margin, however, with the same effect.
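Stripped of the jQuery plumbing, the effect amounts to setting a handful of CSS properties on the replacement DIV. Here's a rough sketch in plain JavaScript of the styles involved (the function name and property set are illustrative only, not the plugin's actual internals):

```javascript
// Illustrative only: builds the style map a cropped replacement DIV
// would need -- a fixed box size plus a negatively offset background.
function cropStyles(src, x, y, width, height) {
  return {
    width: width + 'px',
    height: height + 'px',
    backgroundImage: 'url(' + src + ')',
    // shift the image up/left so the (x, y) point becomes the top-left
    backgroundPosition: (-x) + 'px ' + (-y) + 'px'
  };
}
```

This also shows why padding breaks the effect: the background fills the padding box too, so only margin keeps the crop clean.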
labelOver
See the label over example
The labelOver plugin is a follow-on from the text hints, but is in fact the best-practice solution.
It's based on the A List Apart article that demonstrates using a label positioned over the input field.
It's important to enclose the label and input within a div that has the following CSS applied:
DIV { position: relative; float: left; }
LABEL.over-apply { color: #ccc; position: absolute; top: 5px; left: 5px; }
Obviously the top and left values will depend on your own CSS, but it's easy to play around in Firebug to get them just right.
Then apply the plugin using:
$('label').labelOver('over-apply')
The best way to understand how it works is to view the example, then view it with JavaScript turned off, and then CSS turned off.
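The rule the plugin enforces is simple once you strip away the DOM plumbing; sketched as a hypothetical helper (not the plugin's source), it's just:

```javascript
// The overlaid label should only be visible while the field is
// unfocused and still empty -- focus or any typed text hides it.
function labelVisible(focused, value) {
  return !focused && value === '';
}
```

The plugin wires this up via focus/blur handlers on the input, toggling the positioned label accordingly.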
Download the labelOver plugin
pluck
Finally, pluck is a plugin inspired by a comment by Dean Edwards on Dustin Diaz's web site, which I found I needed in a project I was working on recently.
It simply returns an array of attribute values from the matched selector - simple, but useful enough to save me some coding time, and maybe even share-worthy.
I used it to validate that a form contained non-blank values:
if (jQuery.grep(jQuery('form :input').pluck('value'), function (e) { return e.length == 0; }).length) {
  submit.attr('disabled', 'disabled');
} else {
  submit.attr('disabled', '');
}
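In plain JavaScript the idea boils down to mapping each item to one named property; a stand-alone sketch (not the plugin's source) looks like:

```javascript
// Collect one named property from each item in a list -- the same
// shape of result pluck('value') gives for matched form elements.
function pluck(items, name) {
  var result = [];
  for (var i = 0; i < items.length; i++) {
    result.push(items[i][name]);
  }
  return result;
}
```

The validation snippet above then just greps that array for empty strings.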
Download the pluck plugin here
https://remysharp.com/2007/03/19/a-few-more-jquery-plugins-crop-labelover-and-pluck
Example code of reading binary file into byte array in Java
This example shows you how to read a binary file into a byte array from a Java program. This type of example code is needed when you have to read binary data into a byte array and use the byte array data for further processing.
To read the binary file into a byte array we will use the InputStream class of the java.io package. This class provides a read() method which reads data into a byte array.
The following line of code can be used:
inputStream.read(bytes);
Here is the complete code of the Java program that reads the binary file into byte array:
import java.io.*;

public class ReadingBinaryFileIntoByteArray {
    public static void main(String[] args) {
        System.out.println("Reading binary file into byte array example");
        try {
            // Instantiate the file object
            File file = new File("test.zip");
            // Instantiate the input stream
            InputStream inputStream = new FileInputStream(file);
            long length = file.length();
            byte[] bytes = new byte[(int) length];
            // read() may return before the array is full, so loop
            // until the whole file has been consumed
            int offset = 0;
            while (offset < bytes.length) {
                int count = inputStream.read(bytes, offset, bytes.length - offset);
                if (count < 0) break; // end of stream
                offset += count;
            }
            inputStream.close();
            // Print the byte data in string format
            String s = new String(bytes);
            System.out.println(s);
        } catch (Exception e) {
            System.out.println("Error is: " + e.getMessage());
        }
    }
}
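On Java 7 and later, the same result can be had in a single call with java.nio.file.Files.readAllBytes, which handles the partial-read looping internally. A minimal self-contained sketch (the sample file and class name here are illustrative, not part of the original tutorial):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReadAllBytesExample {
    // Reads the whole file into a byte array in one call;
    // Files.readAllBytes loops internally until end of stream.
    static byte[] readFile(String name) throws IOException {
        return Files.readAllBytes(Paths.get(name));
    }

    public static void main(String[] args) throws IOException {
        // Write a small sample file so the example is self-contained
        Path sample = Files.createTempFile("sample", ".bin");
        Files.write(sample, new byte[] {1, 2, 3, 4, 5});

        byte[] bytes = readFile(sample.toString());
        System.out.println("Read " + bytes.length + " bytes"); // Read 5 bytes
    }
}
```

This also avoids the (int) cast on file.length(), since the method sizes the array itself.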
Read more examples at Java File - Example and Tutorials.
http://www.roseindia.net/java/javafile/java-read-binary-file-into-byte-array.shtml
PRINTED POST APPROVED PP342721/00007
DECEMBER 1993 NO. 96
QUEENSLAND ISSUE
WE COULD TELL YOU ALL ABOUT OUR DIVERSE AND SPECTACULAR LOCATIONS.
AND WE COULD IMPRESS YOU WITH OUR FREE ASSISTANCE IN OBTAINING PERMITS.
WE COULD ASSURE YOU OF BRILLIANT WEATHER, FINE FOODS AND 5 STAR HOTELS.
AND THRILL YOU WITH OUR INCENTIVE PACKAGE.
BUT AT THE END OF THE DAY IT ALL COMES DOWN TO THE BOTTOM LINE.
COST. It costs less in Queensland. Interested?
Contact: PACIFIC FILM AND TELEVISION COMMISSION, Robin James, Chief Executive Officer or Richard Stewart, Marketing Manager GPO Box 1436, Brisbane, Queensland 4001 Australia Tel: (+61 -7) 224 4114 Fax: (+61 -7) 229 7538
QUEENSLAND, AUSTRALIA.
CINEMA PAPERS (MTV PUBLISHING LIMITED)
DECEMBER 1993 NUMBER 96 INCORPORATING FILMVIEWS

COVER: Marzena Godecki as Neri in the forthcoming television series Ocean Girl.

CONTENTS
2 Briefly
4 Film in Queensland: An Overview, by Bruce Molloy
'Rough Diamonds': Donald Crombie and Jason Donovan, interviews by Andrew L. Urban
16 Richard Stewart: Director, Film Queensland, interview by Scott Murray
21 'Ocean Girl': picture preview
23 Village Roadshow and Warner Roadshow, examined by Stuart Cunningham and Liz Jacka
28 'The Penal Colony', production report by Andrew L. Urban
32 Australia's First Films 1894-96, Part Six: Surprising Survivals from Colonial Queensland, continuing historical feature by Chris Long and Pat Laughren
39 Shakespeare on Film: 'Macbeth', 'Othello' and 'Much Ado About Nothing', by Brian McFarlane
42 Foreign Festivals: Venezia, Montreal and Toronto, by Peter Malone and Russell Edwards
46 Film Reviews: 'Blackfellas' (Karl Quinn), 'Crush' (Pat Gillespie), 'The Nostradamus Kid' (Karl Quinn), 'This Won't Hurt a Bit!' (Raymond Younis), 'The Wedding Banquet' (Chris Berry), 'Bedevil' (John Wojdylo)
53 Book Reviews: 'The Films of Woody Allen' (reviewed by Anna Dzenis); 'Literature/Film Quarterly: The Australian Cinema' (reviewed by Joe Stefanos); 'Sondheim; Sondheim & Co' and 'Art Isn't Easy: The Theater of Stephen Sondheim' (reviewed by Richard Franklin)
64 Production Survey, including Film Finance Corporation funding decisions
70 Technicalities: Full Effects, Queensland Stories, compiled by Fred Harden
80 Tenebricose Ten

Editor: Scott Murray
Assistant Editor: Raffaele Caputo
Technical Editor: Fred Harden
Administrative Manager: J. Brodie Hanns
MTV Board of Directors: Chris Stewart (Chairman), Patricia Amad, Ross Dimsey, Natalie Miller
Legal Adviser: Dan Pearce, Holding Redlich, Solicitors
Advertising: Contact Patricia Amad
Subscriptions: Raffaele Caputo
Founding Publishers: Peter Beilby, Scott Murray, Philippe Mora
Design: Ian Robertson
Disk Processing: Witchtype
Printing: Jenkin Buxton
Distribution: Network Distribution

Cinema Papers is published with financial assistance from the Australian Film Commission and Film Victoria. © Copyright 1993 MTV Publishing Limited A.C.N. 006 258 699. Signed articles represent the views of the authors and not necessarily those of the editor and publisher. While every care is taken with manuscripts and materials supplied to the magazine, neither the editor nor the publisher can accept liability for any loss or damage which may arise. This magazine may not be reproduced in whole or part without the express permission of the copyright owners. Cinema Papers is published approximately every two months by MTV Publishing Limited, 43 Charles Street, Abbotsford, Victoria, Australia 3067. Telephone (03) 429 551%?, Fax (03) 427 9255.

CONTRIBUTORS: Chris Berry is a Lecturer in Cinema Studies at La Trobe University, Melbourne; Stuart Cunningham is a Senior Lecturer in Communications at Queensland University of Technology; Anna Dzenis is a tutor in Cinema Studies at La Trobe University, Melbourne; Russell Edwards is a freelance writer; Richard Franklin is the director of 'Roadgames' and 'Link', among others; Glenn Fraser runs Streetwise Films; Pat Gillespie is a freelance writer; Liz Jacka is the author of several books on Australian film and television; Crispin Littlehales is a freelance writer living in San Francisco; Chris Long is a Melbourne film historian; Peter Malone is Editor of 'Compass Theology Review'; Brian McFarlane is an Associate Professor in the English Department at Monash University; Bruce Molloy is Professor and Head, School of Media and Journalism, Queensland University of Technology; Karl Quinn is a freelance writer on film; Peter M. Schembri is a freelance writer; Joe Stefanos is a freelance writer; Andrew L. Urban is the Australian correspondent for Moving Pictures International; John Wojdylo is a translator of Polish novels and plays into English; Raymond Younis is a lecturer at the University of Sydney and a passionate lover of films.
BRIEFLY

Corrigenda

Cinema Papers apologizes for the accidental omission of Jan Epstein's name from her coverage of this year's Cannes Film Festival in Cinema Papers No. 94, August 1993, page 22.

Letter

Dear Editor,

I am somewhat puzzled by aspects of the review by Jennings and Hollinsworth (Cinema Papers, No. 94, pp. 55-7) of Marcia Langton's Well, I heard it on the radio and I saw it on the television... (Australian Film Commission, Sydney, 1993).

The charge that Langton "seriously overreaches herself" (which implies that she is way out of her depth) is based in part on "her claim that there is no sizeable body of critical literature about representation of Aboriginality [which] is manifestly ill-informed". This statement is followed by a list of writers whose work has encompassed the representation of Aboriginality.

This is not what Langton actually states. In her discussion on the politics of Aboriginal representation she acknowledges the work of particular writers who have produced critiques of filmic representations of Aboriginal people. She then comments: "But there is no sizeable body of literature which provides an informed, anti-colonial critique of films and videos about Aboriginal people." (p. 24, my emphasis). In other words, her comments were far more specific than is implied by her reviewers. Given that the majority of writers cited by Hollinsworth and Jennings have primarily critiqued textual representations (and here I am referring to literary texts), it is incorrect to accuse Langton of being so ill-informed. The reviewers' remarks read as a patronizing misrepresentation of Langton's intent.

Further, in Jennings' Sites of Difference (whose publication by the Australian Film Institute in 1993 was anticipated in the review), the author states: "Despite a burgeoning interest in Aboriginal Studies in recent years, there have been few general studies of the representations of Aborigines in Australian films." (p. 18) This concurs with Langton's original statement.

It is apparent that there is some academic posturing in this review which reflects poorly on the reviewers and does little to advance the debate on the issues under consideration.

Ian Anderson

Karen Jennings replies: I am surprised that Ian Anderson has placed such emphasis on virtually the only reservation that David Hollinsworth and I expressed about Marcia Langton's book. It was a decidedly favourable review of a publication which deserves to be widely read. And I'm mystified that a criticism of one aspect can be construed as patronizing. But to set the record straight: all of the writers we cited have written significantly about cinema, and I'll be happy to supply Ian with bibliographical details if he would like them.

FFC renews Documentary Accord with ABC and SBS

Chief executive of the Australian Film Finance Corporation, John Morris, announced the renewal of agreements on the funding of Australian documentaries pre-purchased by ABC and SBS. The agreements, known in the industry as the Accord, have been in operation for two years with the ABC and one year with SBS. Since the inception of the Accord, more than 40 hours of television have been funded and to date 20 hours have been completed and screened, attracting significant audiences and critical acclaim.

In a joint statement, ABC managing director David Hill, SBS chief executive Malcolm Long, and John Morris welcomed the Accord's renewal and described it as essential to the maintenance of a viable level of Australian documentary production.

"Without the Accord, local documentary production and its vital role in recording the Australian culture and way of life would be substantially reduced", said the ABC's David Hill.

The new FFC-ABC Accord renews a commitment from the FFC to provide funds for up to twenty hours of documentary a year in 1993-94, '94-95 and '95-96. Under the new agreement, the ABC will provide cash pre-sales of $62,500 for budgets up to $240,000, $67,500 for budgets between $240,000 and $280,000, and $72,500 for budgets between $280,000 and $320,000. For projects with budgets between $320,001 and $350,000, the ABC will provide either a pre-sale of $75,000 where the producer can attract a distribution guarantee (payable in twelve months) of $10,000, or the ABC will provide a pre-sale of $80,000. The FFC will provide the balance of funds in all categories.

SBS's Malcolm Long said: "The SBS-FFC Accord has allowed SBS to continue to pioneer documentary production in the area of multiculturalism. To date, completed Accord films have contributed to the strength of SBS programming. The continuation of the Accord will ensure that SBS can continue to work collaboratively with independent documentary makers in developing innovative, informative and entertaining television."

Under the new agreement with the FFC, SBS will provide during 1993-94 cash pre-sales equal to 23% of the budget for up to 10 hours of documentary with budgets up to $190,000. The FFC will provide the balance of the budget. Subject to normal FFC marketing requirements, the FFC will provide the balance of funds required by ABC and SBS Accord projects.

Morris said the renewal of the Accord was a clear indication of its success and importance to the broadcaster, the Australian film industry and the viewing public.

New exemptions will help film production

The Australian Securities Commission has resolved confusion about how cast and crew contracts should be structured to avoid contravening the Corporations Law. A new exemption is in place that means producers can now offer "points" - a share of the net profits - in films to cast and crew without potentially having to follow the prospectus and prescribed interests provisions of the Corporations Law.

This means producers will have more flexibility in packaging productions, while allowing creative personnel such as scriptwriters and directors to share in a film's success.

The ASC issued a Class Order, which came into effect on 6 October, that recognizes such arrangements as service contracts which entitle a person to a share of revenues or copyright in a film as part of their fee.

"This ruling effectively recognizes what is a common industry practice", says FFC chief executive John Morris. "It's a very practical decision that ends the confusion surrounding this question. Producers can now continue to allow profit participation as part of their negotiations with a minimum of fuss."

The ASC says it recognizes this form of fee payment also acts as an incentive for creative talent in the industry and is consistent with royalties being the usual form of reward for literary and other creative work. An exemption has also been made, subject to some conditions, if the contract is with a writer for the acquisition of the rights in a script.

However, the exemption does not apply to private investors. Subject to previous ASC policy statements, a prospectus is still needed for their participation.

Documentary Conference 2-5 December

The theme for The Third Australian National Documentary Conference is titled "Reflecting the Future". Two major issues of this theme are the government's move towards the Asia-Pacific region, and the impact of interactive multi-media on documentary filmmaking. Opening the Conference is The War Room, which looks at the inner workings of Bill Clinton's 1992 Presidential Campaign. The War Room is the latest film by seminal documentary makers D. A. Pennebaker and Chris Hegedus, who will also be in attendance.

Other guests include major award winners of Japan's Yamagata International Documentary Film Festival: Wu Wenguang (My Time in the Red Guards, China), Anand Patwardhan (In the Name of God, India), Makoto Sato (Living on the River Agano, Japan) and Grand Prize winners Bob Connolly and Robin Anderson for Black Harvest.

For further information contact Film Australia on (02) 413 8565 or 413 8777.

Readers Poll

To celebrate the 20th Anniversary of Cinema Papers, in the next issue we will be polling various industry figures to list their Top Ten Australian films. As well, Cinema Papers invites all readers of the magazine to submit their Top Tens for collation into the Readers' Top Picks. Any film that can conceivably be called Australian is eligible. The closing date is 7 January 1994.

2 • CINEMA PAPERS 96

[BRIEFLY CONTINUES ON PAGE 62]
Film Queensland... Not just a silent partner.

Film Queensland is committed to the development and production of quality film and television. We not only provide a comprehensive range of programs for script development, pre-production and marketing, but also a range of innovative schemes to assist with production financing. The Production Investment Fund provides direct investment, while the Revolving Film Fund lends up to $1 million against presale or distribution agreements at attractive interest rates. Other incentives include wage subsidies and payroll tax rebates and, of course, we offer every assistance with location surveys, permits, facilities and crewing, as well as co-funding arrangements with other state and federal agencies.

You could say... we're more than just a silent partner.

For more information, contact: Film Queensland, GPO Box 1436, Brisbane Qld 4001. Director: Richard Stewart, Telephone: (07) 224 5809. Marketing and Development Manager: Judith Crombie, Telephone: (07) 224 4536.
FILM QUEENSLAND
FILM IN QUEENSLAND: AN OVERVIEW
by BRUCE MOLLOY
Late last year, the Australian Film Commission reported that in 1991-2 Queensland had replaced Victoria as the second largest producer of film and television drama in Australia.[1] Queensland's share of production accounted for 24% of Australian production, compared with New South Wales at 37% and Victoria at 20%. Considering that, in 1988, production in the state had been less than $5 million and the state film agency, the Queensland Film Corporation, had been disbanded and several of its officers charged with fraud, the emergence of a revitalized film industry provides an interesting case study in the development of a regional film industry.[2]

The chances of the Queensland film industry reaching more than $100 million of production in a calendar year (as it has already in 1993) seemed so remote as recently as 1991 that when I presented a paper, "Hollywood on the Gold Coast? Towards a Regional Film Industry", at the Australian Communication Conference in Sydney, The Australian sent its media reporter to interview me, expecting it was a send-up. After I convinced him that it was in fact serious, he recorded a long interview, then told me - in the nicest way - that I was deluding myself, and that there would never be a film industry in Queensland. The interview ended on a spike somewhere.

What has caused this unexpected development to take place? The short answer is the existence in Queensland of a vision for a film industry among certain key players. This vision has been translated into a strategic plan for drawing the various elements of film industry, business, culture and education together to form an environment which encourages symbiosis.

Bruce Molloy is Professor and Head, School of Media and Journalism, at the Queensland University of Technology. He is a board member of the Brisbane International Film Festival and the Queensland Cinematheque, and a Commissioner of the Pacific Film and Television Commission.
What’s happening in Queensland?
The key players in the rejuvenated Queensland film industry comprise the state government through its film agency Film Queensland, a government-owned company called the Pacific Film and Television Commission, and Warner Roadshow Studios. Supporting players include such cultural organizations as Brisbane Independent Filmmakers, Women in Film and Television, and the Brisbane International Film Festival, as well as the film and television committee of Arts Training Queensland and various educational institutions. Analysis of the composition of the various film-related committees and working parties indicates, unsurprisingly, a cross-membership which is instrumental in ensuring that the minor players in this filmic version of alphabet soup are at least aware of the overall strategic vision informing the broad plan, even if they do not always share it.

This overview describes the operations of Film Queensland and the Pacific Film and Television Commission, and their place within the broader strategies for a Queensland film industry. Difficulties inherent in applying a strategic plan to the whole of Queensland can be appreciated when you consider that the distance from Melbourne to Brisbane is about the same as the distance from Brisbane to Cairns. When people living north of the Tropic of Capricorn, like the residents of Rockhampton, Townsville, Mt Isa and Cairns ...

Of course, the notion of a "Queensland film industry" is not unproblematic. Whether providing a location for American films and television series, thus ensuring some degree of continuity of employment for local actors, technicians and creative personnel, constitutes a Queensland film industry, or whether the production of films and television programmes relevant to Queensland is a critical element of such a regional industry, are questions for ongoing debate within local production circles.

The foundations of this overall strategy were laid in 1990 and 1991.
During this period, the sunset clause in the charter of the Queensland Film Corporation saw it replaced by the Queensland Film Development Office in late 1988. Plans for a multi-media complex adjacent to the Warner Roadshow Studios at Coomera were included in the Queensland bid for the Multi-Function Polis. When this bid, initially successful, was disqualified because the government was unable to guarantee title of the land, Premier Goss decided to pursue the more promising Multi-Function Polis proposals anyway. One of these was the Pacific Film and Television Complex.

At about the same time, new management had taken over the film studios built at Coomera, some fifty miles south of Brisbane, as a result of a deal between the former National Party government and Dino De Laurentiis. The new owners were Village Roadshow, which then entered into partnership with Seaworld Industries and with the Time Warner organization to form Warner Roadshow Studios. To recoup the government investment it was essential to convert the studios from white elephant to profitable business. The Pacific Film and Television Complex was to become an important catalyst in this process.

[Photo: Angel (Aden Young) and Tatts (David Field) in Laurie McInnes' Broken Highway.]
Film Queensland

When the discredited Queensland Film Corporation was replaced in 1988 by the Queensland Film Development Office (QFDO), the newly-appointed director, Michael Mitchener, was reported in The Courier Mail as claiming that, "with proper location marketing", an annual production target of $100 million was possible.[3] Despite this prescience, Mitchener decided to return to Victoria in 1990, and the QFDO project officer Richard Stewart took his place. Stewart has presided over the revitalization of the state's film industry ever since. The QFDO operating budget grew from around $700,000 in 1988 to $3.25 million in 1993. The appropriate if inelegant QFDO title was changed early in 1993 to Film Queensland, and the parallel growth of the Pacific Film and Television Commission (PFTC) allowed a division of responsibility between the two organizations. Film Queensland concentrates on the development of local films and filmmakers, while the PFTC attracts interstate and overseas production. This neat division of duties is complicated by Richard Stewart's role as marketing manager of the PFTC, thus ensuring some government say in its day-to-day operations, while the executive director of the Queensland government's Arts Division, Greg Andrews, is a PFTC board member.

Many of the initiatives to stimulate film and television production in Queensland originate with Film Queensland, but are managed by the PFTC, in conjunction with officers of the Queensland Treasury. One of these is the $10 million revolving fund available for low-interest loans, secured against pre-sales or guarantees. This was announced by Wayne Goss at the opening of the 1992 Brisbane International Film Festival, of which Film Queensland is the major sponsor. At this year's Festival opening, Goss announced a further $750,000 available for locally-based filmmakers to bridge shortfalls in production funding. There is little doubt that Goss believes a bright future exists for the film and television industry in Queensland.

Film Queensland offers a range of other incentives in scriptwriting, pre-production and marketing. Stewart states that present Film Queensland policy is to target specific individuals and their projects: "We're able to identify particular producers, carefully evaluate their projects, and then support them with considerable funding." Among producers who have moved (or returned) to Queensland to take advantage of this approach are Ross Dimsey, Damien Parer, Rosa Colosimo and Jonathan Shiff, whose company, Westbridge, is based in Port Douglas.
Stewart looks forward to Film Queensland recording successes similar to The Heartbreak Kid (Michael Jenkins, 1993) or Proof (Jocelyn Moorhouse, 1991). "Our most notable success to date has been involvement in the production of Broken Highway (Laurie McInnes, 1993), which was invited for exhibition in Cannes." He is adamant, however, that Queensland should not be seen as simply a location for Hollywood-replica films. "We're confident that Film Queensland projects will reach the standard expected of the best Australian films, and our assessors are providing feedback that this is so." He is heartened by the success of Laurie McInnes, and other directors with strong Queensland links such as Tracey Moffatt and Jackie McKimmie. Donald Crombie is currently completing a feature, Rough Diamonds, while the television series Ocean Girl, produced by Westbridge, follows in the tradition of children's television established by Butterfly Island, Animal Park and Skippy.

Among Film Queensland's other responsibilities is the task of stimulating film culture. This includes provision of funding for short films, first-project development programmes and other production-related initiatives. It also involves attention to the educational process, with Film Queensland working with the local Australian Film Radio & Television School representative, Queensland University of Technology, Griffith University and the TAFE sector to ensure a continuing supply of trained personnel. This role recently culminated in the appointment of the state's first training coordinator to facilitate secondments, internships and programmes bridging the transition of graduates into industry.

The Film Queensland brief extends to supporting cultural organizations. Stewart sees a need for these groups to collaborate, a view supported by the Australian Film Commission. "There's enormous enthusiasm and energy in this area", says Stewart. "Our goal is to provide a focus and perhaps turn what's presently somewhat of an unguided missile into a guided one." Another long-term aim is the establishment, with federal support, of a National Centre for the Moving Image in Brisbane. As Stewart says, "It's almost an accepted part of Queensland mythology, this lack of federal support. I think we have every right to demand it, and I think we'll get it."

[Photo: Filming Martin Campbell's The Penal Colony in Queensland.]
Pacific Filiti and Television Com misiiòii Although it started out in 1991 as a subsidiary of the QFDO, the PFTC now has a separate existence as a government-owned com pany limited by guarantee, nominally responsible to the DirectorGeneral of the Premier’s Department. The PFTC is controlled by a board of directors and functions as an economic benefits catalyst, designed to attract production to Queensland. From the outset, PFTC board members identified the need for a two-fold approach to the problem of attracting business. First, potential producers should be identified and approached; second, infrastructure would need to be in place. This infrastructure was seen as both “hard” (the technology and plant to support all aspects of the production process), and “soft” (the personnel required to provide creative, business and technical inputs into the industry). The aim was to make possible the full production and postproduction of films and television series in Queensland. Pivotal to these plans was Warner Roadshow Studios, and the objective Was to ensure that it became a “one-stop shop”. Both aspects of thè strategic approach had to proceed simultaneously if the objectives were to be realized. Robin James was appointed chief executive officer in 1991, while Richard Stewart, director of Film Queensland, was appointed marketing manager. During the early days of the PFTC, most business was expected to originate in Japan and South-east Asia, but increasingly the source of business proved to be the U.S. A major attraction for U.S. producers has been the differential between the value of the Australian and U.S. dollars. As James states, the bottom line is always the principal motivation for producers, but the professionalism of the PFTC has given Queensland a competitive edge over other possible production centres with equally weak currencies. 
This professionalism includes high-quality location surveys and attention to detail on location shoots, access to special ist expertise and equipment, and assistance in dealing with authori ties at all levels of government. This level of service, described by Gale Anne Hurd, American producer of the $22 million feature, The Penal C olony, as “equal to the best in the world”, is supported by the range and diversity of locations, and by the various incentives managed by the PFTC. These are the $10 million revolving production fund, the $1 million fund for payroll tax rebate on films with budgets that exceed $3.5 million, and the crew subsidy scheme, which returns up to $ 100,000 for productions which use Queensland-based crews. The PFTC is a lean operation with an operational budget of around $500,000. As well as the chief executive, it has a location liaison manager, an investment manager, a coordinator who han dles travel, marketing and programme coordination, and a secre tary. Projects range from movies of the week for U.S. networks, such as the recent Mercy Mission (with a $3.5 million budget), through tele-features and television series, such as Time Trax II ($21m), to big-budget feature films such Sniper and The Penal Colony. The next major production scheduled for television is the NBC mini series Gaijin, based on James Clavell’s novel, while the Australian CINEMA
CINEMA PAPERS 96 • 7
LEFT: NERI (MARZENA GODECKI) IN MARK DEFRIEST'S OCEAN GIRL.
Film Culture in Queensland
component of the Paul Hogan project, Lightning Jack (total budget $35 million), was shot in Queensland. The supporting infrastructure has expanded greatly since the first series of Mission: Impossible used to beam the footage to Los Angeles for editing. The need for a film processing laboratory was identified early on and satisfied this year by the establishment of the Atlab facility on the Warner Roadshow Studios site. A pre-feasibility study jointly funded by the Multi-Function Polis and Department of Industry Trade and Regional Development is currently assessing the economic viability of developing a state-of-the-art post-production facility on or near the Warner Roadshow Studios complex as part of a new techno-park development. James is realistic about the levels of production that might be attracted from the U.S. and Asia: What we can do is provide services particularly to Asia because we have the creative expertise and the experience, and also to the U.S. for low- to medium-budget production. I'd be surprised if we attract much high-budget American production. The PFTC board is aware of the scepticism and criticism directed at the PFTC by those who believe its activities conflict with the need to preserve Australian culture through indigenous production. However, the PFTC board believes that the two types of activities can be reconciled. As James puts it, I see them as complementary. I don't see why we can't reflect Australian culture in local film and television production, while simultaneously marketing our services, our expertise and our locations, all of which are world-class. Policy directions for the PFTC are set by its board, which comprises a cross-section of members representing film and television, government, tourism and marketing, finance and education. Such a cross-section brings a breadth of skills and experience, and this pays off particularly in the process of strategic planning.
This emphasis on planning has, in James' terms, distinguished the operations of the PFTC: "Too often the film business in Australia has been the preserve of the gifted amateur rather than the professional. If the film industry in Australia is to survive, it will be through thorough planning and the application of sound business principles."
The various organizations dedicated to advancing film culture in Queensland depend largely on Film Queensland and the Australian Film Commission for a considerable proportion of their funding. As Richard Stewart suggests above, the two government agencies seem to favour some rationalization of these organizations for economic reasons. An analysis of the role and functions of the various organizations, the Coulter-Pacey Report, was undertaken in 1992. Currently Andrew Zielinski, manager of the South Australian Video Centre, has been retained as a consultant to prepare a report on implementation of the Coulter-Pacey recommendations. In Brisbane, the major film cultural organizations include Brisbane Independent Filmmakers, Women in Film and Television and Queensland Cinematheque. Brisbane Independent Filmmakers, under the energetic leadership of Jonathon Hardy, has recently expanded its range of activities to include exhibition and seminars. Women in Film and Television continues to serve its members quietly and efficiently. Queensland Cinematheque, after a flurry of activity in 1992, is currently experiencing a minor identity crisis as it endeavours to redefine its aims following the implementation of the National Cinematheque programme. Following the success of the 1990 Queensland Images festival, moves were made towards the establishment of a full-scale international film festival in Brisbane. The first of these festivals was held in 1992, incorporating both considerable popular content and a significant Asian component as a distinctive feature of the Brisbane Film Festival. These Asian films were recommended by Tony Rayns, who, together with David Stratton, is a major programming consultant. The 1992 Festival was an outstanding success in terms of attendance and critical response. The more ambitious 1993 Festival retained the 1992 levels of attendance.
Film Queensland is the Festival's major sponsor, supported by the Australian Film Commission, Warner Roadshow and the stockbroking firm Morgans. One of the most successful screening series in Brisbane is conducted by the State Library of Queensland with annual attendances of around 8000, despite the limited capacity of its theatrette. Also worthy of note are two regionally-based indigenous media groups: Murri Image, located near Gympie, and the Townsville Aboriginal and Islander Media Association (TAIMA) in north Queensland. Both Murri Image and TAIMA are active in production, skills development and related cultural activities.
Conclusion

In his response to receiving the Chauvel Award for his distinguished contribution to Australian filmmaking at this year's Brisbane International Film Festival, Paul Cox stated, referring to the energy evident in Queensland film culture, that "There's a fire burning in this city." This comment might be applied with some justification to the level of film and television activity of all types occurring in Queensland. Acknowledgement: The assistance of Richard Stewart and Robin James in preparing this article is gratefully acknowledged.
1. AFC National Survey of Film, Television and Video Production, 1989-92.
2. The inglorious history of the QFC is described by Helen Yeates in her contribution to Jonathan Dawson and Bruce Molloy (eds), Queensland Images in Film and Television, University of Queensland Press, 1990.
3. The Courier Mail, 23 November 1988.
CHRISSIE BRIGHT (ANGIE MILLIKEN) AND MIKE TYRELL (JASON DONOVAN) IN DONALD CROMBIE'S ROUGH DIAMONDS.
With Rough Diamonds, director Donald Crombie is attempting something deceptively treacherous: a genuinely Australian film with all the innocent charm of a Disney family movie - and similar box-office success. One of the elements that will either make or break this ambition is Jason Donovan's debut as a leading man - not as smooth-faced young pop star, but as a slightly scruffy cattleman and single father. Donovan is Mike Tyrell, whose life changes when, in a moment of inattention, the cattle truck he is driving hits a car parked on the side of the road. The car belongs to Chrissie Bright (Angie Milliken), an ex-singer turned barrister's wife on the run from suburban life. Rough Diamonds is based on an original script by Crombie and Christopher Lee. The film is produced by Damien Parer, in association with Beyond Films and Southern Star Entertainments, with major financing from the Australian Film Finance Corporation and Film Queensland. It was mostly shot on location in Boonah Shire, Queensland. Andrew L. Urban visited the set during filming at Toongoolawha which, Urban notes, "the producers have carefully disguised in the film by renaming Boongoolawha".
Rough Diamonds
Donald Crombie

While possibly best known for Caddie (1976), Cathy's Child (1979) and Playing Beatie Bow (1986) - a mix of vastly different films - Donald Crombie (above) has made six features, co-directed one and made several tele-features and mini-series.1 Five of these features have won awards at either Australian or international festivals.

1. The other features are The Irishman (1978), The Killing of Angel Street (1981) and Kitty and the Bagman (1983). With Ken Hannam, Crombie directed Robbery Under Arms (1985), which was made as both a feature and a mini-series.

What was the genesis of the film?

It all began when we were filming The Irishman in North Queensland back in 1977. We were driving out of town one day and happened to go past a road gang. Whoever was showing us around said, "See that chap over there on the shovel. He owns Rockhampton Downs, 80,000 hectares of prime beef country." I was fascinated with the thought of a man, who on paper would be a multi-millionaire, having to work on the roads. I then learnt about rural debt and how people who own large tracts of land are sometimes literally penniless. I thought there was a movie in that, particularly for city dwellers. I then wrote a social drama on commission for Film Australia. It was all very serious and well meaning, but it never got made. But I kept thinking about the idea and over the years it evolved. I realized that if it were ever going to get up, it would have to be an entertainment. The question was how to make it entertaining. The story about a cattleman going broke or battling the banks, even with moderate-level stars, is not going to easily attract money because people assume it will be depressing. So we swung it right around and introduced the music elements and the charm.

Was the idea of Mike being a musician added after you considered Jason Donovan for the role?

No, it was written into the script during the course of development, some three or four years ago. Mike can sing, but he is not the singer in the story - Chrissie is. She is the one with the experience and a gold record or two in her past. Mike's just a reasonably good bush dance-hall singer. In theory, we didn't need to have a singer like Jason as Mike. After all, Angie Milliken is not doing Chrissie's singing voice. But, obviously, it is an advantage to have someone who is known as a recording star.

To what extent does Rough Diamonds get to deal with the issues you discovered in the bush?

When you first meet Mike he's driving a cattle truck. You don't realize he's a cattleman; you think he's a truck driver. And then it evolves that he actually owns this property and he is trying to stay out of the hands of the bank. He's driving for a living, not because he wants to.

Do we learn why he is in debt?

You know he owes money to the bank. We don't go into it, though. We are not giving a lesson in rural economics. The bank is represented by Arthur [Jeff Truman], who keeps trying to repossess Mike's prize bull. He is a rather pompous but quite likeable character. He's somewhat ineffectual and tries desperately to be liked. He actually believes that he is doing the right thing by his customers in suggesting that perhaps it's time they gave up and moved on. I think Arthur's quite a real character from what I've read and heard about rural bank managers. But he's not a villain; he's not the archetype.

Is there a villain?
No, I don't suppose there is. Arthur is the nearest thing to one. He's the threat.

In terms of structuring the script, did you have any qualms about the fact that almost all the characters seem to be nice, positive people?

No. It was a conscious decision not to create black characters. It is probably easier to write truly bad people, but we were trying to find the right lightness of tone. That was the biggest problem: not making it too slapstick, or too serious; trying to find the right levels of the comedy.

The setting suggests a rediscovery of the original Australian style of humour, that laconic, colourful humour best exemplified by Crocodile Dundee [Peter Faiman, 1986].

Yes. Almost all the secondary characters have some laconic touch that is based on truth. The doctor in the film, for example, although he doesn't have any lines, is based on a doctor that I actually saw once. I won't name the town, but this particular doctor liked to
drink and it was well known that he liked to drink. At a rodeo - and I saw this - one of the buckjumpers came off his horse and was lying inert on the ground. There was a long silence and suddenly this voice said, "Get the doc." And the doc, who was there at the ringside, weaved out and ran towards this fallen cowboy. Somebody then called out, "Look out Jim, the doc's coming." With that, the cowboy looked up and ran in the opposite direction ... We put that in the movie. A lot of the film is based on observation. Part of the enjoyment will come from the observation of characters and the little things they do - like the dog on the property which sleeps in the boot of the car that's always left open. Of course, this could also be the film's weakness, too, because if you don't notice these things or don't find them funny, you might not find the film particularly funny, either. This is also not a film where the dialogue conveys all the humour. There are not many wisecracks. It mightn't be the greatest dialogue in the world, but it's real. I was very offended by one of the script assessments which said, "Didn't like the American influence in the dialogue." I thought, "Well, bugger me, I don't know where the American influence is. I have no idea." Sure, people say "okay", but that's been with us a generation or more now. I think the film is very genuinely Australian, which will either make it or sink it. We took the deep breath and said, "This film's going to be a genuinely Australian film. We are not going to allow any influences to come in from overseas. We are going to avoid having an American lead." Actually, a Texan playing Mike was seriously suggested by one of our financiers in the past. We have been through a fairly tortuous trail, but we managed to resist having a story about a Texan who happens to be living in Queensland.

Can you put a label on the film?

Yes.
The label is “romance, music and cattle theft”, which I hope is going to be attached to the title on the film. I think that sums it up really well.
Did you live in the bush for a while to observe all these things?

No, but I come from a family where previous generations were on the land. Maybe I have some sort of affinity with those sort of characters. The other thing about this film is that everything is coming out with some sort of truth. There isn't any moment which I think is false. I hope that makes it work.

You have a well-recognized career, having made some films of lasting value. Some better than others ... Where does Rough Diamonds fit in that context? Are you enjoying the process more than before?

I find the process extremely difficult, maybe because this is a personal project. It was not something I was offered. These days it's so hard to make a personal movie, or a film that you have generated yourself with producers and other people, that for it not to work would be a real tragedy. For that reason it is harder. It's also harder because we don't have as much money as we should have. I know everybody says that. But when we were facing the reality of how much money we could get to make it, we took a deep breath and said, "We are still going to make the movie and not cut a lot of the scenes or replace the more expensive elements in the script with scenes of people just talking." In other words, we tried to make it a movie, not a telemovie. And I think we might have succeeded, although it's really too early to say.

How did you do that?

Well, it's fairly scene-intensive. In telemovies and in mini-series, a lot of the drama is conveyed by people sitting in rooms and cars talking. With this film, there are quite a lot of little scenes where people move through. For example, there is a scene where the girls are talking about what the bull is going to mean to them and, instead of finishing all that dialogue and then starting a new scene, they actually jump
on a vehicle and continue the scene in the travelling vehicle. That is very expensive to do - to split a 45-second sequence into a 35-second sequence and a 10-second sequence done as a travelling shot. That's the sort of thing that probably separates this movie from a telemovie, more so than the lenses you use.

How, then, were the cost savings achieved?

By not filming over 8 weeks, and trying to do it in 6. Every day had to be planned meticulously, right down to the number of shots. We can do about 20 set-ups a day, so we plan the coverage to fit that. There isn't time to say, "That doesn't work that well. Let's try and do it another way." It really has to work the first time. Everything has been planned to the nth degree, and it's been an extremely efficient production. Apart from the weather problems, nothing really has gone wrong. We haven't lost time because we hadn't planned something properly. We did lose time with the animals, however. I think if we'd have known what was ahead of us we might have taken a deeper breath. But we didn't know. We went into it like virgins, not having done intensive work with bulls before.

When you talked to DOP John Stokes, what were the stylistic things you discussed?

We got photographs out of A Day in the Life of Australia and looked at the colours, the sun. We talked about how in scenes on the verandah of the homestead we should see the countryside. We didn't want to expose just for the verandah and let everything else burn out. We also talked about the lenses. Virtually everything is shot on a 50mm and upwards lens. They give a slightly longer effect and everything is packed in. We don't use wide-angle lenses very often - only sometimes with the bull. They make the bull look a bit bigger.

What didn't you expect about the bulls?

The nearest way of equating doing drama around a bull is being at sea. When you work with boats, everything moves all the time, and you can't control it. Bulls also keep moving. They like shifting their weight. Our main bull weighs 1.3 tonnes, so, when it decides it doesn't want to stand there, no one is going to say, "Please, stay on your marks." And we had seven bulls in a line when we did the cattle-judging sequence, in very powerful winds! The other thing we discovered is that by using a bull with a hump - our bulls are Brahmins - actors kept getting lost behind the hump. The bull is actually taller than young Haley Toomey [Samantha, Mike's sister, and the bull's handler]. You can laugh about it now, but it caused a bit of tension. Our shooting ratio is higher than it should be in a normal drama because we had to get the shots to get the drama right when working with the bull.

Is it a black bull?

No, it's a red and white bull. It knows it's a champion. It's better bred than most of the crew! In the script it's described as a deep thinker. So, when things are happening around it, you cut to the bull and it's thinking. The scene of the bull being towed through Brisbane is very funny. The bull stands on the back of this open cage. It's a very regal animal, looking around. To me that is funny.

CHRISSIE AND MIKE. ROUGH DIAMONDS.

How did you cast the bull?

We looked for an animal that created a concept that the audience would feel comfortable with. Some bulls look mean, and an audience wouldn't feel comfortable seeing a small girl holding a big mean-looking bull. This bull you can cuddle. I don't mean this in a pejorative sense, and I wouldn't want it to be taken as such, but there is a whiff of Disney in this. We have a charming, good-looking cast and we tried to make the film as attractive and charming as possible. As far as the marketing of the film goes and its potential for success, I have a feeling that the films which have really worked in Australia have been three-generational films. That is, three generations can go and see them without their causing offence. A grandpa can take his 13-year-old granddaughter and know that they are not going to be confronted with nudity, sex, violence, etc. We are aiming at the people who don't go to the pictures a lot, but who will come out for a special Australian film. I remember going to see Crocodile Dundee at a suburban cinema, and I was amazed that whole families were at the pictures: mum and dad, the kids and the grandparents. I'd never seen that before. When we were designing this film, we aimed for that market. That probably explains best why there aren't certain elements in the film. For example, there aren't torrid sex scenes between Angie Milliken and Jason Donovan. I mean the nearest we get to that is when he takes his shirt off on one or two occasions.

When I arrived on set, the first thing I saw was Mike and Chrissie kissing outside the door of a pub. You choreographed them to be turning around as they kissed, like in a slow dance.
That's the end of the movie. There is a song which Chrissie is singing, because she goes on and becomes a singer. The whole story is they keep pulling apart, coming together, pulling apart, and finally at the end of the movie they are together - in the good traditions of this sort of entertainment.

What about the music? Is it all original?

No, we are using three classics, "Help Me Make it Through the Night", "Could I Have This Dance?" and the Johnny Farnham hit, "Two Strong Hearts". The rest are original. I'm not quite sure whether they are Australian original or whether they have been got from a pool. There is only one song that has been written especially for the movie, which is the title song, "Rough Diamonds". Lee Kernaghan sings that. The music producer is Garth Porter. Jason flew to Sydney and did his vocals with Garth. He sings "Help Me Make it Through the Night".

How much singing is there all together?

Jason will be singing 6-8 minutes. There is about 10 minutes of singing all together.

Are you aiming for about 95 minutes?

That would be about tops. The story itself is fairly slight, so we wouldn't want to try and drag it out any more than that. Of course, if it worked at 100 minutes we wouldn't say no. But it would have to really convince us all that it was working, because I think 90 minutes of entertainment is about right. Our screen times are up at the moment, so we are not quite sure what we will end up with. But hopefully we can edit it down.

How would you like people to walk out of the cinema?

With a smile on their faces and telling their friends to come and see the movie. I think it is so important that they actually enjoy it. If they don't enjoy the film, it has no value because it doesn't have any deep message to give the world. So if it works, it will work because it is a charming entertainment that you will actually enjoy in the 90 minutes that you spend with it.

Jason Donovan

After small roles in Blood Oath and a student film in London, and with the frustration of several projects having faltered in pre-production, Jason Donovan finally has his sought-after lead in Rough Diamonds. Donovan had been starring in London in Joseph and his Amazing Technicolor Dreamcoat, and returned to that after finishing Rough Diamonds. In February, he will begin his new album for Polydor records.

JASON DONOVAN AS MIKE.

What attracted you to what has become your first major role in a theatrical feature?

I was impressed with the script. It's a very Australian and commercial piece. It has Australian people and humour, and an Australian cast. That attracted me a lot. It's not the usual syndrome of trying to put an American or an Englishman in there to sell the product overseas. It stands up to the buyers in its own right. Without saying it wouldn't be a challenge to me, I felt the part was something that wouldn't throw me. I wouldn't be trying to play someone far removed from me as a person. Instead of trying to do something in England, which might have required an English accent, I wanted a soft introduction, as it were. As you know, I have been involved in other productions that have missed out on finance. This one nearly did, too. It did twice, which was like, "Oh God, not the fourth time!" But I had faith in Damien [Parer] and I'd worked with Donald [Crombie] before [on Heroes]. I liked his sense of direction; he leaves a lot up to you. I think it is very important in the casting to get a lot of your acting work done, and Donald had faith in what I could do. I've always wanted to do cinema - it's been high on my agenda - but, after getting out of Neighbours, I wanted to find the right project - and a project with the money to get made!

Now that you have been doing it for a few weeks, have you found the creative stretch enjoyable?

Oh, absolutely. The romanticism of it I haven't touched for quite a while. Joseph isn't exactly a romantic piece. In Heroes, I played a soldier and, in Shadows of the Heart, I was a sort of drunk crazy type. It's been challenging to relax in front of the camera enough to let your emotions speak for themselves and to let the story take over your mind. Since coming out of school, where one is more energetic and in peer groups where there is a lot more dominance between people, I've probably softened a lot. This guy in Rough Diamonds has a bit of punch to him.
The first time I got on the set, they said, "Okay, we're doing the fight stuff today." It was like, "Oh, I haven't done this in a long time." It hadn't even crossed my mind! At school, I was a pretty sort of placid guy. But you do sports and you are a more physical sort of person.

Apparently, your childhood was pretty uneventful, without any big family traumas. The biggest hassles you've probably had have been dealing with the British press. Can you gain anything from that?
CONCLUDES ON PAGE 58
DIRECTOR, FILM QUEENSLAND
Richard Stewart is director of the Queensland state government's very active film instrumentality, Film Queensland. Brought in to help assess the wreckage of the Queensland Film Corporation in 1987, Stewart has helped oversee a remarkable revival in the state's film production fortunes. Much credit for this is due to Film Queensland, as well as to the spirit of several independent Queensland producers and directors, and, most important, the massive and financially successful presence of Warner Roadshow's Movie World Studios on the Gold Coast. Stewart is also the marketing manager of the Pacific Film and Television Commission and recently became the first Australian appointed to the Association of Film Commissioners International. He is extremely well placed to give an extensive and forthright view on the state of film production in Queensland.
INTERVIEW: MURRAY
What does Film Queensland owe in legal structure to the Queensland Film Development Office and earlier incarnations?

In October 1987, the Queensland Film Corporation was wound up. It had a sunset clause, being only ever intended to last for ten years. But clearly after the matter of Allan Callaghan¹, and the perceived lack of success, it was a conscious government decision not to renew the licence of the Corporation. In early 1988, two people came on the scene: myself and Michael Mitchener. Michael's job was to prepare a report on what had really happened in relation to the Corporation: primarily why it failed and an exploration of future options. It was only a verbal brief from what I can gather - I never saw it in writing - and was given to him by the then Director of the Arts, Donna Grieves. I came in from a different perspective, in so far as I'd been working in government for a while. I have some accounting background and a background in film. I was asked to do a reconciliation of all the assets of the Corporation, to look at what films had been made, what their position was in terms of marketing, what recompense may be due, what amounts may be still outstanding to individuals, and so on. As you know, the Corporation was also acting at that time as investors' representative to a number of films. My job took up most of 1988. It didn't take long for word to spread in the industry that there were two people sitting in the office there. We received a number of requests. Somebody then decided to call us the Queensland Film Development Office and we started with a very humble budget of $695,000.
The government still had a wait-and-see perspective, with no commitment at all to an ongoing film assistance organization. But we were able to change its mind on that one [laughs] by a demonstration of a number of things. A little bit of luck came into play as well. This was when Mike Ahern was still Premier. About that time, the De Laurentiis Studios on the Gold Coast were in their virtual death throes because of Dino's bankruptcy overseas. The Studios were absolutely vacant and the only film that had been mooted there, Total Recall, had gone elsewhere and eventually ended up in Mexico - but that's another story. There was a range of opinions as to what should happen to the Studios. Fortunately, none of the other alternatives - such as converting it into an aircraft hangar, making an airport for the Gold Coast - happened. Instead, Village Roadshow decided to take over the facility. At the same time, Paramount came in with two television series: Mission: Impossible and Dolphin Cove or Dolphin Bay. That caused government to rethink the possibilities of a film industry. Here we were sitting in a state with a studio which had been perceived as a white elephant, but with about $30 million worth of production going through. Everybody felt that perhaps it could be turned into the nucleus of a developing Queensland film industry. It was along those lines that we convinced government to start reassessing its earlier position in relation to film development. We were then given $1.2 million for the next year. We already had developed a set of programmes of assistance, and participated in Locations Expo in 1989, so we obviously had a clear direction, from within the office and also from government, to market Queensland as a location. We also introduced a range of assistance programmes, encompassing script development, pre-production and marketing. We also started to work at a cultural level with the introduction of such things as the Queensland Young Filmmakers Awards.
We then had a change of government and Wayne Goss came into power. Things went into a period of review, where we were obliged to look very carefully at our directions. The review lasted a long time - rather too long, actually, because it also led to instability in terms of the office. You see, we still hadn't really been given an imprimatur from government; our activities were never enshrined in legislation. We were just simply a branch of the Arts Division, as it was called. We could have been told to wind up shop at any time.

Was the internal review of the whole Arts Division, or just the film office?

Of the entire Arts Division in Queensland, as well as a number of companies that had been funded by the Arts Division. The review was quite successful in terms of our perspective and it affirmed what we had been doing. The general feeling of the review committee was that they were happy with the programmes of assistance and with our policy. In fact, our policy in those days was quite radical for Queensland because we were the only arts body in the state which was funding individuals. All other arts grants operated by the Arts Division were provided to organizations for operational expenditure. When we came along and presented our report to the review committee, they were quite taken aback. They said that what film had been doing was basically a blueprint for the other art forms. We were funding individuals, using peer group assessment, and these two things were introduced throughout the arts in Queensland. After the review, it became clear that the Queensland Film
Development Office had a future under the Goss government. That was confirmed on a number of occasions by the Premier. His government made a strong commitment to film and he has continued to do so. Then, of course, the Queensland Film Development Office changed its name to Film Queensland earlier this year. That was really done to try and achieve a better national focus for film organizations. It seemed that Film Victoria had set the standard here, by its name. There was more to it than that, however. There was an underlying philosophy that Film Queensland had in fact moved from an organization which was strictly a development office to an organization that could encompass a whole range of film activities. Film production had become very much a reality of Queensland life and the office had a contributing role in an industry which, in dollars and cents terms, had become a significant player for Queensland. As all this was going on, we set about developing a new range of policies. In 1991, we developed the Pacific Film and Television Commission (PFTC), which is a separate international marketing arm of Film Queensland. We also set about some other initiatives, such as the Brisbane International Film Festival. We'd had a smaller event called Queensland Images, which was a retrospective festival in 1991. We'd been pleased with the general success of that event. The Festival was established initially to showcase the work of up-and-coming Queensland filmmakers. It also had a strong Asia-Pacific focus, again as part of overall government policy. The Queensland government, in its trade and investment sections, has a strong Asia focus. It has secretariats for places like Korea, Taiwan, Hong Kong and Japan. So, we have had a strong Asia focus in our Festival. It still reflects that, particularly with the excellent assistance from Tony Rayns.

What is the legal status of Film Queensland today?
Film Queensland remains a branch, but now a formal branch, of Arts Queensland. The Pacific Film and Television Commission is a wholly-owned government corporation. Film Queensland is not a statutory authority like the others.

Is that a disadvantage or an advantage?

When we were under the wing of the Premier’s Department, that was a much more difficult question to answer. Being directly involved then in the total infrastructure was quite useful, particularly in terms of matters relating to budget, flexibility and networks. There is obviously quite a lot to be gained by being in the Premier’s office. But now, being part of the Justice and Attorney-General’s Department, I can answer the question very easily. It is in no way as convenient or as useful or as flexible as in the past. We are finding difficulties in that environment. It’s not because there is anything wrong with individuals involved in the Justice and Attorney-General’s Department, except that it’s about time we actually became a statutory authority.

Do you think that will happen?

Sure. It’s just going to take time. Maybe Arts Queensland will become a statutory authority and we will become part of that authority. I guess there are several scenarios: we could become a statutory authority first, Arts Queensland could become a statutory authority, they could both happen at the same time, or we may achieve our goal a little later.

CINEMA PAPERS 96 - 17
I think it’s fair to say there is little legacy of the Queensland Film Corporation to haunt us, as was suggested when there was talk a few years ago of statutory authority for the Queensland Film Development Office. Now it’s a different scenario altogether. We are close to a statutory authority, because it’s now called the Office of Arts and Cultural Development. We have a good relationship with that organization and I see no reason to rush into a statutory corporation, except when our response times are affected by excessive red tape.

Your budget at the moment is $2.7 million, plus $750,000 for the Equity Fund.

That’s right. We also administer a range of other funds, including about $3.5 million in the Revolving Film Fund (RFF). We also administer another half a million a year or so in other government incentive programmes, such as the payroll tax rebate scheme and Queensland crew subsidy programme. We have a number of other incentives as well to encourage production, not just for foreign but for local productions as well.

The sum total would put Film Queensland on a similar financial level, or even higher, than say Film Victoria.

Pretty well. The mix is different because we are the only state running a discounting transaction. Our Equity Fund will be run along very similar lines to Film Victoria’s or the AFC’s funds. We are drawing up guidelines for that now. There won’t be too many surprises, except we may use some of the funds to possibly interface with the Revolving Film Fund, so a client coming to us can access RFF money and Film Queensland money. We’ll work out some rather interesting combinations of both loan and investment. Having access to a loan fund as well as investment funds - the loan fund is much higher in quantum - should give us an interesting advantage when it comes to putting deals together.

Loan funds are being discussed in principle at the moment by the AFC’s consultant, John Maynard.
John is actually a recipient of some RFF funding for his next picture, which is going to be directed by Gerard Lee, who is living up in Queensland now. John has had experience of what this fund is about. I haven’t actually spoken to him about it specifically, but it wouldn’t surprise me if he was thinking about it, because it works well. And, of course, the American film industry is based on discounting transactions. That is how films are made there: people get a deal and go to a bank. That’s what we are doing here: we are running a bank and it works.

How much production do you think Film Queensland can viably generate in a year?

I don’t know. I read a report that Peat Marwick has done for Greg Smith at the New South Wales Film and Television Office, and he seems to be using the money wisely and well. [laughs] If one considers that the FFC requirement for pre-sales to be around 30% - the amount obviously varies depending on who is making application and other factors - it seems that an injection of $200,000 straight equity from the government film office can be very useful in that equation. It’s not quite 10%, but it’s closing in on 10%, and that can be hard to get. So, I think the $750,000 Equity Fund could be carefully used to lever up about 3 or 4 productions each year. I see no reason why that couldn’t happen, with a typical investment of about $200,000 for each picture or television show. The RFF, being a little bit larger and offering up to $1 million of investment, but only 20% of the total budget, ought to be able to generate probably about another three or four - possibly more -
projects. Some of those projects could be quite large properties, and we have loaned up to $1 million on some. Given that mix, and everything else that is happening in the state, I think the slate of productions that we’d see in Queensland in the future might be anything from six to eight in an average year - maybe more if we are lucky. We have the potential to do that, but there are limitations as well. There is our small producer base, the availability of crews and studio space, and the limitations of a fairly small office - there are only 7 people at Film Queensland, so it’s not exactly a big office. There is also the fact there is only a small amount of network production in Queensland. We don’t have that large base of ABC and network production that Sydney has. Most of our quasi-network production has to be done on location, even though there is Paradise Beach at the Studios, and some other Nine and Seven programmes happening up here. But they are more magazine and documentary-style. The problem with the Studios is that it is totally booked for the next year and longer. There seems to be at least 14 confirmed productions coming into Queensland in the next year or so, which means that there are limitations as to how much can be done. Still, there is an interesting range of projects coming through, some of which have definite pre-sales and confirmed money. Some are still waiting on the FFC, but I see no reason why any of those projects wouldn’t happen. They are all fairly advanced and ought to be made. [See Likely Queensland Production Slate, page 58.]

What are the Queensland element requirements for receipt of monies from Film Queensland?

If you are applying for development money, anybody from around Australia can make application, as long as they stick to four basic elements and get two of them right. The show has a Queensland
writer, it’s Queensland produced, has a Queensland image - in other words it’s clearly about Queensland - and can be shot on location. You should in theory get two of those. However, I’ve known projects occasionally not to quite get past the two, for reasons such as co-production, etc. Our general policy is to support co-production, specifically of projects which are of demonstrated financial benefit to Queensland. So, we do welcome applications from interstate, particularly if there can be some demonstrated Queensland element to the show. That means more than just saying, “The script has a few palm trees in it. Do you want us to shoot it in Queensland?” That’s not of great interest to us, even though, if the project is something we all love, we will try and be as accommodating as possible. At the same time, the emphasis and priority is always given to Queensland. As there is a reasonably well-established producer and writer community in Queensland, that community deserves our support first. And that goes specifically in respect to the new $750,000 Equity Fund. It is available to Queensland producers and directors. We would like to continue to support interstate projects, though perhaps in a more formalized way through those various state agencies, rather than with individuals who, it’s probably fair to say, have been knocked back in another state and have come to
Queensland with the project. You can almost see the white-out over the change of “Sydney” to “Brisbane”. I’m anxious to facilitate more co-funding ventures between film agencies in other states and the AFC. I welcome discussions in relation to projects where we can all get together and work collaboratively.

What about the RFF fund?

RFF is available to anybody. There are certain requirements that relate specifically to how much money has to be expended in the state. Generally speaking, we are looking for about 50% of the total below-the-line costs to be expended within Queensland. That I think allows enough flexibility to embrace co-productions, but also demonstrates that we are looking to see some clear financial return to the state in exchange for that loan fund. In theory, the RFF is available to overseas producers as well. However, it’s fair to say as a matter of policy, and also as a matter of slightly limited resources, we have not widely advertised the availability of the Fund overseas. We haven’t really had to anyway, because most of our clients to date, particularly in terms of location shooting, have come from the U.S. and virtually 99.9% of those films are fully funded by the time they reach our shores.

You mentioned a strong local producer base. How successful do you consider the relocation of four interstate producers to Queensland?

Jonathan Shiff has been great. He’s established Westbridge in North Queensland, and he’s produced Ocean Girl up there. We’ve seen some of the early shows and like them a lot. Then there are Ross Dimsey and Damien Parer. Damien’s finished Rough Diamonds and he has pre-sold a documentary series, Sex and Civilisation, which ought to go into production soon. He’s also likely to produce Over the Top with Jim in the next year. He has another two features which look very strong. Ross Dimsey has three projects which I think look very healthy.
Except for the problem with London Films [which experienced financial problems in England], Ross would have had a show up this year, without a doubt. Rosa Colosimo regrettably has problems with Red Rain. However, she has our support and I am sure she will commence production in Queensland next year. So, in terms of our investment in these individuals, I think the scheme is worthy of a second look. However, if you look at our programmes of assistance this year, you won’t see this programme. We haven’t advertised for producer applicants simply because we want to consolidate and work closely with the existing recipients. It is important to note that this fund has always been available for local producers as well. It was designed to help producers in the same way that Film Victoria has with its fund. That has only been used to help Victorian producers, but in Queensland the fund has also been used to encourage producers to think about Queensland as an alternative production base. We were able to demonstrate to those individuals that Queensland is a good place to produce. That’s why I think we will continue the fund and it may reappear next financial year, with some modifications. That could mean that the actual quantum of money available may be increased. If that were to happen, it would probably mean a more tightly-structured package - in other words, with clear performance indicators and production horizons - than the ones that exist at present. We are taking examples out of the New Zealand book there, and also a couple of other examples I’ve heard about in other parts of the world. At the same time, we have been lucky because there are some up-and-coming Queensland producers.
We also have a Producer’s Support Scheme. There were four recipients of funding this year: Phil Bowman, Coral Drouyn, Mark Chapman and Phil Warner. They all have fine projects, some with pre-sales attached, and some in quite advanced stages of development. More producers have been moving up into the Studios environment as well, such as Jock Blair. We have also had two long-term trainees: Brett Chenoweth and Joe Porter. Joe is now the production manager on Paradise Beach, and Brett is working with Nick McMahon in an executive producer role. So, we have in the context of the Studios several producers who are bubbling away down there with a whole range of projects. It’s fair to say that the producer base in Brisbane and in Queensland generally is widening dramatically. I don’t know exactly how many people we are talking about, but up to about a dozen or so active producers are working within the state, which is a lot better than in 1988 when we had two credited drama producers: Ken Merthold and Mike Williams. Ken produced Contagion and Jackson’s Crew.

You mentioned co-productions with other government bodies. Film Victoria’s feature production pretty well exists only in cohorts with the AFC. Do you envisage similar arrangements with the AFC, particularly on low-budget films?

I hope so. We had a good example of doing something with the AFC this year with Broken Highway, which I think is a good film. That was an example of Film Queensland and the AFC getting together, and we’d like to do more of that. I’ve sent loud and clear signals to Cathy Robinson [chief executive, AFC] about this and we’d obviously like to talk to John Maynard as well. The same goes in relation to the state base. Whenever I get a chance, I always talk to Greg Smith about these notions. I have also spoken with Valerie Hardy when she was in Adelaide, but she is now at Network 10.
There is obviously a great synergy in terms of the SAFC and Queensland to perhaps shoot on location here, and do the post-production and studio work down in SA. I see nothing wrong with encouraging that type of activity with any state. Victoria is a good example. It has studio and excellent post-production facilities. We came a little bit close with Muriel’s Wedding this year, but it didn’t happen. Next time.

How do you regard the balance of monies spent in the federal and the state spheres?

They’ve got it all and we want it!

The state bodies have always been seen as secondary because they have much less money. But, some would argue that a disparate amount of energy and initiative comes out of the state sphere.

Exactly. And it’s fair to say that it has been a popular pastime in Queensland to suggest that we don’t receive enough federal funding. I don’t want to jog over fairly well-worn turf, except to say I genuinely believe that if agencies like the AFC worked more closely with state bodies, and really made a very positive commitment towards the establishment of small branch offices in the states, then the variety and overall texture of the programmes we see come from the AFC would improve. I consider there is far too narrow a
perspective in relation to AFC funding at the moment. I’d like to see that changed, and it can only be changed by demonstrating to the AFC that there are horizons which haven’t been explored yet. I think we can do that.

The AFC has always felt - though decreasingly so, given the recent films it has supported - that it should be reactive, responding only to what applications it receives. Film Queensland, on the other hand, obviously believes in actively helping to initiate productions and filmmaking teams.

What you are saying is absolutely true and I have affection and respect for a number of individuals working within the AFC. However, I think AFC policy needs to look a little more carefully at what really is available in terms of partnerships with the state agencies and what’s possible with individual filmmakers throughout Australia. John Maynard is starting to identify that and he has only been there a short time. If he can manage to achieve what I believe he is trying to achieve, I think there will be a lot of changes coming through in the AFC and they will all be positive for the entire Australian film industry.

Speaking as a “Mexican”, there seems to be two industries in Queensland: the one on the coast, with a large proportion of offshore-funded projects, and the more indigenous Brisbane one. Is that a fair generalization?

Yes, and it’s pretty fair to say it has been bad. It is something that worried the hell out of me for a long time. I felt there would never be a synergy between the Brisbane industry and the Gold Coast industry. But fortunately the barriers are breaking down, and I’m actually starting to see real signs that Brisbane individuals, Brisbane filmmakers, are actually starting to enter the Gold Coast Studio. There is a much greater feeling of partnership between the two areas than there ever has been. I think the first good sign was when Donald Crombie, who I think is one of Australia’s best directors, started to work on Time Trax.
The Americans thought he was a great director, and they keep ringing him up, asking him to do some more shows! That was a really good sign that there are talented individuals living in Brisbane. And it’s starting to happen more. There is a whole gamut of local people starting to have a real input in the total Studios complex. Another important sign was the transferring this year of the script office of Paradise Beach from Sydney to Queensland. Obviously that had been a weeping sore, particularly with our writing community. Now there is an opportunity for Queenslanders to participate. There will also be a lot more trainee directors and producers coming into the Studios system, most of whom are coming out of Brisbane. If we can get one or two more independent pictures produced at the Studios, that us-and-them mentality we have seen over the past few years will slowly break down. The Studios has been pretty generous in sponsoring things as well, like the Brisbane International Film Festival, the Young Filmmakers Awards and other sponsorship around the town.

The Studios is a bloody great mass sitting out there like a shag on a rock on the Gold Coast highway. It is pretty hard to ignore and it seems to be making a lot of money. That at times causes resentment amongst individuals who may be struggling to find a place.

I don’t think anybody should resent success.

What are the feelings about foreign productions? Do they still cause as much controversy?

Yes, and I think they always will. But I don’t think there is much point in dwelling on it. Foreign production in the Queensland context is here to stay. And if one listens to what other state agencies
are saying, and I have looked at the latest SA review, it’s the recommendation for the future. It’s becoming part of the Australian scene, like it or not. And the number of people who travel this highway from Brisbane to the Gold Coast each day to work at the Studios is growing. While the occasional piece of controversy still flares about this and that relating to the Studios, and there is certainly still an emphasis on foreign production, one can’t deny the infrastructure that is being attracted to the state as a result of that throughput. Throughput is a great thing and, to my mind, it doesn’t matter whether it’s coming from Queensland or wherever. The only way you can sustain laboratories, post-production facilities and continuity of employment for individuals is by throughput. What’s happened to the Studios, and what we’ve managed to achieve by attracting those levels of production to Queensland, has been fantastic from the point of view of obtaining the sorts of budgets I have. Could anybody really say that local productions alone in Queensland could justify a film office budget of about $6 million plus? That level of expenditure can only be justified by the fact we have more than $100 million worth of production in Queensland, which is flowing directly back into the state and indirectly back into the state coffers through taxation and so on.

The main argument against foreign productions is a cultural one. Should, in fact, government bodies, state or federal, be involved in trying to shape the film culture of a country in some particular way?

I agree and it’s one of the reasons why we started the PFTC as well. There has often been confusion about the difference between the PFTC and Film Queensland. Film Queensland, as the Queensland government’s principal film funding body, has in its basic charter the attraction of foreign production, whether it be from the U.S. or Japan, North Asia, Europe or wherever.
To make life a little bit easier for all of us, in terms of efficiencies and perceptions and a lot of other things, we started PFTC. It is basically an organization that carries out Film Queensland’s location and facilities marketing role. Film Queensland itself doesn’t have any confusion in its own goals. Film Queensland is here to develop the Queensland film industry, including the development of Queensland film and television projects, and a whole range of creative and cultural areas that need to exist within the state. And it happens that one of its other programmes is location facilities marketing, which is handled by PFTC. The PFTC has a totally separate board of directors, a totally separate business plan, and has funding from other sources. It receives a fair degree of industry support, particularly in kind, and it’s also eligible to receive funding through organizations like Austrade. It is quite arm’s length from Film Queensland, but is still very much in accord with government policy as it relates to the totality of the Queensland film industry.

What is Film Queensland’s view on the push for changes to Australian Broadcasting Authority (ABA) regulations on foreign productions shot in Australia?

Yes. We have argued that some limited content be granted for the likes of the Mission: Impossibles of this world. We argued for the ABA to consider the introduction of a broadcasting policy not unlike the Canadian system. So far it hasn’t happened. We believe that in terms of the levels of Australian creativity that exist on shows like Time Trax - such as Australian line producers, directors, directors of photography, the amount of money which is expended in the country - that limited content should be made available. And till somebody can suddenly convince me otherwise, I’d like to think that the ABA could consider this sometime in the future.

CONCLUDES ON PAGE 58
OCEAN GIRL is two 13-part television series for children. It tells the story of Neri (Marzena Godecki), a young girl discovered swimming on the Great Barrier Reef with a humpback whale. The director of the first series is Mark DeFriest (Whose Baby?, G.P., etc.); the second, Brendan Maher (Dolphin Cove, Halfway Around the Galaxy and Turn Left). Ocean Girl was filmed in and around Port Douglas, including the Daintree rain forest. The underwater sets (the habitat for the whale) were done in Melbourne. The series, shot on 16mm but post-produced on PAL 1-inch videotape, cost $3.58 million. Production finance came from the Australian Film Finance Corporation, Film Victoria and Westbridge Entertainment. Script development
MADE IN MELBOURNE AUSTRALIA

Director Nadia Tass and cinematographer David Parker have shot all their hit films out of their Melbourne production offices, from Malcolm to The Big Steal and now the television series Stark. “Melbourne offers a unique blend of Australian and European features”, says Nadia. “From its superb Victorian architecture to its rich green parklands, from its extraordinary tradition of music, comedy and art, we are able to draw on a wealth of talent both in front of and behind the camera. The city is steeped in culture - every turn presents another visual delight.”

“Stark locations ranged from the Australian outback to New York streets, from corporate headquarters to sleepy seaside suburbs - all were available to us in Victoria”, adds David. “The light here is stunning - day after day of light cloud cover gives a sophisticated, mellow look with the minimum of fuss. Tramcars weave through streets lined with Victorian houses, beaches stretch for 90 miles, untouched turn-of-the-century townships nestle below the rugged mountains of the Australian Alps. With first-class crews, laboratories and studios, why be based anywhere but Melbourne?”

FILM VICTORIA 49 SPRING STREET MELBOURNE VICTORIA
TELEPHONE 61 3 651 4089 FACSIMILE 61 3 651 4090
The Village Roadshow group of companies is unique in Australia. It is the only completely-integrated audiovisual entertainment company, having involvement in studio management, production of both film and television, film distribution and exhibition, television distribution, video distribution and movie theme park management. Its approach to internationalization is also unique in that the main thrust of its strategy is to attract overseas or ‘offshore’ productions to its Warner Roadshow Movie World Studios at Coomera, near the Gold Coast in south-east Queensland. It also has a satellite production company, Roadshow Coote & Carroll, producers of G.P. and Brides of Christ, which makes programmes mainly oriented to the local market. But while significant in critical and cultural terms, Roadshow Coote & Carroll is not economically significant in the context of the whole company.
The international strategy of the Village Roadshow group raises particular policy and regulatory issues. The present thrust of the government’s regulatory policy for television, expressed in current Australian content rules for commercial television, does not sit well with Village Roadshow production strategies and it has been active in lobbying the government for a relaxation of the rules to cover the sorts of projects it is involved in. This situation adds fuel to the debate about whether Australian content regulation is intended simply to provide jobs for Australian personnel, whether it is intended to foster an Australian production industry or whether, finally, it has a primarily cultural thrust - and what the connection between these elements is.

Village Roadshow was founded by Roc Kirby in the mid-1950s as an exhibition organization, beginning with a chain of drive-ins.1 In 1968, current managing director, Graham Burke, and Kirby founded Roadshow Distributors, the key to Village Roadshow’s overall success as a company. In 1970, Roadshow Distributors signed an exclusive agreement with Warner Bros. to distribute Warners pictures in Australia, an association that was to prove extremely beneficial to the company’s expansion. The company quickly developed and by the mid-1970s had challenged the traditional exhibition duopoly of Hoyts and Greater Union (the latter owning one-third of Village Roadshow).
In the early 1970s, Village Roadshow established a production arm with then prominent director-producer, Tim Burstall. The company, Hexagon Films, produced the Alvin Purple films2, Petersen (Burstall, 1974) and A Faithful Narrative of the Capture, Sufferings and Miraculous Escape of Eliza Fraser (Burstall, 1976), but went into abeyance in the late 1970s. At this stage, managing director Graham Burke and NSW manager Greg Coote took a chance on a couple of enthusiastic youngsters, George Miller and Byron Kennedy, and part-financed and distributed Mad Max (George Miller, 1979). Its success led to a sequel, Mad Max 2 (George Miller, 1981), which under the name The Road Warrior had enormous success through international release by Warner Bros. A third in the cycle3 was fully-financed by Warner Bros. and launched the Hollywood career of its star, Mel Gibson. This type of success is pointed to by Village Roadshow as a model of how the Australian industry could develop. As this example reveals, there was a close relationship between Village Roadshow as Australian distributors and Warner Bros. as international distributors. There was also a relationship between producer Matt Carroll and Village Roadshow. The latter had been formed during the 1970s when Carroll was a producer at the South Australian Film Corporation and Village Roadshow had distributed a successful string of films produced there, including, for example, ‘Breaker’ Morant (Bruce Beresford, 1980). These relationships were cemented in the early 1980s when Greg Coote became managing director of the TEN network and took it close to being the top-rating network in Australia for a short time. Its strategy was a combination of top-rating Hollywood movies (for example, Superman4) and prestigious mini-series, usually produced by Kennedy Miller, including The Dismissal, Bodyline and The Cowra Breakout.
After a couple of years at the head of the TEN network, both Coote and Carroll departed, Coote to return to Village Roadshow, but now as Los Angeles representative, and Carroll to head the production company founded by the two in 1984, Roadshow Coote & Carroll. The latter company would be a vehicle for high-quality television production; it began to make tele-features such as The Perfectionist (Chris Thomson, 1985) and Archer (Denny Lawrence, 1985) and mini-series like The Challenge (the story of Alan Bond’s America’s Cup challenge) and The First Kangaroos, the first official co-production Australia was involved in.
Village Roadshow-Warner Roadshow
Village Roadshow continued to be a successful exhibition and distribution business. In the mid-1970s, it had added television distribution to its stable of activities, supplying mainly movies to the networks. By the mid-1980s, exhibition had recovered from the slump of 1983-4 induced by the introduction of home video to Australia and Village Roadshow had itself developed a highly-successful video distribution arm. Like other exhibitors, it had rationalized considerably, closing drive-ins and old-fashioned suburban cinemas and moving into the multiplex business. The mid- to late-1980s was a time of considerable new investment in bricks and mortar, but also in streamlined and automated projection systems which cut labour costs. This paid off for Village Roadshow; it has been a profitable business for most of its life. In the late-1980s, the distribution arm of Greater Union amalgamated with Village and today Greater Union and Village Roadshow are joint owners of the distribution and multiplex businesses (in which Warner Bros. also has a stake). In 1986, the American independent producer, Dino De Laurentiis, who specialized in studios in out-of-the-way places (his other one was in South Carolina), persuaded the Queensland government to give him a low-interest loan to build a studio on the Gold Coast. This duly happened and De Laurentiis was set to produce the multi-million dollar special effects picture, Total Recall (to have been directed by Bruce Beresford), when the world-wide stock-market crash occurred. The bottom fell out of De Laurentiis’ distribution business and the studio appeared to be threatened.5 Village Roadshow made the decision to buy the studio in a joint venture with Warner Bros. The studio was seen as the heart of a bigger complex which included the Movie World theme park.
Faced with the prospect of a white elephant on their hands and an unpaid loan, the Queensland government continued the favourable deal it had extended to De Laurentiis, and the Warner Roadshow complex on the Gold Coast was born. Thus came into existence Australia’s only fully-integrated entertainment company. The parent company, Village Roadshow Ltd, has a 50% stake with Warners in the Gold Coast Studios and the theme park, and has interests in other entertainment centres in the area not themed on ‘movie magic’. Warners, GUFilm Distributors and Village Roadshow each own a third of the multiplex business. In addition, the Nine Network has a 10% share in the parent company and the UK ITV franchise-holder for East Anglia, Anglia Television, has 17% . The latter relationship is a result of the fact that Roadshow Coote & Carroll has presold a number of programmes to Anglia. The Village Roadshow organization has two production arms, Village Roadshow Pictures and Roadshow Coote & Carroll. The former is more important economically, though the latter has a much higher profile in Australia. This is because the huge invest ment in the Studios depends totally on the success of Village Roadshow Pictures in attracting production to them. Roadshow Coote & Carroll is a very small organization with very little investment and could continue quite comfortably outside the umbrella of the parent company. The studios were kicked off in 1988-9 by housing two off-shore television productions for the Hollywood studio Paramount. These were D olphin Bay and M ission: Im possible. They were enormously controversial and provoked conflict with the unions *, especially the then Actors Equity and the Writers Guild, and also a minor flurry with the Australian Broadcasting Tribunal (ABT). M ission: Im pos sible was brought to the Warner Roadshow studio by the team of Michael Lake and Nick McMahon, who had both worked previ ously for Crawfords in Melbourne, and had a long history of sales 24 . 
CINEMA PAPERS 96
and production management. They had wanted to buy Crawfords and take it in a more international direction, but had failed and had gone independent. Their idea was to attract overseas production to Australia, taking advantage of Australia's lower pay rates and less-complicated union regulations, its weak dollar, its high level of expertise and good locations. It was recently estimated that an hour of series drama can be made here at about 30% lower cost than a comparable one made in Hollywood (although there is great variability and volatility in the area of comparative costs of off-shore international production locations, with several countries - including Spain, Portugal, Mexico and South Africa - vying to attract the same productions as Australia). McMahon and Lake had approached Paramount and secured the Mission: Impossible deal, which they then took to the Warner Roadshow Studios. It was based on the programme formula that had been so successful during the 1960s and the new show was entirely conceived in the U.S. It was to use mainly U.S. principal actors, U.S. directors and all the early episodes used U.S. scripts. It was financed by Paramount with a pre-sale in the U.S. to the ABC network and in Australia to the Nine Network. The Australian involvement would be actors in bit parts and as extras and Australian production crew. The show was post-produced in Hollywood.*

* Michael Lake, who negotiated the deal with the unions, says he recalls no conflict. [Ed.]

In 1988, the Nine Network approached the ABT and asked that Mission: Impossible be approved as Australian drama for the purposes of meeting the requirement that was then in place that each station must broadcast 104 hours of such drama a year. In spite of a great reluctance to approve it, the ABT found itself in a position under the then definition of being unable to exclude it. The then Australian content (TPS 14) definition said an Australian production was one "wholly or substantially made in Australia" and the Nine Network made a successful case that the programme met the definition. The Nine Network then was able to use it to fulfil its Australian drama quota in 1989, which meant that 19 hours of Australian-conceived, -financed and -controlled drama didn't get made that year. This case played a major role in the ABT's thinking about strengthening the definition of Australian content when it determined a new standard at the end of 1989. This new definition excluded the Nine Network from getting Australian quota points for the second series. Since 1989, the Studios has attracted part or whole production of several feature films, a mixture of Australian and overseas productions, including The Delinquents (Chris Thomson, 1989), Blood Oath (Stephen Wallace, 1990), Until the End of the World (Wim Wenders, 1992), The Penal Colony (Martin Campbell, 1993) and Fortress (Stuart Gordon, 1993), and Paul Hogan's next Hollywood film, Lightning Jack (Simon Wincer), is being partly produced on the Gold Coast. It has also hosted a number of U.S. series, most of which haven't been shown here, including Animal Park, Savage Sea and a new production of Skippy, which also ran into trouble with the ABT when two episodes were refused C drama classification by its Children's Program Committee. The studio's recent major U.S.
series, Time Trax, unlike Mission: Impossible, used a considerable number of Australian creative personnel, including directors and post-production people, as it is
FACING PAGE: CLOCKWISE FROM TOP LEFT, GRAHAM BURKE, MANAGING DIRECTOR, VILLAGE ROADSHOW; GREG COOTE, LOS ANGELES REPRESENTATIVE, VILLAGE ROADSHOW; MICHAEL LAKE, GENERAL MANAGER, WARNER ROADSHOW MOVIE WORLD STUDIOS; MATT CARROLL, ROADSHOW COOTE & CARROLL. ABOVE: SKETCH OF WARNER ROADSHOW MOVIE WORLD AND STUDIOS ON THE GOLD COAST.
entirely post-produced here. It is, however, conceived, scripted in and entirely controlled from Hollywood. With 22 episodes in this series, Nick McMahon, managing director, Village Roadshow Pictures (Television), claims that $700,000 per episode will be spent in Australia, a total of more than $15 million. This by itself makes a dint in the balance of audiovisual trade and he argues that, with a multiplier effect of at least 5 (a contested figure, with usually half this figure being quoted), it brings a huge benefit to Queensland and to Australia. Various Village Roadshow management argue that not only do such productions have economic benefits, they also have creative and even perhaps cultural ones. They allow Australian creative personnel the opportunity to work with the best of Hollywood and thus increase their skills; it also gives them credits on projects with a high level of recognition in the U.S. market and thus increases their chance of working there. They point to recent examples of actors like Nicole Kidman, directors like Phil Noyce and a number of technical principals, particularly directors of photography, as examples of a 'second wave' of Australians making it in Hollywood. They argue that these benefits ought to be reflected in the recognition given to such productions by the regulator. In concert with the Queensland government and its key instrumentality, Film Queensland, they actively campaign in Canberra and with the Australian Broadcasting Authority (formerly ABT) for the Australian content regulations to be changed to a system, like the Canadian one, where

* Michael Lake says the deal was 50% Australian directors and 30% Australian crew, with Australian actors in the guest parts.
Village Roadshow-Warner Roadshow
LEFT: LOU (KYLIE MINOGUE) AND BROWNIE (CHARLIE SCHLATTER) IN CHRIS THOMSON'S THE DELINQUENTS, A VILLAGE ROADSHOW PICTURE.
points are given on a scale according to how many Australians are employed. The present rule disqualifies from full quota points a production which has both a foreign writer and director even though all the other elements are Australian. They argue that changes are necessary in order to raise the level of the licence fee that the Australian networks are prepared to pay for programmes. Licence fees have fallen drastically since the television industry got into severe financial trouble. In 1989, the typical licence fee for an hour of Australian series drama was $250,000; now it is $150,000 or lower. According to McMahon, three years ago the price paid for an hour of imported drama was $50,000; now it is $20,000, and this is all the networks will pay for programmes which do not qualify for the full Australian drama quota even if they are produced in Australia. Village Roadshow argues that the restrictions mean that Australia has lost important and expensive projects to other countries. It instances The Fatal Shore, a $20 million mini-series adaptation of Robert Hughes' book, which it argues is an Australian story and would have been made in Australia with all Australian cast and crew except writer and director. This is typical, it argues, of what will happen with increasing frequency in the future as other production sites - for example New Zealand, South Africa, Spain and Mexico - offer better incentives, better union arrangements and a more benign regulatory environment for off-shore production. Warner Roadshow Studios has also been largely instrumental in the establishment in 1993 of Export Film Services Australia (EFSA), an audiovisual export promotion lobby supported by Austrade, the Pacific Film and Television Commission, the NSW Film and Television Office, and a number of key post-production and
ancillary services companies. The purpose of EFSA is to create more opportunities for off-shore production in Australia, including Japanese but most significantly U.S. production, by focusing on Australia as a 'one-stop' off-shore services and facilities centre. Apart from these activities of studio and project management and associated services promotion, concerned mainly with attracting off-shore television productions, Village Roadshow has also engaged in very big budget film investment, but with problematic results. It is estimated that the Australian Film Finance Corporation and Village Roadshow may have lost several million each on two projects, Over the Hill (George Miller, 1992) and Turtle Beach (Stephen Wallace, 1992).⁶ This experience does raise the issue of whether the pursuit of a high-budget feature film strategy, which can only succeed if the elusive major U.S. release is secured and is successful, is a good idea for Australia. Even a cursory examination of the FFC's recent investment history suggests that it is the modestly-budgeted projects which succeed better, both aesthetically and financially. Within the Village Roadshow organization we see two very different internationalization strategies. (The example of Paradise Beach indicates a third, in that it is an unequivocally Australian programme from a regulatory viewpoint, but is primarily aimed at the U.S. market. Whether this third way is one to be developed further remains to be seen.) One (that of Roadshow Coote & Carroll) emphasises modest budgets and indigenous flavour and recognizes the necessity of overseas financial input while retaining a high level of local control and local specificity. The other strategy is to try to make Australia an attractive location for off-shore production, especially that from the U.S.
This recognizes that the whole world is, as it were, a site for international production and industrial, employment, financial and infrastructure benefits will flow from Australia having a competitive edge over other rival off-shore sites. This competitive edge will probably flow mainly from the depth of skill that has been developed in Australia since the beginning of support policies in 1970 and the fact that this continues because of the comparatively high volume of production carried out in Australia because of the maturity of the television industry, backed as it has been by Australian content regulation. The latter strategy divides opinion in the Australian film and television community. While most Australian actors oppose it because it creates little work for them, it is favoured by some technical personnel because it does create work for them. With these two groups of workers now belonging to a single union, the Media, Entertainment and Arts Alliance, there will have to be some rapprochement over this issue. It is also opposed by some sections of the bureaucracy and the 'culture lobby'; they argue that history tells us that an 'off-shore' strategy is fraught with danger. The growth of productions designed from and for somewhere else can edge out projects of a genuinely indigenous nature.⁷ On the other hand, the strategy has the strong support of the Queensland government, for whom it is an important plank in its regional industrial development plans⁸, and from some sections of the Commonwealth government and the federal bureaucracy, not to mention the Opposition. The answer to the dilemma presented by the Village Roadshow case is, we believe, to not confuse cultural support policies with those of industry development. Pressure is being applied to the federal government to relax the definition of Australian content for free-to-air and pay TV to allow this type of
production to count for quota. However, as the now defunct ABT was at pains to point out when it promulgated its new standard in 1989, the regulation is not primarily intended to bring about employment or industrial outcomes. Rather its purpose is a cultural one: to encourage the expression of local stories, idioms and concerns. (Having said that, however, it is probably the case that the regulatory thresholds for awarding points are outdated, having been calculated on the high fees licensees were paying for product in the late 1980s.) Off-shore production of, say, Mission: Impossible in Australia will obviously not do that. On the other hand, it may have industrial benefits and enhance the trade balance. If so, then let governments accord it the same benefits they might give to other deserving industries: exemptions from or discounted sales tax, payroll tax, favourable loans, and the kind of government-backed initiative that the EFSA represents. The history and analysis of the experience in countries which have had 'branch plant' film industries - for example, Spain, Canada, or the UK - tell us that acting as host to U.S. productions does little to foster indigenous film and television production.⁹ Experience both here and overseas seems to indicate that what is needed is a combination of both cultural and industry development policies.
Some of the material for this article is drawn from interviews with the following personnel from the Village Roadshow-Warner Roadshow group: Greg Coote, President, Village Roadshow Pictures (U.S.); Nick McMahon, Managing Director, Village Roadshow Pictures (Television); Michael Lake, General Manager, Warner Roadshow Movie World Studios; Kim Vecera, Business Affairs Manager, Roadshow Coote & Carroll. We thank them for their time.

References

Stuart Cunningham, Framing Culture: Criticism and Policy in Australia, Allen and Unwin, Sydney, 1992.
Susan Dermody and Elizabeth Jacka, The Screening of Australia Vol. 1: Anatomy of a Film Industry, Currency Press, Sydney, 1987.
Susan Dermody and Elizabeth Jacka (eds), The Imaginary Industry: Australian Film in the Late Eighties, Australian Film Television & Radio School, North Ryde, 1988.
John Giles Consulting, Film Industry Opportunities for the Gold Coast Albert Region: An Economic Perspective, Report for the Gold Coast Albert Regional Development Committee and the Department of Business, Industry and Regional Development, April 1992.
KPMG Management Consulting, A History of Offshore Production in the UK: A Report for the Australian Film Commission, April 1992.
Notes

1. See, for a brief history of the company, Susan Dermody and Liz Jacka, The Screening of Australia Vol. 1: Anatomy of a Film Industry, Currency Press, Sydney, 1987.
2. Alvin Purple (Tim Burstall, 1973) and Alvin Rides Again (David Bilcock jun. and Robin Copping, 1974).
3. Mad Max Beyond Thunderdome (George Miller and George Ogilvie, 1985).
4. Superman (Richard Donner, 1978).
5. Susan Dermody and Elizabeth Jacka (eds), The Imaginary Industry: Australian Film in the Late Eighties, Australian Film Television & Radio School, North Ryde, 1988, p. 50.
6. Graham Burke says the figure lost is far less than usually assumed, as 50% of Over the Hill was pre-sold to Rank, and Turtle Beach was widely pre-sold around the world. (Ed.)
7. For further discussion of this highly-contentious issue, see Susan Dermody and Elizabeth Jacka (eds), op. cit., pp. 117-130, and Stuart Cunningham, Framing Culture: Criticism and Policy in Australia, Allen and Unwin, Sydney, 1992, pp. 37-70.
8. John Giles Consulting, Film Industry Opportunities for the Gold Coast Albert Region: An Economic Perspective, Report for the Gold Coast Albert Regional Development Committee and the Department of Business, Industry and Regional Development, April 1992.
9. See, for analysis of the UK example, KPMG Management Consulting, A History of Offshore Production in the UK: A Report for the Australian Film Commission, April 1992.
Andrew L. Urban reports

Flattered by the attention paid to his project by the Australians at both a federal and state level, executive producer Jake Eberts and his team decided to shoot much of the US$20 million action-adventure film The Penal Colony in Queensland.
Jake Eberts: We were shown all the things we were looking for. I have no idea how much we saved by shooting in Australia, but what we shot here is unique. We're getting considerable benefits, such as the outstanding crew. We have the pick of the crew. The locations are not expensive and they are not hard to access; and yes, labour is a BIT cheaper.

The film is produced by the slightly-built but powerfully-successful Gale Anne Hurd, who made her investors millions with The Terminator (James Cameron, 1984) and Terminator 2: Judgment Day (Cameron, 1991), Aliens (Cameron, 1986) and The Abyss (Cameron, 1989), among others.

Hurd: The reasons we came here are basically these: I'd always wanted to come to Australia. It so happens that my lawyers have a connection with Queensland's Pacific Film and Television Commission, and they said, 'Oh, you can shoot it in Queensland!' Then the PFTC proved to us we could in fact do it - they showed us how.

(The Pacific Film and Television Commission is a government-owned company set up to encourage and assist production within the state.)
FACING PAGE: ROBBINS (RAY LIOTTA). ABOVE: ROBBINS AND MARRICK (LANCE HENRIKSEN). MARTIN CAMPBELL'S THE PENAL COLONY.
Hurd says unlike Mexico and Spain, which only ever offered a cheaper shoot, Australia offers two important additional elements: The language is English, and the crew is world class, which is not the case in Spain or Mexico. You have to import all your people. The talent in some cases is not just equal to but superior to anyone I've worked with, and there is a much better esprit de corps. Australians love movie-making, and love making it better. Besides, Spain doesn't have a rainforest.

The production used up a massive 400,000 feet of film stock, which was processed through the new Atlab facility situated within the Warner Roadshow Studios complex at Cade County on the Gold Coast. It was the first feature film to utilize the laboratory's new arm at the studios, saving the inconvenience of having to get rushes done in Sydney. Atlab's set-up at the studios (made possible by a Queensland Government grant) has substantially improved the Studios' appeal to producers. The Penal Colony pumped some US$14-16 million into the Australian film industry and the economy generally, through the provisions, services and equipment needed, plus the hundreds of cast and crew employed. An estimated 2,000 different people worked on the film, with up to 450 extras on a single day. (Although the bulk of the shoot was on Queensland locations, New South Wales also benefited. The NSW Film and Television Office had met with Hurd in Los Angeles during the 1993 American Film Market, and lured some post-production work to Sydney, as well as suggesting some coastal areas north of Sydney for some pick-up shooting. Remarks Greg Smith of the NSWFTO: "I think it led
Gale to a greater understanding of the depth and diversity of the Australian industry; that's probably why she's interested in coming back.") Many of the 150-200 crew are Australian, including senior creative people such as costume designer Norma Moriceau (who worked on the Mad Max and 'Crocodile' Dundee films), sound recordist Ben Osmo, armourer John Bowing, hair and make-up designer Lesley Vanderwalt and art director Ian Gracie. The sheer size of the production made it attractive to Queensland's PFTC, but, as chief executive officer Robin James points out, it was also appealing because of Hurd and Eberts. The fact that filmmakers of their stature in Hollywood are seen to be making big-budget features in Australia - Queensland in particular - is crucial for the longer term, as it gives others confidence. The Penal Colony was originally set amongst the windy, rugged cliffs of Ireland. But when the PFTC got wind of the project, it set about discouraging Eberts and Hurd from such "hackneyed" locations, and suggested they look instead at re-locating the script in a rainforest setting. Over a full 12-month period, the PFTC lobbied and faxed and phoned; Eberts and Hurd were still undecided, when another, unrelated, project came up for them to consider, which would have involved some coral reef shooting. With Hurd's enthusiasm for scuba diving (she has an interest in dive businesses in Micronesia), she was drawn to think again about Australia and the Great Barrier Reef. As often happens, that particular project was shelved.
ABOVE: CASEY (KEVIN DILLON). MARTIN CAMPBELL'S THE PENAL COLONY.
James felt he needed to do something to lock them into a jungle setting, and there is nothing like being there, seeing it, touching it, smelling it. So he invited the filmmakers to visit Queensland, and took them to Canungra in the south of the state, then up to the Warner Roadshow Studios on the Gold Coast, and further still to the North Coast and Cairns. They were sold. The massive movie factory was assembled in readiness to use the dry season of Far North Queensland, in Australia's winter. Clear blue sunny skies were guaranteed but nature had other plans. The dry season never happened and a new wet season soaked FNQ, with low clouds and persistent rain so bad it delayed the cane harvest, ruining much of the crop - and pestering the shoot. James says it is extraordinary that under the circumstances the production ended up on time, without the loss of a single day: "It is a credit to the crew. I doubt if there are crews anywhere in the world who could have done that." The script is an adaptation of Richard Herley's violent and visceral futuristic book, in which a Marine who kills his commanding officer - after repeated escape attempts from gaols - is sent to an island penal colony where the inmates are more or less left to fester in their own chaos. It is a tough place which has split into two armies: the Insiders, who live within a compound in a roughly ordered community, and the Outsiders, who roam and rampage wildly. In the process of fighting for his own cause, the insular killing machine of a man, Robbins (Ray Liotta), rediscovers some sort of humanity and recognizes the need for contact with others. The locals were recruited for the rugged battle scenes, and the only futuristic scenes are at the beginning of the film. The penal colony has a slightly mediaeval look, with industrial waste materials being recycled as clothing, weapons and even furniture. The extras and support roles were filled locally, but all principal roles were cast in the U.S.
Despite having a basic agreement on work conditions and other industrial relations matters, Gale Anne Hurd found herself in a battle of her own with Actors Equity - a skirmish she found distasteful: It seems Equity is too trigger happy, with instant threats of 'see you in court', without trying to sort out any problem calmly. It doesn't make one want to come back. The problem is not coming from [the cast or crew], but from the union. In the first two weeks of the shoot, they came with a list of allegations, all groundless. Maybe someone who was not hired wanted to cause trouble. They came and accused us of using the army as extras. That is absolute nonsense. We had one shot of them marching - it's hard to get extras to march like marines - and they knew about that in advance. But that's it. This clash was the only fly in the ointment as far as the producers were concerned, and PFTC's James says a meeting of concerned parties (including the PFTC and Equity) after the completion of production agreed to follow a more co-operative approach in future. Director Martin Campbell (Edge of Darkness) found the making of The Penal Colony an awesome and challenging task, not least because of the weather. But he also admires the crew and believes it is world-class. The film is not only complex in its twisting plot structure, but it calls for dozens of stunts and enormous organization. Campbell: By Hollywood standards this was a lot to achieve, which is one reason we were down here. All filmmaking is a battleground, and this is the worst I've ever had - and I've never done anything on this scale. Then there is always the challenge to make it more interesting - a bit more depth than usual for an action-adventure film. It does have something to say, but it would be pretentious to say it's more than a rollicking good yarn.
CHRIS LONG and PAT LAUGHREN

Australia's First Films
Part Six: Surprising Survivals

When cinema began, Brisbane was a tiny colonial capital with a population of about 95,000. None of its suburbs was more than five miles from its centre and it contained less than a quarter of Queensland's inhabitants. It was in the most decentralized of the mainland states, heavily reliant on mining and agriculture with only a small manufacturing base. Nevertheless, Queensland produced more of the surviving Australian colonial films than any other state. Their public premiere was delayed for 94 years, until the authors exhibited them at the Queensland State Library on 15 September 1993. This extraordinary saga has only just emerged from research funded by Griffith University in Brisbane, and is published for the first time in this article.

Queensland Film Data Sources

There were no Australian film industry magazines until the advent of Pathé's Weekly (later the Australian Kinematograph Journal) in 1910.¹ Before then, we had few permanent cinemas. The earliest Australian films were made and shown by touring companies, their output being advertised and reviewed in regional newspapers. The Brisbane Courier provides most of that city's available early film production data. The opportunities for obtaining confirmation or further material from other sources are limited. Queensland's enormous area and its tropical climate impeded the centralized archiving of its newspapers. Publishers were not legally required to donate copies to Queensland libraries until the late 1940s.² Brisbane's evening paper from the 1890s, the Telegraph, survives only in decayed hard copy at the John Oxley Library, and public access to it is forbidden. Both of Townsville's dailies of that period, the Bulletin and the Star, are entirely lost.³ Consequently, our attempts to assemble a Queensland filmography can only aspire to completeness.
Queensland production begins: G. Boivin placed this announcement in the Brisbane Courier, 7 September 1897, p. 2: "EXTRAORDINARY NOTICE - On account of the management having decided to take some views of Queen-street to-day (weather permitting) at 12.30 p.m., in front of the Telegraph Buildings, there will be NO MORNING EXHIBITIONS TO-DAY."
G. Boivin: First Queensland Filmmaker

When the Lumière company's operator Marius Sestier left Australia in May 1897, one of his cinématographes was bought by a Mr G. Boivin, who put it on show in Brisbane from 3 May to 26 June 1897.⁴ He later re-opened in a converted shop near the Telegraph newspaper building in Brisbane's Queen Street on 31 August 1897, showing films of Queen Victoria's Diamond Jubilee Procession (London).⁵ On 7 September 1897, Boivin used his Lumière cinématographe to shoot Queensland's first film, showing Queen Street's lunchtime traffic from the front of the Telegraph building. Reports suggest that several "local pictures" were taken before Boivin concluded his Brisbane season on 18 September 1897.⁶ He announced his intention of returning to Brisbane early in 1898 to show these efforts⁷, but no report of their exhibition has been traced. On 30 September 1897, Boivin commenced a three-night season at Rockhampton's Theatre Royal, including several Australian film titles in his programme.⁸ Excluding those attributable to Marius Sestier, most were probably Lumière company films from France, re-titled to imply local origin:

ORIGINAL LUMIÈRE TITLE (after Georges Sadoul)    BOIVIN'S TITLE (from Rockhampton Bulletin)
(Cat. No. 27) Concours de boules                 A Game of Bowls in Sydney
(Cat. No. 95) Tigres                             Tigers in Adelaide Zoo
(Cat. No. 40) Démolition d'un Mur                Breaking down a Shed in Sydney
(Cat. No. 40) Démolition d'un Mur                Breaking down a Wall in Melbourne
(Cat. No. 13) Balançoires                        On the Swings in Melbourne
These misrepresentations, and the absence of the Queen Street film from the Rockhampton programmes, throw doubt on the success of Boivin’s Brisbane productions. Was the film successfully processed and exhibited? Was it only a publicity-stunt? Was there really any film in the camera?
FACTS AND FABLES FROM COLONIAL QUEENSLAND

BOIVIN VANISHES

Boivin's tour has not been traced beyond his final Queensland appearance at Rockhampton on 2 October 1897. An unidentified Lumière cinématographe shown at 182 Pitt Street, Sydney, in December 1897 may have been his.9 Alternatively, Boivin may have sold his machine to Alfred Mason. On 23 November 1897, Mason advertised "Lumière's Improved Cinématographe" (improved, in that it projected both slides and movies) at Rockhampton's Theatre Royal.10 The show was inexplicably postponed until 15 December 1897, when he exhibited the 1897 VRC Derby and 1897 Melbourne Cup, probably shot by A. J. Perier of the Sydney photographic supply house Baker & Rouse.11 Mason also advertised a film of Dancing Girls (taken at Government House, Brisbane), but this was probably another re-titled import. He subsequently moved to Brisbane with shows opening in Queen Street's Grand Arcade from 22 December 1897,12 but no further Queensland films were advertised. Until the 1897 issues of the Brisbane Telegraph can be examined, we may never know more about Boivin and Mason.
BOIVIN FILMOGRAPHY

(1) Lunchtime Traffic in Queen Street, Brisbane (shot 12:30 pm, 7 September 1897). Refer Brisbane Courier, 7 September 1897, p. 2 - announces film to be shot from front of "Telegraph" building at 12:30 pm that day. Brisbane Courier, 8 September 1897, p. 4, and 11 September 1897, p. 6, refer to "views" (plural) of Queen Street, and the intention to show them early in 1898. Same paper, 13 September 1897, p. 7, has a long report on Boivin's show.

ALFRED MASON FILMOGRAPHY

(1) Dancing Girls (taken at Government House, Brisbane). Refer Rockhampton Morning Bulletin, 14 December 1897, p. 2. Probably a French film, re-titled "with tongue in cheek"!

MELBOURNE RACING FILMS SHOWN BY ALFRED MASON

A. J. Perier, sales manager for Baker & Rouse in Sydney, recalled making films answering this description in The Sydney Morning Herald, 9 June 1922, p. 9. Mark Blow and E. J. Thwaites also covered these events. These may be Sestier's films of the 1896 Melbourne Cup and VRC Derby, misrepresented as the following year's races:

(1) V.R.C. Derby, Melbourne, 1897. Refer Brisbane Courier, 23 December 1897, p. 2.
(2) Start, Finish and Weighing-In of the 1897 Melbourne Cup. Refer Brisbane Courier, 23 December 1897, p. 23; Rockhampton Bulletin, 16 December 1897.
(3) Lady Brassey Decorating "Gaulus" (1897 Melb. Cup winner). Refer Brisbane Courier, 23 December 1897, p. 2.
(4) The Lawn, Flemington. Refer Brisbane Courier, 23 December 1897, p. 2.
(5) Arrival of Train at Flemington. Refer Brisbane Courier, 23 December 1897, p. 2.
(6) The Crowd at the (Melbourne) Cup. Refer Brisbane Courier, 23 December 1897, p. 2.
(7) Carriages Returning from the (Melbourne) Cup. Refer Brisbane Courier, 23 December 1897, p. 2.

Professor A. C. Haddon (seated) and Sidney Ray (kneeling) on the Cambridge Torres Strait Expedition, 1898. A. C. Haddon Collection, Cambridge University, courtesy AIATSIS Pictorial Collection, Canberra.
FIRST ANTHROPOLOGICAL FILMS: HADDON'S CAMBRIDGE EXPEDITION TO TORRES STRAIT, 1898

Sir Walter Baldwin Spencer's 1901 films of Australian Aborigines are often portrayed as the pioneering effort in the field. His effort was praiseworthy, but Spencer was following a precedent set in 1898 by his colleague Alfred Cort Haddon (1855-1940). Haddon's films were the first ever taken on a field expedition.13

Two years after graduating from Cambridge University in 1878, Haddon was appointed Professor of Zoology at the Royal College of Sciences, and Assistant Naturalist to the Science and Art Museum in Dublin. In this capacity, Haddon spent eight months on an expedition investigating the marine zoology of Torres Strait during 1888 and 1889. There, he became fascinated by the rapidly disappearing customs and ceremonies of the Islanders, spending most of his spare time noting details for subsequent publication. Several minor papers were subsequently published, but the research was inadequate to assemble a general ethnographic work on the region.14
Above: Frame enlargements from films made by A. C. Haddon courtesy of Ken Berryman, National Film & Sound Archive, Melbourne Office. Left to right: Malu-Bomai Ceremony at Kiam (c. 6 September 1898); Murray Island: Islanders Dancing in Dari Headdress (c. 6 September 1898); Murray Island: Islanders Dancing in Dari Headdress (No. 2; c. 6 September 1898); Murray Island: Fire Making (5 September 1898); Murray Island: Australian Aboriginals Dancing "Shake-A-Leg" on Beach (6 September 1898).
Left: 1897 Newman and Guardia movie camera, as used by Professor Haddon in the Torres Strait in 1898, had a convoluted film path causing films to jam under tropical conditions. Below: Sidney Ray recording Malu songs on Mer Island, Torres Strait, during Haddon's Cambridge Expedition in 1898. With two phonographs, a movie camera and a colour photo outfit, they were superbly equipped. A. C. Haddon Collection, Cambridge University, courtesy AIATSIS Pictorial Collection, Canberra.
Haddon therefore assembled a team of scientists, all subsequent leaders in their specialities, to go to Torres Strait in 1898 and make a thorough study of it. They were comprehensively equipped with the very latest scientific recording instruments. Sidney Ray, an authority on the languages of Oceania, the musicologist Dr C. S. Myers and the naturalist Dr C. G. Seligman used two wax-cylinder phonographs to make about one hundred records of Islander speech and song.15 These survive in the British Institute of Recorded Sound. Their photographic kit included equipment for taking stills, movies and even experimental colour photographs by the Ives and Joly process. These would have been the earliest colour photographs taken in Australia.16 The photography was done by Haddon and by a 21-year-old student with previous experience in Algeria and Egypt, Anthony Wilkin, who died of dysentery in Cairo only three years later.17 The psychologists and medical experts Dr W. H. R. Rivers and Dr W. McDougall completed the party. They reached Thursday Island on 22 April 1898 and spent almost seven months in the Torres Strait and New Guinea. Four months were spent in the Murray Islands, whose inaccessibility and relatively undisturbed culture made them particularly suitable for study. Two visits were made there, the first during May 1898, the latter commencing on 20 July and concluding on 8 September.18

HADDON'S FILMS

In March 1898, Haddon purchased a 35mm Newman and Guardia movie outfit in London, including 30 rolls of raw film 75 feet long, intending to reproduce Islander dances, ceremonies and customs.19 The dispatch of the film was apparently delayed by being inadvertently sent to Haddon's friend, Mr C. Hose, in Sarawak.20 As a result, filming did not begin until the last week of their second stay on Murray Island, after 1 September 1898. Another problem was encountered with the Newman and Guardia movie camera, which sustained damage in transit, causing the films to jam in the tropical climate. Only a few films were taken successfully. According to Haddon's diary21, the films were made by Haddon himself, possibly assisted by Anthony Wilkin:

5 September 1898: Tried to take cinematograph photo of fire making by Pasi, Sergeant and Mana [?] in morning.

6 September 1898: Tried to take cinematograph photos of Murray I. Kap in Australia corrobora (beche de mer men on board the lugger Coral Sea belonging Fred Lankester [...] Bomai-Malu cinematographed [?] at Kiam [...]

Haddon's journal covering the week of 1-8 September 1898, written while the expedition was packing for its departure from Murray Island, indicates that filming had only been a partial success:

[...] some rather important things turned up at the last [...] For example some Australian natives came in a beche de mer boat and I wanted to get a cinematograph of their dancing - and it was also only just at the last that we could get part of the Malu ceremony danced with the masks that had been made for me - but the dance was worth waiting for. I tried to cinematograph it but as has often happened the machine jams and the film is spoiled - I am afraid that this part of my outfit will prove a failure & the colour photography is I fear at present of little practical value. I have had many disappointments on this expedition, perhaps I was too sanguine.
Thursday 8 September [1898] we left Murray Island [per the "Niue"] at 10 a.m. [...]22

Haddon's fears about his films were ill-founded. On return to London, he had the few rolls shot on Murray Island processed by Newman and Guardia. Reporting on these on 28 June 1899, J. Guardia told him:

With respect to the Kinematograph, we are waiting for you to return the machine for repair, when we will report as to what has gone wrong with it. In the meantime, we beg to enclose a print from a strip of one of your films. We would submit that there is nothing much to complain of with a machine that produces work of this quality practically on the first trial and under admittedly unfavourable circumstances. We tested all the films, and have developed those that promise good results. We still have one or two more to finish.23

Although limited in both scope and duration, the surviving 4.5 minutes of Haddon's films continue to surprise modern audiences with their high technical standard. The material surviving matches the descriptions in Haddon's diary and journal, and there seems to be little missing from the print. Strangely, no screenings of the films by Haddon have been traced. The six volumes of Reports of the Cambridge Anthropological Expedition to Torres Straits, published between 1901 and 1935, contain virtually no mention of the films, other than a few frame enlargements (plate 29) in volume six. These show "the movements of the zogo le" (cult priests) from the Bomai-Malu ceremony, stated to have been shot at Kiam in the Eastern Torres Strait.24

INFLUENCE ON BALDWIN SPENCER

On 23 October 1900, hearing of Spencer and Gillen's forthcoming expedition to Central Australia, Haddon wrote to Spencer:

You really must take a kinematograph - a biograph - or whatever they call it in your part of the world. It is an indispensable piece of anthropological apparatus. Get an ordinary commercial one. If you order from London I think I would place myself in the hands of the Warwick Trading Company, 4 Warwick Court, High Holborn W.C. I have asked them to send you a catalogue and to write to you as well. I have stated what you want it for. I have no doubt that your films will pay for the whole apparatus if you care to let some of them be copied by the trade.25

Examination of the Warwick Trading Company film catalogue for August 1901 reveals that Haddon may have allowed one of his films to be "copied by the trade" in the manner he suggested:

Cat. 6250b. Panorama of Thursday Island, the Headquarters of the Pearl Fishing Industry. This little known island is very difficult of access, but from it the great majority of the largest and finest pearls are obtained. The view presented in the film embraces the jetty alongside which the sailing craft are moved as they return from the fishing grounds. In the background the conformation of the island is distinctly seen, whilst as the camera rotates a number of the pearling cutters are seen lying at anchor in the estuary. Length 75 feet [1 minute 15 seconds].

The film is not known to survive and the inclusion of the "pan" movement described is puzzling, as none of Haddon's known films show that he could "pan" to follow dancers' movements. However, Spencer was quick to follow Haddon's advice. On 1 December 1900, Spencer wrote to Haddon:

I am cabling home to the Warwick Co. to send me out the Biograph [sic] instrument. They wrote me by last mail saying that a catalogue was forwarded [...] I was in hopes that you would have given me some advice as to how much film to take with me as I have had no experience in this line and can get no help out here [in Melbourne].26

Spencer's work with the Warwick Bioscope in Central Australia during 1901 is well known.27 Many popular histories credit him as being the pioneer of these techniques, ignoring the Torres Strait precedent. Haddon reaped more tangible rewards. In 1900, he was appointed University Lecturer in Ethnology at Cambridge University, and in 1901 was elected to a fellowship at Christ's College.28

Haddon's films were stored at Cambridge until 1967, when the British Film Institute copied them.29 Prints are now held by the National Film & Sound Archive and AIATSIS in Canberra, and by Ian Dunlop at Film Australia in Lindfield. They are the oldest surviving Queensland films, and the oldest films of Torres Strait Islanders. As a result of the bêche de mer men's visit to Murray Island on 6 September 1898, they are also the oldest films of Australian Aborigines.

Walter Baldwin Spencer (1860-1929), Professor of Biology at Melbourne University and Director of the National Museum of Victoria, followed Haddon's instructions in the anthropological usage of motion pictures and sound recording. He took the usage of film on field expeditions much further than Haddon, shooting 3,000 feet of Aboriginal ceremonies and customs in the five weeks following 3 April 1901. Contacts and recommendations on film equipment in London were made for Spencer by Haddon. Photo from Life (Sydney), 15 October 1904, p. 1055, courtesy of Mr Clive Sowry.

HADDON'S TORRES STRAIT EXPEDITION FILMOGRAPHY

(1) Malu-Bomai Ceremony at Kiam (shot c. 6 September 1898). Three men in forest setting wearing leaf skirts; leading man wears the cardboard mask made for Haddon and last man holds a tailpiece. They dance in procession. Length 50 seconds at 16 f.p.s.

(2) Murray Island: Islanders Dancing in Dari Headdress (probably 6 September 1898). Three men in labalabas perform a processional dance on a beach. Camera jam occurs mid-shot and the dance re-commences. Length 70 seconds.

(3) Murray Island: Islanders Dancing in Dari Headdress (probably 6 September 1898). Unidentified dance, same camera position as (2), but with the camera panned slightly to the right. Three men dancing in procession on a beach. Length 21 seconds.

(4) Murray Island: Fire Making (shot 5 September 1898). Three men - Pasi, Sergeant and Mana - sit cross-legged on the ground, twirling a stick between their palms bearing upon a wood block (drill method). Length 30 seconds.

(5) Murray Island: Australian Aboriginals Dancing "Shake-A-Leg" on Beach (shot c. 6 September 1898). Four visiting Australian Aborigines wearing labalabas clap, then dance, then clap again. A fifth man beats rhythm by hitting a long pole with a branch. Film in three sections with cuts separating them. Same locale as items (2) and (3). Length 70 seconds.

Above, left: Frederick Charles Wills, Chief Artist and Photographer, Queensland Department of Agriculture, 1897-1903. Photo from Queensland Agricultural Journal, June 1901 (opp. p. 400). Courtesy Peter Lloyd, Queensland Department of Agriculture. Above, centre: Henry William Mobsby, Assistant Photographer, Queensland Department of Agriculture, 1897-1903, Chief Photographer 1904-1930. Wills' assistant on the making of the 1899 films. Above, right: Lumière Cinématographe No. 296, 1898, used by Wills and Mobsby of the Queensland Agriculture Department to shoot the world's first governmental films, 1899. Currently held by Queensland Museum, and still in working order. Photo by courtesy of Mark Whitmore, Queensland Museum. Right: Wills' Lumière camera opened to show the film gate and lightproof feed magazine with 75 foot film load capacity. There was never any viewfinder on this camera. The glass window behind the film gate (top right) provided a view of the image on the film itself before shooting commenced to indicate the field of view. Photo courtesy of Mark Whitmore, Queensland Museum.
QUEENSLAND GOVERNMENT FILM PRODUCTION: 1899

Immigration to the colony of Queensland was promoted by a touring lecturer in Britain named George Randall, working under the direction of the Queensland Agent-General in London, Sir Horace Tozer.30 In the late 1890s, Randall illustrated his lectures with lantern slides prepared in Queensland by the official photographer of the Department of Agriculture, Frederick Charles Wills. Wills was young and enthusiastic, actively involved with the Queensland Amateur Photographic Society, and a frequent contributor to Australia's photographic magazines.31 Appointed to the Department of Agriculture as Official Artist and Photographer on 13 March 1897,32 his innovations were constantly resisted by conservative co-workers. For instance, in March 1898 the Queensland Agricultural Journal's editor tried to eliminate its pictorial content.33 Fortunately for Wills, a Ministerial decision overruled this. In October 1898, Wills suggested that Randall's lectures on immigration would be enhanced by "lantern slides [...] prepared on
the Lumière Cinématographe principle".34 The imminence of the prestigious Greater Britain Exhibition at Earl's Court in 1899 provided an incentive to give this project a trial. Many of Wills' lantern slides were exhibited there, though the films were not completed in time for it. Queensland's Chief Secretary's Department agreed to finance the motion picture venture for a year starting in October 1898,35 and the world's first governmental film production project was launched.

In December 1898, the Minister for Agriculture instructed Wills to go to Sydney to obtain a Lumière Cinématographe and the expertise to operate it.36 Baker & Rouse imported the gear, and early in 1899 Wills made about five trial films with it in Sydney.37 Success was reported in the Australasian Photographic Review on 21 March 1899,38 the Sydney films including scenes of Redfern railway station and various types of ferry transport arriving at Milson's Point. Few earlier films of Sydney survive today.

On his return to Brisbane in March 1899, the Department gave Wills an assistant.39 Henry William Mobsby (1861-1933) helped Wills to produce and process many of the 1899 films. After Wills' resignation in 1903, Mobsby continued to produce Queensland government films sporadically until he retired in August 1930.40

During Wills' "still" photography excursions around Queensland for the Department of Agriculture between March and October 1899, he produced about 30 one-minute films on their Lumière cinématographe. Many of these illustrated agricultural processes in an attempt to attract British farmers to the colony, which was the immigration lecturer George Randall's primary concern. However, Wills also filmed historical events which can be readily dated. Queensland's Colonial Governor, Lord Lamington, is seen arriving at the opening of Colonial Parliament in Brisbane on 18 May 1899 - the oldest of Wills' Queensland films which can be
dated.41 On the evening of the following day, Wills gave his first film show to the Queensland Amateur Photographic Society, exhibiting "some very good specimens of locally taken cinematograph pictures".42 These probably included the surviving views of Brisbane's Roma Street station, Queen Street and Victoria Bridge.

Between June and August 1899, the Lumière cinématographe accompanied H. W. Mobsby on the tour of the government motor vessel "White Star" to Torres Strait.43 Queensland's Home Secretary, Justin Foxton, received reports of problems in the pearling industry, and of abuses of the natives in the Torres Strait. The subsequent expeditionary party included Aboriginal Protector Roth, Foxton, Foxton's wife, Thursday Island Administrator John Douglas, Dr Tilston, Police Chief Parry-Oakden and Mobsby. The two surviving films of the expedition show the Channel Rock Light Ship receding astern off the Townsville Coast, and Foxton receiving a gift of bananas from Islanders on either Darnley or Murray Island in the Eastern Torres Strait. Mobsby also attempted to take a film at Weipa when Foxton officially gave it that name, but the attempt was aborted when "an expected corroboree fell through owing to shortness of time".44 An extensive album of "still" photographs taken by Mobsby on this trip, probably intended for presentation as lantern slides to complement and supplement the films, survives in the John Oxley library.45 The expedition concluded on 5 August 1899 when the party returned to Townsville.

The greater part of Wills' surviving films were apparently taken in the Spring of 1899, following Mobsby's return to Brisbane, and illustrate aspects of wheat harvesting on the Darling Downs, sugar harvesting at Nambour, and of stock management. These are easily the earliest Australian industrial documentary films, and are among the earliest films of their type in the world.
Many of the 60-second rolls are constructed in sequences of two or three camera set-ups, and the rolls are intended for exhibition in a logical order to construct a narrative of the agricultural processes shown. "When a subject takes more than one film", Wills casually observed in 1900, "they are joined with the aid of amyl acetate with some of the celluloid dissolved in it."46 Wills made the earliest surviving Australian films exhibiting sequential editing techniques. Especially in the wheat harvesting series, the shots are superbly composed, logically sequenced and include a "jump cut" from a wide view of a wagon bringing stooks to the thresher into a close view of operations at the thresher itself. In the Nambour sugar harvesting series a similar "jump cut" takes us from a wide view of a horse-drawn tramway bringing a load of cane to the mill's conveyor, cutting close into a scene of trimming operations at the conveyor.

The sugar harvesting series is particularly important for its inclusion of "kanaka" labourers at work - cheap Melanesian manpower imported to work under conditions resembling slavery in the Southern states of the U.S. The usage of this labour force ceased with the advent of Australian federation, and Wills' films are among the few surviving reminders of this shameful chapter in Australia's history.47

Wills showed an artist's care in his methods of composition and working, outlined in a lecture he subsequently gave on film-making:

There is artistic taste needed in choice and arrangement of subject as much, and perhaps more, than in ordinary photography. I find it best to rehearse the scene I wish to photograph whatever it might be, that is when persons are to play any part in the picture, as those unaccustomed to photography often do the wrong thing at the wrong time, and possibly cause a film to be wasted, although I have been very fortunate myself, as out of thirty negative and thirty positive films which I have exposed only two negatives and one positive have been spoilt. It behoves one to be careful when each film costs 22/6d.48

Of his "out-takes", one negative is included in the collection which appears never to have been printed. It shows a close view of railway tracks receding from the rear of a railway carriage in rural Queensland. Wills apparently misjudged the coverage of his camera from the rear of the train, pointing it downwards too far to record any meaningful scenery.

North Shore Steam Ferry, Passengers Disembarking (filmed at Milson's Point, c. February 1899). Frame enlargement of one of Fred Wills' films, shot as a trial while acquiring the Lumière machine from Baker & Rouse in Sydney. Castellated turrets of Government House can be seen on the opposite shore, with Bennelong Point and Fort Macquarie on the left. Photo courtesy of Meg Labrum, National Film & Sound Archive, Canberra.
Fortunately, two successful travelling shots of this type do survive in the collection, one showing scrub in the vicinity of the railway at Eumundi (near Nambour) and the other showing forests in the Atherton tablelands on the Cairns-Mareeba line. In 1899, the concept of a camera with a moving point of view was unprecedented in Australia, and Wills' "phantom train rides" attracted favourable comment.49

A constant rule of documentary production is that the sponsor should be kept happy. Wills did well to include his employers, the Queensland government, in a film of them boarding the government paddle steamer "Lucinda" for a Ministerial banquet. It was shot at a Brisbane River wharf just behind the (then) new Agriculture Department building in William Street. Highgate Hill can be seen across the Brisbane River. The occasion is thought to be their outing in connection with the Queensland Federation League on 14 October 1899.50

Wills' last and most impressive films recorded the departure celebrations of the First Queensland (Cavalry) Contingent for the Boer War in South Africa at the end of October 1899. The Queensland Mounted Infantry, 14 officers and 250 men under Colonel P. Ricardo, are seen receiving a spirited send-off during their final parade past Post Office Square in Queen Street on 28 October 1899.51 Later sequences show their Review before the Lieutenant-Governor Sir Samuel Griffith on the Brisbane Domain that afternoon,52 and the loading of their reluctant horses for South Africa aboard the troopship "Cornwall" at Pinkenba on 31 October 1899.53 This was the first occasion on which Queensland troops went to war, and it was attended with forcefully jingoistic displays of patriotism, as the film indicates. No other films of Australian Boer War troop departures are known to survive.

At the end of October 1899, the Chief Secretary's financing of the film experiment ceased. The value of this film to Queensland now had to be demonstrated.

CONTINUES ON PAGE 59
A new series which provides a forum for revisionist studies of the classic works of the cinema.

Four new titles:
The Films of D. W. Griffith, SCOTT SIMMON. $29.95 Paperback. ISBN 0 521 38820 1
The Films of Joseph Losey, JAMES PALMER and MICHAEL RILEY. $29.95 Paperback. ISBN 0 521 38780 9
The Films of Vincente Minnelli, JAMES NAREMORE. $27.50 Paperback. ISBN 0 521 38770 1
The Films of Paul Morrisey, MAURICE YACOWAR. $29.95 Paperback. ISBN 0 521 38993 3

Already published:
The Films of Roberto Rossellini, PETER BONDANELLA. $29.95 Paperback. ISBN 0 521 39866 5
The Films of Wim Wenders, ROBERT PHILLIP KOLKER and PETER BEICKEN. $29.95 Paperback. ISBN 0 521 38976 3
The Films of Alfred Hitchcock, DAVID STERRITT. $25.00 Paperback. ISBN 0 521 39814 2
The Films of Woody Allen, SAM B. GIRGUS. $25.00 Paperback. ISBN 0 521 38999 2
Avant-Garde Film, SCOTT MACDONALD. $29.95 Paperback. ISBN 0 521 38821 X
FEATURE REVIEW

Shakespeare for everyone

OTHELLO, MACBETH AND MUCH ADO ABOUT NOTHING

BRIAN McFARLANE
In the past few years there has been a heartening flow of Shakespeare-derived films, and they have been films which seem to have taken notice of the fact that he - Shakespeare - was not writing for the academy, but for large, enthusiastic audiences. Kenneth Branagh's Henry V (1990) settled in for a long, comfortable run at Hoyts complexes and, in the following year, Franco Zeffirelli had his first box-office success in years with Mel Gibson as Hamlet. In 1992, Gus Van Sant's My Own Private Idaho, a moving and eloquent re-working of themes from Henry IV Part I and II, at least courted popularity by casting Keanu Reeves and River Phoenix in the leads. Now we are in the situation of having Branagh's exuberant Much Ado About Nothing opening as a mainstream release at the same time as Orson Welles' more than forty-year-old Macbeth (1949) and Othello (1952), in lovingly restored versions, are both showing in more limited releases, and preceding, though not by too long one hopes, Christine Edzard's As You Like It.

If Shakespeare is to be kept alive for the young and for the non-specialist audience, then increasingly the cinema seems the most likely medium. The theatre is becoming more and more a middle-class pleasure, and a restricted one at that, except perhaps for big musicals, or, if we are thinking of Shakespeare, the prestigious reaches of Britain's National Theatre or Royal Shakespeare Company, where a star performer like Anthony Hopkins can still produce a sell-out as King Lear. Television, demotic enough in its appeal, goodness knows, has adopted generally a rather staidly conventional approach to Shakespeare, cleaving more to the traditions of the stage than to the greater freedom offered by filming. The BBC Shakespeare series is a prime example (but by no means the only one) of this tendency, whereby actors in doublet and hose run up and down rostra and peer around columns, as if inhabiting an all-purpose Shakespeareland.

There are no doubt many challenges to be met in filming Shakespeare, perhaps none of them more demanding than that of rendering the stylization of his blank verse pentameters compatible with the intransigent realism of the screen's mise en scène. To retain a sense of the poetry at the same time as making it sound sufficiently conversational, as realistic as the settings in which it is spoken, has not always come easily to filmmakers and actors. And yet, challenging as this is, it needs to be remembered that what is at stake is no more than a convention, no more of an affront to what is "realistic" and acceptable than those moments in musicals when walking, talking characters suddenly begin to dance and sing in the streets.

The cinematic task of popularizing Shakespeare has passed from Laurence Olivier through Zeffirelli to Branagh, with more uncompromising sallies from the likes of Welles and Derek Jarman (The Tempest, 1979). Olivier, still working very much within the traditions of British theatre and surrounding himself with actors from the Old Vic and other theatre colleagues, scored a great succès d'estime with his morale-boosting Henry V (1944), and some considerable popular success, though not enough to ward off the American pun of "Hank Sank" as a comment on its mainstream reception in the U.S. His Hamlet (1948), drawing on contemporary film noir mood and technique, and Richard III (1955), engrossing if in a more academic mode, confirmed his position as the screen's most respected and successful adaptor to date.

ABOVE: OTHELLO (ORSON WELLES) AND DESDEMONA (SUZANNE CLOUTIER). ORSON WELLES' OTHELLO.

Nevertheless, it was really Zeffirelli who achieved what Shakespeare himself would almost certainly have approved of: that is, large popular audiences for his Richard Burton-Elizabeth Taylor version of The Taming of the Shrew (1967) and, above all, his Romeo and Juliet (1968). This latter caught the mood of youthful rebellion that was in the air and on the campuses in that year, and his handsome, then-unknown leads (Leonard Whiting and Olivia Hussey), a tastefully nude love scene, and a bunch of hot-blooded Veronese youth reacting to the high bright sun and against their dictatorial elders ensured the film's huge success. Zeffirelli's films belonged to a more overtly cinematic sensibility and practice than Olivier's. His 1986 Otello is of course Verdi not Shakespeare, and it was not until his 1990 Hamlet that he returned to the filming of the Bard. When he did, the result was popular as a star vehicle for Mel Gibson, and was a decent enough piece of work, but seemed to have nothing to say about the play, to have no point of view.

No one could accuse Orson Welles of - or praise him for - setting out to be popular. His version of Macbeth made on a shoestring for Republic, of all the most unlikely studios, met with widespread critical derision and public apathy when it appeared in 1948. Seen today, it appears as a fascinating, botched assault on a great tragic drama. Set among craggy, dripping caverns, against a vast cyclorama, it eschews the usual paraphernalia of screen realism and the result is that the drama is focused where it most properly belongs: in the mind of Macbeth himself. Welles, as director and star, gives us a Macbeth who seems cut off from the social and political world in which he acts, but this is a Macbeth who can make us powerfully aware of his fear that his bloody hand might well make "the multitudinous seas incarnadine, making the

RIGHT: PUBLICITY STILL: KATE BECKINSALE AS HERO AND EMMA THOMPSON AS BEATRICE. KENNETH BRANAGH'S MUCH ADO ABOUT NOTHING. BELOW: PUBLICITY STILL: GERARD HORAN AS BORACHIO, KEANU REEVES (DON JOHN) AND RICHARD CLIFFORD (CONRADE). MUCH ADO ABOUT NOTHING.

Cloutier), and in the opposite direction Iago (Micheál MacLiammoir) is dragged by the neck and placed in a cage which is then swung aloft, the object of contempt and revilement. All this takes place before the
green one red” . Much of the acting (especially
credits; no dialogue is spoken, but
Jeanette Nolan’s nagging Lady Macbeth) is in
music and the sound of guns, and
adequate to the point of being amateurish; much
the violent contrasts of imagery,
of the dialogue sounds like neither verse nor
have prepared the way for an in
conversation - indeed, is sometimes scarcely
tensely cinematic, far-from-conven-
intelligible - and the stylized setting is pitched
tional reading of the play. In spite of
uneasily between stage and screen. However,
the often out-of-sync sound-record
W elles’ own performance and the vision of the
ing and the scratchy nature of the
play it embodies mean that it is not a negligible
print, even in its restored form, it is
film and should make us grateful for the oppor
clear from the outset that this is a
tunity to see it again.
major piece of work.
W elles’ Othello, legendarily frustrated by fi
In swift, visual story-telling mode,
nancial problems and its filming subject to all
the tale of Othello’s courtship of the
manner of delays, is in spite of these vicissi
Venetian lady, Desdemona, the removal in the
tudes some sort of a masterpiece. As with
cause of war to Cyprus, and the sowing and
Desdemona’s infidelity, lago leading Othello
Macbeth, as with his glorious Chimes at M id
rapid germination of the seeds of jealousy in
through a labyrinth of passages and stairs, or
“ p roof” of la g o ’s insidious suggestions of
night (1966), arguably the greatest of all Shake
response to lago’s malign innuendo is accom
past fishing nets as Othello finally falls into his
spearian films, he has again been obdurately
plished with a fluidity that seems to belie the
trap and says, “ I’ll chop her into messes” : this is
true to his own vision of the play. He does not
film ’s fractured production history. Marvellously
a film full of eloquent compositions, but they are
make it easy fo rtho se unfamiliar with the plays
served by his multiple designers and camera
always at the service of the narrative and the
as he pursues his-own line on what it is that
men, Welles creates the ascendancy and fall of
drama. Equally, too, one can be moved by the
drives Macbeth or Falstaff or Othello. His search
Shakespeare’s simplest tragic hero in a series of
sudden simplicity of pain that informs the solilo
for the essential protagonist leads him to weave
finely-judged images.
in and out of the play’s structure until he finds
After a surprisingly low-key introduction to
quy “Farewell the tranquil mind ...” , as Othello, still, and shot from below, surveys the inner
one of his own that can sustain his idiosyncratic,
the vocal Othello in “ Keep up your bright swords,
wreckage of his life. The film ends visually where
wholly cinematic vision.
for the dew will rust them ” , the film suddenly
it began, with the cage, the procession and the
Othello, now being shown in a version re
offers a clear close-up of Welles that recalls that
parapets, images that now take on a new poign
stored by W elles’ daughter, Beatrice Welles-
sublime moment in The Third Man (Carol Reed,
ancy.
Smith, opens daringly on a close-up of the face
1949) when we first see his Harry Lime. Almost
Welles has made Othello a simple man, dig
of the dead Othello (Welles). The litter on which
wholly In close-up, too, and working quietly and
nified and brave, but fatally short on insight and fatally susceptible to lago’s manipulations. He is
he has been lain is picked Up and,.flanked and
persuasively against the potential bombast of
followed by a retinue of mourners and accompa-
the lines, he recalls the “round unvarnished tale”
a man whose descent into chaos has been
riied by a keening soundtrack, is borne up to the
with which he had wooed and won Desdemona,
swiftly accomplished, and the penultimate scene
parapets of a seashore castle. Another proces
Whose reactions are recorded in inserted close-
makes clear the basis for our pity for him. It is not
sion carries the body of Desdemona (Suzanne
ups. A little later, they are seen hemmed in by
true that he is “one not easily jealous” : the action
high b u ild in g s , o b
of the film gives the lie to, this: he is, though,
served from a balcony
“Perplex’d in the extreme” , and his face, lit in
above by lago and his
close-up, surrounded by darkness, reinforces
dupe Roderigo (Robert
visually the words pf the screenplay.
Coote). By contrast, on
As in Macbeth, though to a much lesser
Othello’s safe arrival in
extent, there are some depressingly inadequate
Cyprus, he and Des
performances in supporting roles. Loyalty to an
demona are reunited in
early m entor probably led W elles to cast
a low-angled shot that
MacLiammoir as lago, but, despite a very apt
seems to celebrate the
sense early on of a terrier at the heels of a large
security of their love.
dog, he seems too elderly, too lacking in the kind
The film invites one
of imaginative energy that would enable him to
to talk about it in this
seduce Othello to his purposes. Fay Compton, a
way because it so in
great stage actress, achieves some real a u th o r
s is te n tly m akes its
ity as Emilia in her final confrontation of Othello
m e a n in g s in v is u a l
and lago, but too often seems to be acting in a
terms. A mirror shot in
different, older theatrical tradition. Suzanne
which Othello m afes a
Cloutier is barely interesting as Desdemona,
brief self-appraisal; the
and several of the other? are but ciphers. Oddly,
turmoil of the ocean far
none of this matters very much; it is not only that
below as he demands
W elles’ own p erform an cejivets the attention,
but that he has conceived the whole film in such visually persuasive, dramatically coherent terms that a few wooden performances seem no more than blemishes on what is still recognizably a masterwork.

The mantle of popularizer - a term used here with absolutely no pejorative resonance - never fitted Welles and has fallen to Branagh. Still in his early thirties, he has the triumphs of Henry V and now of Much Ado About Nothing already under his belt, as well as a string of other achievements in various media. Branagh’s avowed objective is “an absolute clarity that will enable a modern audience to respond to Shakespeare on film in the same way that they respond to any other movie”, and the evidence of his first two adaptations is that he has achieved his goal. In narrative terms, the action is wholly clear, as it was in Henry V, as are the characters and their relationships. As before, he has not hesitated to shear away what is likely to be obfuscating to modern ears not particularly attuned to Shakespearian diction and rhythms. His casting, a mixture of his own repertory family and of some of the most potent young American actors of the day, reinforces his belief that Shakespeare should be accessible to everyone, not the preserve of an élite theatre tradition.

The film opens on a black screen, across which in white script the words of the song “Sigh no more, men were deceivers ever ...” are trailed as they are sung off-screen by a voice which proves to belong to Beatrice (Emma Thompson). The camera eventually finds her seated in a tree, reading to a group of alfresco lunchers on a Tuscan hillside. The sensuous gaiety of the scene is then interrupted by the approach of returning soldiers, first in an overhead long-shot, then of horses’ hooves in close-up. In an exhilarating alternation of the women’s frantic dressing and of the men arriving, bathing and changing, the scene is set for the opening exchange between Beatrice and Benedick (Branagh):

    I wonder that you will still be talking, Signior Benedick: nobody marks you.
    What! my dear Lady Disdain, are you yet living?

The strength of the play is of course in the relationship between these two mature, corrosively witty and oddly vulnerable people, created by Shakespeare in his mid-thirties and here vivified by two of nearly that age who perfectly understand the requirements of the rôles. They can (like the cast at large) speak the verse as if they had just thought of it; they clearly relish the cut and thrust of vituperation which characterizes their early dealings with each other; and they can move us with the sudden access of real feeling that enables them to recognize their love for each other and their contempt for what they see as Claudio’s dishonourable behaviour.

The Claudio-Hero sub-plot, in which Claudio (Robert Sean Leonard) imagines he sees evidence of Hero’s (Kate Beckinsale) sexual infidelity, is not in itself very interesting or even convincing. Its real importance is in the way it provides the occasion for the deepening of the relationship between the mature lovers-to-be, Beatrice and Benedick. The play’s most chilling moment is when Beatrice tests the strength of Benedick’s newly pronounced love with the two words, “Kill Claudio”. In Branagh’s film, this scene, set in a small chapel, has been very sharply directed and edited through a series of rapidly alternating close-ups of the two, culminating in Beatrice’s full-face command. Very movingly, the whole tone of the drama is deepened as it should be, and gives weight to the ensuing scene in which Benedick, with new seriousness of purpose, attacks his old comrade, Claudio.

As always in Much Ado, it is very difficult to retain any sympathy for the gullible Claudio or any real interest in the blameless Hero. However, they are played here with enough youthful ardour and good looks by Robert Sean Leonard (star of Peter Weir’s Dead Poets Society, 1989) and newcomer Kate Beckinsale to more than answer the demands the narrative makes on them. (If Branagh has Romeo and Juliet in mind, here are his stars.) Above all, they throw into relief the greater wit and maturity of the Beatrice-Benedick partnership, in the rendering of which Branagh and Thompson suggest that they could be the heirs to the high-comedy laurels once won and worn by William Powell and Myrna Loy, and by Tracy and Hepburn, and by virtually no one for decades now. Certainly, Shakespeare’s bickering lovers may be seen as the ancestors of, say, Nick and Nora Charles: that is, of lovers with minds that they are not prepared to check in at the desk as they register for marriage. Thompson’s performance and Branagh’s direction have also retained the moving sense the play offers of Shakespeare’s respect for and belief in the powers and perceptions of an intelligent woman.

For all that Branagh has described the play as a fairy tale with a darker undercurrent, he has not hesitated to invoke the screen’s effortless naturalism in this version of it. The formal dignity of Leonato’s villa and the slumbrous, summery Tuscan countryside in which it is set provide exactly the correlatives for the drama of cold purpose and sensuousness that constitutes the plot, and could make one dissatisfied forever with pillars and rostra.

The casting, too, works remarkably well. Apart from those already mentioned, there are such Branagh regulars as Richard Briers (Bardolph in Henry V, a dignified Leonato here), Brian Blessed (Exeter in Henry V, here a bluff Antonio), the wonderful Phyllida Law and Imelda Staunton (both in Peter’s Friends, here respectively the wise Ursula and the duped Margaret). As well as these, all at their considerable bests, there is the American contingent: Denzel Washington (a striking leader of the returning soldiery, as Don Pedro); Michael Keaton, seconded by Ben Elton’s Verges, doing all that could possibly be asked of the tiresome Dogberry, Constable of the Watch, one of Shakespeare’s most intractably unfunny lowlifers; Leonard’s forthright Claudio; and Keanu Reeves’ intense study of the malignant Don John, the serpent in this hillside Eden. It is the sort of cast that cries out to be listed one by one. The use of name actors in small roles pays off in sharpness and clarity, by giving an individuality not always found in the play. Further, the daring use of actors from different backgrounds does not jar here, but underlines the sense that this is not a production aimed at embalming a classic text in a classic tradition, but one intended to reach and attract as wide and varied an audience as possible.

The British Government has been notoriously stingy in offering financial succour to its ailing film industry in the past decade or more. Probably the best it could do would be to stake Branagh to film his way through the Shakespearian canon; it would be, in doing so, performing a cultural service in the interests of literature and film both. On the evidence to date, it seems as if nothing is beyond our Ken.
FOREIGN FESTIVALS

50ª MOSTRA DEL CINEMA DI VENEZIA
AUGUST-SEPTEMBER, 1993

PETER MALONE

“Dies Irae” was the title for a retrospective selection of films screened at the Venice Film Festival. It means “Day of Wrath”, a biblical term for a time of suffering and judgement. It is also the title of Carl T. Dreyer’s austere 1943 film, which was included in the retrospective. The reason for this was that 1993 marks the 50th anniversary of the Mostra Cinematografica (cinema showcase) and a fitting way to mark the occasion was the screening of films which were released in those days of wrath of World War II. Continental directors represented were Jacques Becker, Claude Autant-Lara, Helmut Käutner, Gustaf Molander, Luchino Visconti, Dreyer, and Nikolai Avdeenko and Julia Solntseva (Bitva za nashu Sovietskuyu Ukrainu (The Fight for Our Soviet Ukraine)). The British choice was Millions Like Us (Frank Launder), The Man in Grey (Leslie Arliss) and The Gentle Sex (Leslie Howard). The U.S. selection was more diverse: Hitler’s Children (Edward Dmytryk), Cabin in the Sky (Vincente Minnelli), Watch on the Rhine (Herman Shumlin) and This Land is Mine (Jean Renoir).

The Festival programme consisted of six sections: the films in Competition (18) and five others which were given as much attention but which were screened “out of Competition”. The latter were mainly American films: Jurassic Park (Steven Spielberg), The Age of Innocence (Martin Scorsese), Manhattan Murder Mystery (Woody Allen), A Bronx Tale (Robert De Niro) and Ermanno Olmi’s Il Segreto del Bosco Vecchio (Secret of the Old Forest). There was the “Panorama Italiano”, seven features and five shorts by younger Italian filmmakers. “Finestra sulle Immagini” (Window on
Images) included features, documentaries and shorts, mainstream and experimental, and programmes filmed on video (including Brownlow and Gill’s series on D. W. Griffith and Robert Altman’s Black and Blue). “Special Screenings” included Johnny Guitar (Nicholas Ray, 1954), Pursued (Raoul Walsh, 1947) and small-budget films like The Hollow Men (Joseph Kay and John Yorick) and Joe Comerford’s High Boot Benny from Ireland. For those who enjoy the mainstream, there was the “Venetian Nights”, mainly U.S. films: In the Line of Fire (Wolfgang Petersen), The Fugitive (Andrew Davis), Dave (Ivan Reitman), Kalifornia (Dominic Sena), Posse (Mario Van Peebles), Boxing Helena (Jennifer Chambers Lynch) and What’s Love Got to Do With It (Brian Gibson).

The daily “Press Conferences” usually had representatives from the main films screened, but attention was focused on the celebrities. There were 2,500 journalists accredited to the Festival and swarms of photographers. As they rushed Robert De Niro for yet another session, chair Gideon Bachman remarked, “The Invasion of the Body Snatchers”. They were excessively in evidence at the awards evening staged at the Palazzo Ducale, which, from an audience point of view, was little better than a ‘scratch concert’ - presenters confused, talking over voice-overs - but a cheerful evening nonetheless!

ABOVE: NICK HOPE AS BUBBY IN ROLF DE HEER’S BAD BOY BUBBY, WINNER OF SEVERAL PRIZES AT VENICE, INCLUDING THE FESTIVAL JURY AWARD, THE CIAK JURY AWARD AND THE BRONZE PLAQUE FROM OCIC, AS WELL AS SHARING (WITH SHORT CUTS) THE INTERNATIONAL CRITICS’ AWARD.

In watching the films in Competition, one was struck by their emphasis on individuals and groups who were marginalized. Many of them were intense: Aline Isserman’s fine Ombre du doute (France), tackling incest in a French middle-class family with a performance by the young Sandrine Blancke that made the events credible and moving; Maria Luisa Bemberg’s De eso no se habla (Argentina), with Marcello Mastroianni making his love for a fifteen-year-old dwarf absolutely believable; Dove Siete? Io Sono Qui (Where are You? I am Here, Italy), a surprisingly mainstream film from Liliana Cavani about the hearing-impaired, a film of great feeling.

Perhaps the big surprise of the Festival was Rolf de Heer’s Bad Boy Bubby, with a powerful performance by Nick Hope as a middle-aged baby-man, kept locked away from people, who eventually gets out, mirrors the society he encounters and emerges from his mental and emotional captivity an Idiot-figure. This does not do justice to the film with its powerful ugliness, language and anger. It is a gut-level, confronting film. And it made its impact, winning the Festival

Isabelle Huppert, protested the invasion of American films at the expense of local productions. This kind of feeling was obvious at Venice and featured in many articles about the Festival.

The majority of the presenters of papers as well as of the participants were from continental Europe; several came from the U.S., but only three from Africa, two from Asia and one from
Jury Award, the CIAK (Italian C inem a-goers’
Festival director Gillo Pontecorvo chided the
the Pacific. Discussion tended to focus on Euro
Association) Jury Award, sharing (with Short
press for its bias in this regard, highlighting
pean film s with frequent genuflections to Andrei
Cuts) the International C ritics’ award, winning
clash, and pointed out the necessity of keeping
Tarkovsky and Kieslowski. The Venice award
an award from a large group of Italian high-
comm unications open with Hollywood. This was
seemed to set the seal on Kieslowski as the
school students who were attending the Festi
evident in the number of American films screened
successor to Luis Bunuel, Ingmar Bergman and
val, and m eeting and d iscussing w ith the
and the number of celebrities attending.
filmm akers, and the Bronze Plaque from OCIC
However, Pontecorvo convened a meeting
(International Catholic Organisation for Cinema).
of cinema ‘authors’, principally directors. A large
Federico Fellini as the great directors whose work could be deemed, in the broadest sense, religious.
(It was as a member of this jury that I attended
contingent from the continent and from North
Yet in looking at the films in Competition in
the Festival.)
America attended, the discussion ranging from
Venice, one noted the frequency of explicit reli
The Leone d’Oro was shared by Krzysztof
marketing to copyright protection and the rights
gious icons, of ceremonies, of language about
Kieslowski’s first in a trilogy, Trois Couleurs:
of ‘authors’. An international committee was
God. This tended to pervade the continental
Bleu, and Robert Altm an’s Short Cuts. In fact,
elected, published resolutions and have com
film s in a way that does not happen in the
these two film s won most of the awards: Trois
missioned a charter of rights to be drafted.
American cinema - yet it was there in the films of Ferrara and De Niro, and in Bad Boy Bubby.
Couleurs: Bleu tor CIAK, Italian Catholic Media,
Pontecorvo expressed disappointm ent that the
OCIC and for Juliet Binoche as Best Actress;
media gave scant attention to this ground-break
Short Cuts the International C ritics’ Award and a
ing meeting of minds.
European thinkers (and Latin Americans) are also concerned about ‘post-m odernism ’ in a way
special jury award for the cast ensemble of 22.
The meeting was well attended by directors
that those from America, Asia, Africa and Aus
The Silver Lion was given (one presumes in
from all over the world, taking advantage of
tralia are not. If the certainties of the classical
s o lid a r ity )
(B a k h tia r
those present at the Festival (including Peter
world-views of the Enlightened 18th Century
Khudoinazarov) from Turjikistan and the Presi
Weir, who was President of the Festival Jury,
and of the faith-in-progress of the I9th and 20th
dent of the Senate’s Award to the Chinese film
and Chen Kaige, a member of the Jury).
Centuries and the organizations and structures
to
K o sh
ba
K o sh
Za Z u i Z i (An Innocent Babbler, Liu Miamomiao).
Festivals are obviously significant for Euro
O therdirectors with films in Competition were
peans, as showcases for films and for promo
built on these can no longer hold, then we are in an age of post-m odernist search.
Abel Ferrara with Snake Eyes, Jean-Luc Godard
tion, as occasions for awards (sixteen groups
It was suggested that, in the early 1980s, this
with Hélas, Pour M oi (more of his private reli
beside the official Jury made awards at Venice)
led to an exultant trampling on the institutions
gious poetry, a critic remarked), Gus Van Sant
and as a key opportunity for quite wide media
and the certainties. In the early ’90s, it has been
with Even Cowgirls Get the Blues, Carlos Saura
coverage, intense newspaper and television re
a less arrogant self-confidence, more of a search
with Dispara, Bertrand B lierw ith Un, deux, trois
porting.
and an acknowledgem ent of the latent spiritual
soleil and Clara Law with You Seng. This might
Europeans also like to discuss cinema and
ity. Jean-Luc Godard’s career is Interpreted in
give an impression of a varied programme. How
the philosophies behind cinema. This became
this light, his 1993 Hélas, Pour M oi combining
ever, there were no entries from Germany, Scan
clear to me with the discussions about Krzysztof
word and image, peripheral narrative, but using
dinavia, Belgium, Holland, Britain, Canada,
Kieslowski’s Trois Couleurs: Bleu. Kieslowski’s
Leopardi’s poetry as a meditation on life and
Japan and India, and none from the whole of
moral dilemmas (so popular from his Dekalog,
faith through awareness of the seeming ab
Africa. The Bemberg film was the only entry from
then La Double Vie de Véronique) dramatize the
sence of God. For European thinkers, there is a
Latin America.
anxieties of contemporary Europe, the self
delight in the aesthetics of abstraction.
Australia, on the other hand was well repre
centredness of the West and the grappling with
Commentators from English-speaking coun
sented: Bad Boy Bubbyin Competition, Hercules
recession and its consequences, the collapse of
tries tend to be far more utilitarian in their ap
Returns in the section ‘Window on Images’ (with
structures in Eastern Europe and the quest for a
proach, and stronger on narrative and the
several press releases from David Parker and a
European Community. Trois Couleurs: Bleu cul
conventions of story-telling. Harrison Ford and
letter, one presumes tongue-in-cheek, from the
minates in a concerto for a United Europe that
Robert De Niro both made this the core of their
director Giorgio Capitani to Parker wishing him
touched the jury e m o tio n s -w ith a further culm i
answers to questions at their press conferences.
well but saying he was unable to come to Venice
nation in awards.
This still may be post-m odernist, but the empha sis is on story and the aesthetic satisfaction in
to see and hear what they had done to his epic),
My journey to Venice also took me to Rome
Lynn-Maree M ilburn’s Memories & Dreams and
for an international conference on “Cinema and
responding to a well-told story. And so, the
two shorts, Dennis T upicoff’s Darra Dogs and
Theology” , a firs tfo r Catholic professionals, and
Venice Golden Lion was shared with Robert
Monica Pellizzari’s Just Desserts, winner of the
sponsored by the Jesuit-run Gregorian Univer
Altm an’s intertwining of Californian stories (de
award for Best Short Film.
sity, OCIC, the International Catholic Organisa
rived from Raymond Carver), Short Cuts.
The sharing of the Golden Lion (or dividing
tion for Cinema and the Center for the Study of
However, the Europeans like their stories, their Hollywood stories. Clint Eastwood is a
the prize depending on your perspective) be
Communication and Culture (St Louis). The title
tween Altman and Kieslowski was sym bolic of a
of the conference was “The New Image of Reli
European hero. But they also tend to see the
mood in Europe and at the Festival. During
gious Film”. “ Religious film ” was not confined to
post-modern dim ensions of popular culture.
September, it was not only the French farm ers
explicitly religious films. In fact, as the confer
David Lynch fulfils these expectations. Twin
who demonstrated in the streets about the GATT
ence progressed, it was clear that the focus was
Peaks: Fire Walk With Me (1992) becomes a
Talks, but French filmm akers, including such
on film s and values, the latent spirituality in their
frequently-cited classic, as does Ferrara’s Bad
names as Claude Berri, Gérard Depardieu and
text and texture.
Lieutenant (1992). The m ulti-media dimensions CINEMA
PAPERS
96 - 43
also appeal - the books, diaries and music all
feel which dominates and is embraced by this
hobby is chopping up blonde women in white
part of Twin Peaks, and the concerts, perform
Festival.
dresses. Jones reveals their secrets early on,
ances, music videos, records, movie appear
Many films, like Claude Lelouch’s Tout Ç a ...
preferring to build suspense around their mutual
ances of Madonna. The conference might have
P o ur Ç a! (A ll T h a t...F o r This!), K rzyszto f
mistrust as the boarder convinces Carter to write
been titled “Madonna meets Tarkovsky” .
Kieslowski’s Trois Couleurs: Bleu (which is the
his memoirs. Chris Jones’ debut feature is shot
Bad Boy Bubby made an appearance with
first in his trilogy based on the qualities and
in a straightforward manner, but the complexity
the question of the confrontational, even repel
colours of the French flag) and Jane Birkin’s first
of the script prevents any chance of tedium
lent, film s and their latent spirituality. The edify
feature as director, Oh Pardon! Tu Dormais...
developing.
ing films may have their place but, as was said,
(Sorry, Were You Asleep?), seemed to be cel
Alongside other English films featured, such
if Marx declared “ religion was the opium of the
ebrated merely because of their use of French
as Mike Sam e’s indifferently awaited third film
people” , then explicitly religious films were an
language. Not to be fooled, however, the Com
(following Joanna, 1968, and Myra Breckinridge,
overdose. Bad Boy Bubby and the films of Ferrara
petition Jury gave the Best Film prize to
1970), The Punk, and the lame comedy Leon the
were seen as “De Profundis” films (from the
Margarethe Von T rotta’s Italian language film II
PigFarm er(V adim Jean and Gary Sinyor), White
Psalm of deep depression and longing, “Out of
Lungo Silenzio (The Long Silence), while the
Angel seemed more a companion piece to low-
the Depths...”). With their graphic images of the
FIPRESCI jury awarded their prize to the Brad
budget American thrillers rather than a special
victimized, suffering human condition and the
Pitt and Juliette Lewis vehicle, Kalifornia (Dominic
presentation of English drama.
Sena).
There were two such selections in thè
ourtim e. They are ‘question-parables’. Bubby is
The most impressive French language film of
a latterday ‘Idiot’, a holy fool who confronts the
the Festival came from a European émigré who
Edition” in which director Howard Libov skilfully
contradictions of life (and one rem em bers
had settled in France. Costa-Gavras’ new film,
synthesized the structure of Billy W ilder’s The
Montréal Festival. First, the exciting “Midnight
Chance, the gardener, of Being There (Hal
La Petite Apocalypse (The M inor Apocalypse),
Big Carnival (aka Ace in the Hole, 1951) with the
Ashby, 1979); Jobbe in The Lawnmower Man
was unheralded, yet this satire on veterans of
story of Gary Gilmore. The other low-budget
(Brett Leonard, 1992); and IILeggenda del Santo
the May ’68 riots was dead on target. The story
American stand-out was Public Access, which
Bevitore ( The Legend o f the H oly D rinker
describes how a handful of once-radicalized
had won the Grand Jury prize at Sundance
(Ermanno Olmi, 1988).
yuppies regain their political consciousness when
earlier this year.
One took heart from the fact that the OCIC
they discover (mistakenly as it turns out) a Polish
Public Access is a gripping tale about a
plaques at Venice went to Bleu and to Bad Boy
poet willing to take the risks that they, under the
stranger coming to ‘small-town, USA’ called
Bubby, in line with the awards from the Festival
doona of capitalism, are no longer willing to take
Brewster, to cause trouble. But rather than gun
Jury and other groups. The conversation be
themselves. Political comedies which are funny
play, W hiley Pritcher (Ron Marquette) simply
tween religion and cinema culture is not as far
are rare enough, but a political film this hilarious
asks a provocative question on a public access
apart as might at first be thought.
should be considered a milestone.
cable television programme: “W hat’s wrong with
One of the difficulties for Venice this year
In addition to the continental cinema, Montréal
our town?” From there, the tow n’s self-hatred
was that the Italian Government was limited in
ran a special selection of British cinema. Films
and fear takes over letting the populace feed off
its funding of the Festival. The exuberance of the
which will receive ample coverage elsewhere
itself. Unfortunately, Public Access suffered the
event and the seriousness of the discussions
are Mike Leigh’s Nakedand Stephen Poliakoff’s
fate of many elaborate thrillers and eventually
makes one hope that the economic recovery is
Century, they were the headliners and the re
became confused. The first hour, however, en
well on the way.
sponse was predictably favourable (deservedly
sures that Bryan J. Singer is a writer-director to
so in the case of the Mike Leigh film), though
watch.
they are not traditional British fare (i.e., period pieces or kitchen-sink drama).
The indisputable highlight of the Montréal Festival was the Taviani brothers’ film, Fiorile.
MONTREAL WORLD FILM FESTIVAL
relatives in the work of Peter Greenaway and
wife and two children through the Italian coun
AUGUST 24 - SEPTEMBER 6
Ken Russell, but was executed with a restraint
tryside to Tuscany. As Luigi Benedetti (Lino
AND
that marked an advance on either of those direc
Capolicchio) drives he tells the epic drama of his
TO R O N TO FESTIVAL OF FESTIVALS
tors. Shot in exacting black-and-white, Anchoress
family and the greed that has cursed their an
looks like the Middle Ages would have if Ansell
cestors.
SEPTEMBER 9 - 1 8
Adams had been there to photograph it. The
The first tale reveals how the curse is set in
story revolves around a young girl who has
motion when a young man robs a soldier of the
RUSSELL
EDWARDS
Chris Newby’s Anchoress has its closest
The film begins as an affluent man drives his
experienced visions of the Virgin Mary. Unable
gold meant to finance the Napoleonic campaign.
very September the Montréal and Toronto
to mould her spiritual convictions to the current
The wealth of the Benedettis is assured but so is
Film Festivals act as a splendid double-
Christian dogma, the local priest has the girl
the dishonour he brings on the family. Three
E
header for the over-indulgent film buff. Despite
bricked into the wall of the church. This ensures
more stories (one taking place in the 1870s, one
their proximity in both time and location (the two
she can be supervised by the clergy and visited
during World War II, and the last using, the
cities are only four hours apart by car), however,
by the parishioners in search of salvation. The
present-day framing device) make up this exqui
both Festivals have distinctive personalities. This
priest, however, is horrified that his Anchoress is
site film, but every :subsequent story/has its
is probably best expressed by each Festival’s
passing on pearls of wisdom more befitting of
roots in thé, o.rigina|;flashback. The transitional
choice for opening night.
her pagan origins than the Christianity he would
sequences from the present day to the past are
have her promote.
beautifully executed. ?
The Montréal World Film Festival commenced this year with a Quebeçois feature, Le Sexe des
The other surprising British film was also a
; Like the Melbourne and Sydney Festivals,
Etoiles (The Sex o f the Stars, Paule Baillargeon).
first-tim e feature by Chris Jones, White Angel.
the proximity,of the Montréal and Toronto Film
Given the powerful nature of previous Quebeçois
Opening with a woman ramming her husband,
Festivals means an Overlap in the product shown.
product (Jean-Claude Lauzon’s Léolo or Denys
against the garage wall with a car, Chris Jones
Hence, catching Fiorile, Howard Davies’ The
Arcand’s Jésus de Montréal, for example), this
is clearly bored by slow build-ups. Ellen Carter
Secret Rapture, Alain Tanner’s Le Journal de
film was disappointing. Essentially a ‘rites of
(Harriet Robinson) is a crime writer who suffers
Lady M., 'Emir Kusturica’s Arizona Dream, Dusan
passage’ story about a teenage girl coming to
writers block as her husband’s death is investi
Makavejev’s Gorilla Bathes at Noon, Ken Loach’s
terms with her father’s identity as a tran ssexu a l,
gated. When the body cannot be found the
Raining Stones and Hans Gunther Pflaum’s
Le Sexe des Etoiles is an adequate melodrama
charges are dropped, but her writers block re
documentary about Fassbinder, Ich Will Night
which, manages to avoid the excesses of soap
mains. Unable to pay her mortgage, Carter takes
Nur, Dass Ihr Mich Liebt (JJDon’t Just Want You
opera. What this film does have is the European
in a boarder, Leslie Steckler (Peter Firth), whose
to Love Me), saved a lot o fstioe leather when the
44 • C I N E M A
P A P E R S 96
T oro nto Festival opened three days a fter
the filmmakers said they were heavily influ
such a big festival, and partly because it is a
M ontréal’s close. But with 222 features to be
enced by films like Seconds (John
Frank-
mere domestic flight away. (It is amazing at
seen In 10 days (it’s not for nothing that Toronto
enheimer, 1966), Sisters (Brian de Palma, 1973)
press conferences the number of Hollywood
calls itself The Festival of Festivals), there was
and Spellbound (Alfred Hitchcock, 1945), which,
personnel who speak as if they are still in the
little chance Montréal would leave one short of
like this debut, also feature elements of amne
U.S.) Jeremy Irons, Matt Dillon (Fort o f Saint
films to see at Toronto.
sia, twins and plastic surgery.
Washington), Lorraine Braceo (Even Cowgirls Get the Blues) and Dennis Hopper (Red Rock
Like Montréal, Toronto opened with a Cana
The FIPRESCI Prize went to actor Forest
dian product, sort of. There are many who ques
W hitaker’s feature, Strapped, about the urban
West) all flew in and out for press conferences
tion that anything from David Cronenberg can
tragedy and reality of gun-running to street kids
giving the Festival its desired hit of glitz. The big
reflect the Canadian experience anymore, and
in New York. W hitaker’s direction is ambitious
fuss was inevitably over Robert De Niro coming
his version of David Henry Hwang’s play M.
and sometimes his reach exceeds his grasp, but
to town. It’s a pity that his first directorial effort
Butterfly further fuelled the argument. In fact,
there is no disputing his talent for directing.
was of insufficient mettle to justify the fuss.
due to the departure from what is regarded as
A Bronx Tale was expanded from a mono
Outside of the First Cinema selection, my
typical Cronenberg territory (i.e., blood and gore),
overall favourite was Thirty-Two Short Films
logue by Chazz Palminteri and has become a
some wondered whether M. Butterfly could be
A bout Glenn Gould. Bypassing the hazards that
two-hour parable about a boy’s soul being con
described as a Cronenberg film at all. Starring
caught the recent deluge of mediocre bio-pics,
tested by the forces of good (the boy’s honest
Jeremy Irons as Rene Gallimard and John Lone
Francois Girard found an original angle with his
bus-driving father as played by De Niro) and the
in the title rôle of Song Liling, the film is stylish
filmed biography of the Canadian concert pian
forces of evil (the corner gangster played by
and solidly made; but the critical crossfire from
ist. Just as the title describes, the film is a series
Palminteri, who also wrote the script).
devotees of the play and over-zealous auteurists
of thirty-two fragments compiled to create a
The most hyped-up film had to be Jane
meant that the film didn’t and probably w on’t
loving portrait of a complex and often unreason
Campion’s The Piano. It was predicted that like
receive a fair chance.
ably dogmatic eccentric. Actor Colm Feore in
Strictly Ballroom (Baz Luhrmann) the year be
Among the First Cinema programme there
habits the title role perfectly. Not only does he
fore, The Piano would win the People’s Choice
were two film s a cut-above-the-rest. Suture, a
look like Gould, but I have never seen an actor
award, and unlike its antipodean predecessor
film co-directed by first-tim ers David Siegel and
look more comfortable in a role. As Gould was a
would win the Critics’ Prize. The people and the
Scott McGehee, was clearly not the work of
Torontonian, it was no surprise the film gener
critics had other ideas. The critics chose Mike
talented beginners fumbling their way. Rather,
ated strong interest. But since the film had just
Leigh’s Naked, while the public placed Campion’s
with its sleek, black-and-white Panavision look,
received a standing ovation at the Venice Festi
movie second to Stephen Frears’ The Snapper,
devious script and seamless direction, it was
val the week before, Glenn Gould really was
which is based on a novel by Roddy Doyle. At
difficult to accept this film as the work of two like-
riding the crest of a wave.
Toronto the distributors and the publicists might
minded individuals instead of one gifted person.
Toronto is regarded as a good place for the
Respectfully acknowledging their predecessors,
studios to launch their films. Partly because it is
NewW ritings on Film, Television & Video ... THE MOVING IMAGE...
an exciting series of quarterly monographs published by the Australian Film Institute
try to sway the results, but it is the critics and the public who get the last say.
FILM REVIEWS

BEDEVIL; BLACKFELLAS; CRUSH; THE NOSTRADAMUS KID; THIS WON’T HURT A BIT!; and THE WEDDING BANQUET
BEDEVIL

JOHN WOJDYLO

Tracey Moffatt’s debut Australian feature, BeDevil¹, is comprised of three self-contained stories: “Mr. Chuck”, “Choo Choo Choo Choo” and “Lovin’ the Spin I’m In”. The vision combines intense visual and narrative stylistic innovation with that old American cinema in which characters strive to look outward and be part of life’s cabaret.

In a general sense, the triptych progresses like a contemplation: a childhood bathed in mediocrity; an adolescence spent with mother; and emergence as a young adult with an optimistic outlook on life. The main characters strive to connect with others - with people in their past, their current friends, or future lovers - resulting in moods of “lost chance”, “contentment with life” and “hope for the future”, respectively. Heat, mundaneness, isolation and nostalgia, characteristics of the Australian outback, are evoked throughout. The mix of Aboriginal, multicultural and “true blue” gives the triptych a look culturally specific to Australia. The deep sense of romance with which Aborigines in tropical North Queensland imbue their tales comes across clearly. Moffatt is of Koori background.

ABOVE: MINNIE (PATRICIA HANDY) AND BEBE (PINAU PANOZZO) IN “LOVIN’ THE SPIN I’M IN”, TRACEY MOFFATT’S BEDEVIL.

“Mr. Chuck” is a deliberately drab piece which begins in interview style. A housewife (Diana Davidson) tells of an incident with an Aboriginal boy (Ben Kennedy) many years before, when she was living near a swamp in North Queensland. The fixed camera emphasizes suburban mundaneness. Interspersed within her recollection is an interview with an Aboriginal man (Jack Charles) in gaol who tells of an experience he had when young of a ghost. It turns out, though the film hands it to us and evokes no surprise, that the housewife knew the man: he was the boy of her recollection. The woman’s eyes convey longing while the man’s childhood delinquency is portrayed charmingly: the two of them look out from their mundane existence and recall a past bathed in the light of nostalgia; the woman’s is warm, the man’s somewhat cold. Without making a meal of it - contrary to what seems typical in Australian artistry these days - Moffatt suggests the housewife is a prisoner, too: the woman looks out from the glass door of her home as the camera rises above the suburban ordinariness. Mr. Chuck turns out to be the nickname, perhaps invented by the boy, of a U.S. soldier who supposedly drowned in the swamp, on which a cinema is built. Moffatt’s film is built over the memory of the U.S. culture of the 1960s, the years of her childhood. As the film progresses, the ghost seems to rise from the swamp and fill the film with the old Hollywood spirit.

“Choo Choo Choo Choo” is deliberately more high-key, the first of the two “interviews”, this time realized using a hand-held Super 8 camera. A party of Aboriginal women is heading out on a picnic; one of them, Ruby (Auriel Andrews) - the character of Moffatt’s mother (according to an interview with Moffatt in Cinema Papers²) - tells of the time she lived with her mother and father, a railway ganger, in a remote, isolated, ramshackle house beside a rail-line. Moffatt herself plays the Ruby of the recollection. Every now and then the family hears the sound of a ghost train. The fantastic set (designed by Stephen Curtis) and style of photography (Moffatt is also a leading Australian photographer) lend intensity to Moffatt’s memories of her mother.

Interspersed within the woman’s recollections is an “interview” with a delightfully eccentric man of Chinese origin (Cecil Parkee) who introduces the interviewer to his shop in a sleepy outback town. The interviewer notes that the man repeats an odd gesture which townsfolk also made to him while he was driving along the town’s main street. The simple link between the townsfolk symbolizes ordinary attachments; moreover, the gesture seems to be saying, “Open your eyes - mundaneness does not have to be banal!” The man has called the interviewer into the shop so he can mention to him the existence of a ghost - of a blind girl (Karen Saunders) killed by a train. Characters living in widely different circumstances, a great distance apart, are linked by a similar kind of memory, as if by a tunnel.

Meanwhile, the Aboriginal women’s barbecue picnic is proceeding vociferously. In a humorous scene, Ruby energetically argues with another woman over the aesthetics of yabbie cuisine, the position on the plate and pattern of the sauce. The women are suitably decked out in designer shades; and the portrait photography is first-rate.

The interview style is abandoned in the third miniature, “Lovin’ the Spin I’m In”, as two ghosts enter the land of the living with a flourish of spontaneity: a dancer spins across the stage in pursuit of her lover. The ideal is set and the miniature proceeds to sketch several characters meeting a psychological threshold between optimism and pessimism for the future. A merchant of Greek extraction, Dimitri (Lex Marinos), meets with misfortune at the hands of high-class thugs in front of a dilapidated warehouse he owns; he supports a wife (Dina Panozzo) and son Roxy (Midia Daniels) while operating an enterprise of dubious integrity, so the path in life he has chosen continually teeters between optimism and pessimism, and seems unfulfilled. The conflict is benign, notional as in staged dance. The time-frame of conflict increases dramatically as the psychological states of the characters are … density of visual information within the short … with Dimitri near the beginning, so the focus of the miniature becomes the street they are living on. One is reminded of the film’s social aspect. The last scene shows the crooks haven’t a chance of “bedevilment” - they just keep going around in circles with their folly.

BeDevil is a difficult film to watch because of the continuous conflict between naturalism and anti-naturalism. On the one hand, we are presented with the illusion that the characters are free, and, on the other, we are constantly reminded of technology - the director’s will - through Moffatt’s obsessive preoccupation with style. It is like watching two films screened over the top of each other. The hyperactive stylistic intervention strips away narrative feeling by invoking formal connections (which often seem to lead nowhere), while the narrative feeling keeps trying to rise above the din. Putting it another way, the director seems to be half-way between thinking that all representation is a pernicious fiction and abandoning materialism altogether in flowing naturalism. Sometimes it feels as if the director has intervened at length to safeguard the telling of the stories; paradoxically, her approach turns out to be extremely conservative, conveying few, if any, genuine insights.

Moffatt does not seem to have given serious thought to the artistic problem of friction between showing characters naturally “as they are” and her painter’s vision which is bound by aesthetics. Even a flourish of spontaneity is not enough to loosen the shackles of style which emanate from every gesture, word and piece of set around it: one almost gets the feeling that the dancers, too, are the director’s puppets. The final dance scene in the warehouse is played on an empty set, emphasizing the pure energy of the lovers, but it seems merely a … assert the specialness of its style.

In Bedevil, we are left inside the director’s aesthetic structure but our feelings - which are called upon - do not fill it and seem to have, at best, an extraneous connection with what we see. The eagerness of the characters to convey something personal and the obvious mystery evoked by the fantastic set wash past each other despite Moffatt’s efforts to splice them together.

The other problem faced by aesthetically-bound narrative films is characterization. By the end of the second miniature, one has a sense that, although we have seen extensive machinations of the director’s imagination, we have learned little about the characters whose recollections are supposed to comprise the film. Since conflict is only notional, opposing components are unconvincingly drawn out from the happy surfaces: the characters could be the same person with masks. (I mean “conflict” as a collision of ideas, not necessarily represented by violent acts.)

The film is very much the author’s space: one wonders what the film tells us about anybody but Tracey Moffatt. The triptych is a series of self-portraits à la Frida Kahlo. What insights does it have to give to other people besides the image of its creator? The stories are simple sketches - or even less. Nothing is ventured and nothing is gained. Ultimately, the unhappiness from self-obsession which the Kahlo look-alike thought he’d left behind by placing a candle at the altar of life is merely brushed over with a happy face. Moffatt has failed to set herself free.

The question of whether Moffatt’s creation is a “moving painting” or a film is beside the point. As a product of human hands that aspires to art and not technology, it should be judged by the impression it leaves. The impression I am left with, long after seeing it, is that Bedevil is a …
brought out through their relative location, posi
filmed dance sequence which has somehow
tion of hands, gestures and so forth, as well as
found its way into the film. /Esthetically-bound
simplistic record of typical feelings of the Aus
through what they say. The viewing experience
films with too much naturalism necessarily seem
tralian outback, and is an extremely intricate, but
is like watching several mime artists working
to require a sojourn into purity during which they
not complex, way of saying “Don’t worry, be
simultaneously, who talk over an ever-present
lose their film character, resulting in conflict of
happy.”
mood evoked by a deserted maritime quay. The
purpose. There cannot be a breaking of all levels
conflict causes Roxy to dream of a better life (the
of technology to bring the film alive: one is
narrative link was that he witnessed the fight).
always reminded of material. This is why Pier
1. On screen, the title is beDevil.
Having gone to sleep still wearing his rollerblades
Paolo Pasolini believed that cinema has to be
2. “ BeD evil: Tracey M offatt interviewed by John
after wasting yet another day waiting for some
“naturalistic” .
thing to happen in his life, as artists are prone to,
The Georgian filmmaker Sergei Paradzhanov,
Notes
Conomos and Raffaele Caputo” , Cinema Papers, No. 93, May 1993, p28.
he wakes up one night thinking he is hearing
who also exhibited as a primitivist painter, solved
Further Reading
something from the empty warehouse across
the problem of conflict between naturalism and
“BeDevil: Tracey Moffatt interviewed by John Conomos
the road. He goes over to investigate and sees
anti-naturalism in aesthetically-bound, narrative
and Raffaele Caputo”, Cinema Papers, No. 93, May
the dancing ghosts: he is imbued with their
cinema by opting fortotal control in films such as
1993, pp. 26-32.
joyous spirit. He is “bedevilled” by love. Once
Ashik Kerib (1988) and Legends o Suramskoj
“Tracey Moffatt” , interviewed by Scott Murray, Cin
again, the photography (Geoffrey Burton) and
Kreposti ( The Legend o f the Suram Fortress,
set make even mundane occurrences such as
1985) in the sense that every element in the film
the rollerblader seem visually fresh.
seems to have been painted by his hand (evok
ema Papers, No. 79, May 1990, pp. 19-22. BEDEVIL Directed by Tracey Moffatt. Producers: Anthony Buckley, Carol Hughes. Scriptwriter: Tracey
Another thread within the miniature is its
ing immediate control). Characters appear, dis
Moffatt. Director of photography: Geoff Burton. Pro
occasional focus on a man (Luke Roberts) gaz
appear (spliced out) and re-appear in different
duction designer: Stephen Curtis. Art director: Martin
ing out from the window of a room in Dimitri’s
costumes in the space of seconds: the films feel
Brown. Sound recordist: David Lee. Editor: Wayne Le
warehouse he has occupied without paying rent;
coherent despite the extreme stylization and
Clos. Composer: Carl Vine. Cast: Diana Davidson
he is trying to come to terms with the delusion
manage to tell beautiful folk stories of the Cau
(Shelley), Jack Charles (Rick), Tracey Moffatt (Ruby
that he is Trotsky’s lover, Frida Kahlo. The
casus region. (Legend o f Suram Fortress only
Morphet), Banula (David) Marika (Stompie Morphet),
morbid self-obsession is making him unhappy.
has one character, an unseen narrator who
Eventually, he opts to look with hope towards
translates the Russian spoken on screen into
making a life. There is no obvious narrative
Georgian with ironic humour; the otherfaces are
Southern Star presentation of an Anthony Buckley
connection between this man and the rest of the
only sketches of characters.) They have a qual
production. Australian distributor: Ronin. 35mm. 90
characters, apart from a few remarks exchanged
ity of humility, while M offatt’s film still wants to
mins. Australia. 1993.
P a u lin e . M cLeod (Jack), A uriel Andrew s (Ruby), Mawuyul Yanthalawun (Maudie), Lex Marinos (Dimitri), Dina Panozzo (Voula), Riccardo Natoli (Spiro). A
CINEMA
PAPERS
96 . 47
BLACKFELLAS
KARL QUINN

Dougie Dooligan (John Moore) is a 19-year-old Aborigine about to be released from a Perth prison, where he has done time for the stabbing of a white man in a brawl. Dougie blames his cousin, "Pretty Boy" Floyd (David Ngoombujarra), for his being there - it was Floyd who started the fight - and bitterly resents the fact that he hasn't been to see him once in his 18-month incarceration. As Dougie is being led towards the front gate, he sees another, older, black man being brought in. It is his father, a regular participant in the prison system. Dougie becomes emotional, but, after a scuffle with the police escorting him to the gate, is freed.

Outside the prison walls, Dougie finds himself alone. As he begins the long walk into town, Floyd and some friends pull up alongside and offer him a ride. Torn between his anger at Floyd, his distress at seeing his father being locked up again, fear that the car is stolen and the realization that he's got nothing else to do anyway, Dougie accepts a ride with Floyd and company, and soon finds himself at a bedraggled Aboriginal encampment on the edge of the city, where his release is celebrated in grand style with football, grog and song.

So opens Blackfellas (aka Day of the Dog), a study of the temptations and traps, the pressures and prejudices, which confront contemporary urban Aborigines. Decidedly and refreshingly unromantic in its portrait of Aboriginal culture, the film is also largely resistant to the easy point-scoring of painting all whites as racist villains (though the police come in for some understandable criticism, with John Hargreaves hamming it up in the role of a racist sergeant). Although fairly loosely structured around a sense of imminent and inescapable tragedy, rather than a tight plot, Blackfellas also succeeds as drama.

Doug's relationship to Floyd remains throughout the focal point of that drama, and serves as a metaphorical focus, too. Floyd represents an option for Dougie, and, by implication, for young Aborigines in general: complete disdain for the white man's law combined with an equally complete ignorance of tribal lore. Floyd is a cheeky character, attractive in his immersion in the "now" of his existence, and in his refusal to view his position as one of disadvantage. His behaviour - sexual, criminal, social - is, in many ways, affirmative. But it is also heavily contingent upon not being caught, and as such bears the heavy weight of inevitable closure.

During his time in prison, Dougie decides to reject Floyd's way of life. He has no desires to end up like his father, which is where he sees Floyd's recklessness leading. But he doesn't want to live the life his white mother (Julie Hudspeth) has mapped out for him either, working as a mechanic, and avoiding his black "people" in preference for his white ones. Instead, Dougie dreams of buying back Yetticup, the clapped-out country property - and a part of his people's Dreaming - his father once owned, and re-establishing it as a viable horse stud.

This ambition is a highly suggestive one in so far as it navigates a course midway between the traditional Aboriginal culture from which Dougie, Floyd and all the other urban Nyoongahs (Perth-area Aborigines) in the film have become alienated, and the commercial, land-owning imperatives of the white culture which would in all probability reject them even should they embrace it. Dougie's dream would seem to have the function of offering black audiences a way out of what the filmmakers, presumably rightly, see as a malaise. In re-forging a link with the land, even if not on the basis of a fully understood set of traditional beliefs and values, young Aborigines will be taking control of their own lives in a way they never can while allowing them to be defined by a relationship to white systems of law and patronage (either living off handouts or running the gamut of the authorities). The film strikes a sound blow for Aboriginal self-reliance, and makes clear its position that there is no alternative by having Floyd offer up his own (way of) life so that Dougie and his girlfriend, Polly (Jaylene Riley), may have a shot at something better.

Despite the clear moral dimension and didactic nature of its resolution, Blackfellas is a film which appears to be best understood as a significant and accomplished piece of social realism. Yet there remains an element of reservation in this response. The film is aesthetically a bit rough, and some of the performances occasionally waver, but that is not where the problem lies - at least, not directly. The rough edges are easy enough to forgive, and to explain away in terms of the film's "veracity", its "authenticity". And that is where the problem lies. How do I, a white Australian with fairly limited exposure to Aboriginal culture - urban or otherwise - come to be in a position to pronounce upon the film's veracity? I do not ask this in order to open up the can of worms of critical legitimacy, but to ask how do any of us (whites) know the "truth" of Aboriginal culture? The answer, it seems to me, is through white media, television in particular.

The director of Blackfellas, James Ricketson, comes from a background in television documentary, and has made programmes dealing with Aboriginal culture and issues in that format. He would seem to be ideally placed to make a feature film about that culture and those issues, and to employ some of the production techniques of the television documentary in the name of realism (significantly, ABC TV was a production partner). In that sense, Blackfellas might be seen as an extension of the documentary into a marginally more popular format: the limited-release feature film. But it also means that the points against which the movie's veracity can be checked have been produced by exactly the same system - well-meaning white filmmakers observing a culture which is not their own - as the movie itself.

This is not necessarily intended as a criticism, merely as a caveat to the implicit criteria which many will bring to bear when commenting upon the "worthiness" or the "accuracy" of the film. It seems to me that the film is, indeed, both worthy and accurate; but I have only the accumulated evidence of (predominantly) white-produced and -directed television documentaries to back up that assessment. There is no way for a white audience to break free of that circularity, short of putting the power of critical appraisal in the hands and mouths of those who know best whether such things are accurate - the Aborigines who are the subject of the film(s). I am not trying to suggest that "truth" can only come from the mouths of the subjects of a film or other artefact, just that they might well produce a very different sort of truth if given the opportunity.

To be fair, Blackfellas is aware of and goes some way towards addressing this issue. While its principal creatives are white, the film carries the imprimatur of being able to lay claim to the input of Aborigines on multiple levels. Archie Weller's novel, The Day of the Dog, is its source, and Weller consulted on the screenplay. Many Nyoongahs were reportedly involved in crewing on the film, and Ricketson and producer David Rapsey have commented upon what they considered to be the importance of leaving behind "a legacy of experience and knowledge in the Aboriginal community so that they will be able to produce and direct their own films". They are to be applauded for that.

There can be no denying that, in front of the camera, many of those in the predominantly black cast show considerable promise; John Moore gives a performance streets ahead of the one for which he garnered some praise in Deadly (Esben Storm, 1992), and David Ngoombujarra is always compelling, whether Floyd is stealing cars, playing football or squeezing out his last words in a pool of blood. Whether future rôles will exist for them and the others to fulfil that promise is another matter. The best guarantee that they do is to place the right to speak and make films about the subjects that matter to them in the hands of Aboriginal people.

VALERIE (LISA KINCHELA), "PRETTY BOY" FLOYD (DAVID NGOOMBUJARRA) AND DOUGIE (JOHN MOORE). JAMES RICKETSON'S BLACKFELLAS.

Further Reading
Archie Weller, "Films in Colour: or, Black and White Perspectives of Screenplay?" (re Day of the Dog [Blackfellas]), Cinema Papers, No. 87, March-April 1992, pp. 44-5.
"James Ricketson's Day of the Dog", op. cit., pp. 46-7.
John Harding, "Canons in the Camera", op. cit., pp. 42-3.

BLACKFELLAS Directed by James Ricketson. Producer: David Rapsey. Executive producers: Paul D. Barron, Penny Chapman. Scriptwriter: James Ricketson. Based on the book Day of the Dog by Archie Weller. Director of photography: Jeff Malouf. Production designer: Bob Ricketson. Costume designer: Ron Gidgup. Editor: Christopher Cordeaux. Composer: David Milroy. Cast: John Moore (Doug Dooligan), David Ngoombujarra ("Pretty Boy" Floyd Davey), Jaylene Riley (Polly), Lisa Kinchela (Valerie), Julie Hudspeth (Mrs Dooligan), John Hargreaves (Detective Maxwell), Trevor Parfitt (Tiny), Attila Ozsdolay (Silver), Judith Margaret Wilkes (Nanna), Ernie Dingo (Percy). Barron Films. Australian distributor: Barron Films. 35mm. 98 mins. Australia. 1993.

CRUSH
PAT GILLESPIE

ABOVE: LANE (MARCIA GAY HARDEN). ALISON MACLEAN'S CRUSH.

A woman without remorse or conscience, her embittered, crippled companion, a young girl and her emasculated father are the luckless characters in Alison Maclean's pseudo-feminist schlock-thriller, Crush.

While tension in the first half of the film is well sustained by Marcia Gay Harden's performance as the calculating, misanthropic and charismatic Lane, the plot dissolves into a B-grade melodrama during the second half, with a predictable and unconvincing dénouement.

Lane is an enigma; like a feral cat she lives on her wits, ruled by a hedonistic agenda. Stranded in an alien country after surviving a car crash, her first reaction is to steal her companion's diary and leave her to die in the wreckage. What follows is reminiscent of the cross-cutting in Nicolas Roeg's Bad Timing: in a series of quick mid-shots and close-ups, Lane soaks in a bath while her companion, Christina (Donogh Rees), is receiving emergency treatment.

Pitched against Lane's calm is chaos which now fills the mind of Christina, whose career has been destroyed by Lane's provocative behaviour. The cross-cutting reinforces the ambiguous relationship between them: Are they lovers or casual strangers? On one level, Lane's bathing, watching plump drops of water escape the faucet, is symbolic of her washing away her crime. In the interim, Christina, bathed in blood, wrestles with her life, spilt blood releasing a primal reaction, demonstrated by Christina's revenge during the film's second half. Despite Lane's attempts to wash away the past, symbolically she will forever carry the bloodstains.

Their relationship is sexually ambiguous. In the scenes leading up to the crash, there is a sense of tension and rivalry between the two women: Lane is the aggressor, who causes the accident by playfully fighting off Christina who wants to stop her reading her diary. The scene highlights Lane's need for control: Christina is on her way to interview a famous author, Colin (William Zappa); Lane resents her friend's success and sabotages it by fronting up to the author's home after the crash and makes a sexual play for him.

Sexual power games are the only way Lane can maintain control. At first she achieves this by wooing the author's daughter, Angela (Caitlin Bossley), feminizing her boyish looks by giving her a red dress and taking her out on the town. The young girl initially submits to this makeover, fascinated by Lane's strength and devil-may-care antics, but soon sours when Lane moves on to her father, a sex-starved writer whose sexual juices flow just as his artistic juices dry up.

The title of the film is a word play on Lane's ability to crush all she encounters. First it is her companion, whose career is cut short after the crash, when deprived of voice, leg and hand control. Then the young girl finds her intimacy with her father is crushed as he becomes more and more smitten by Lane. In the meantime, Lane, like a parasite, feeds off each person, growing colder and dismissing them as whim strikes her.

Red is used as a sexual symbol in the film, a power colour that Lane wears like a badge. In her tight skirt, racy leggings, leather jacket, beret and bright-red hooker lipstick, Lane is a garish, incongruous sight, wandering through the landscape with no purpose and no understanding. The film looks at her alienation and her ability to alienate; ironically, despite her girlie clothes, she is more of a venus flytrap, using her sex to tantalize and tempt but being indiscriminate and ruthless in her seduction.

Lane tries to woo the daughter by giving her a red "seduction" dress. It hangs uncomfortably on the young girl's undeveloped body. Spurred on by Lane's charisma, she wears the dress around the house, causing her father to look at her in a more sexual way. Later, forced to rival Lane for her father's attention, she shows her anger by refusing to wear the dress. The red dress on one level showed her equality with Lane, but later shows the daughter's loss of power - she now irritates her father whose growing fascination for Lane threatens his paternal relationship. A red dress is a power statement for Christina who dons it during the dénouement, symbolizing the shift in the balance of power between herself and Lane.

The stagey ending, Christina's revenge, poses some uncomfortable questions. Is Maclean saying women are predatory and mercenary, and deep down cannot trust or like their own sex?
Throughout the film, the women's relationships are fraught with tension and mistrust. Their one common link - the man - is a weak-willed, insipid character who is emotionally castrated by Lane's machismo. It is difficult not to see parallels between Lane as the mistress, Christina as the wife and Angela as the go-between in this weird love triangle.

While the first half of the film deals with primal lust and hedonism, the second is a morality tale about repressed anger and its consequences. However, the latter is unconvincing as Lane returns to the man like a drunk floozy suffering an attack of remorse. Her attempts to assimilate into New Zealand life are thwarted by Christina and the daughter. Christina confronts the couple by dropping by unannounced; her red dress symbolizes that she now has the emotional upperhand while Lane feels powerless to escape ordeal with Christina's fury. In the end, one feels sorry for Lane despite knowing that, like a parasite, she will continue to feed off her sources if not killed.

The film's awkward direction and editing is showcased during the dénouement, a walk in the bush in which Christina decides to take justice into her own hands. Too much time is spent building up to the moment of Lane's death. The psychological tension Maclean has built up throughout the film is prematurely dissipated during the ending; what started out as a promising exploration of the female sexual psyche runs out of steam.

CRUSH Directed by Alison Maclean. Producer: Bridget Ikin. Associate producer: Trevor Haysom. Scriptwriters: Alison Maclean, Anne Kennedy. Director of photography: Dion Beebe. Production designer: Meryl Cronin. Costume designer: Ngila Dickson. Sound recordist: Robert Allan. Editor: John Gilbert. Composer: JPS Experience, with additional music by Anthony Partos. Cast: Marcia Gay Harden (Lane), Donogh Rees (Christina), Caitlin Bossley (Angela), William Zappa (Colin). Hibiscus Films. Australian distributor: Footprint. 35mm. 100 mins. New Zealand. 1993.

THE NOSTRADAMUS KID
KARL QUINN

ABOVE: KEN ELKIN (NOAH TAYLOR). BOB ELLIS' THE NOSTRADAMUS KID.

If such a thing can exist, The Nostradamus Kid is an adult-oriented teen pic. Built on a coming-of-age premise, and full of outrageous behaviour, scatological and sexual humour, it appeals on a very broad - almost exclusively masculine - level. But above it all presides the narratorial voice of Bob Ellis, an established and recognizable figure on the Australian cinema literary landscape. By decrying the behaviour he has had such obvious fun delineating on screen, he makes it palatable for a more mature - and, hopefully, inclusively female - audience. It's a sleight of voice that is reasonably successful, though the laddishness of the film is so essential to its being that Ellis' laconic mea culpas seem grossly inadequate to winning back those many likely to be offended by the escapades of his youth.

In this "fictionalized autobiography", Ellis the younger becomes Ken Elkin (Noah Taylor), a tumbling, fearful, fairly repulsive youth on the verge of the great liberalization of Australian society that was the 1960s (or so the story goes). Flashing back and forth between Ken at a Seventh Day Adventist camp in the Blue Mountains in 1956, and Ken as a nineteen-year-old at Sydney University in the early 1960s, The Nostradamus Kid aims both to be highly personal, and to extrapolate from Ken's rather special experience to something like a universal, nostalgic appeal based on the shared tribulations of growing up.

At the camp, Elkin spends most of his time with Wayland (Erick Mitsak), convinced that the end of the world is nigh, but uncertain about exactly how and when it will happen. Just across the road from the Adventists' retreat, a dissenter calling himself The Shepherd's Rod (Peter Gwynne) has set up a rival camp and cult, boldly predicting the exact date and time of the long-awaited apocalypse. He is called a heretic, and his claims are branded ridiculous by Pastor Anderson (Arthur Dignam); but the more impatient amongst the Adventist young are attracted by the certainty - and presumably the promise of the rest of life free from the imminence of world destruction should he prove ill-informed - of a deadline.

Elkin is amongst them. So too is Meryl (Loene Carmen), a strong-headed and slightly wayward parishioner. Pastor Anderson's teenage daughters, Esther (Alice Garner) and Sarai (Lucy Bell), remain as unmoved by the rival dogma as they are by the ribald intentions of Ken and Wayland, would-be suitors and, on the eve of destruction, potential rapists - it occurring to the boys that they're hardly likely to suffer terribly as a result of their actions if there isn't going to be anyone around to punish them (in their testosterone- and fear-induced madness the possibility of divine retribution doesn't seem to cross their minds). In the end, though, they relent and make a mess of the toilet blocks instead, less out of a new-found respect for the temples of the Anderson sisters' bodies than a well-founded desire to hedge their bets.

By the time Elkin is at Sydney University, the religious fervour has become a distant, though still influential, memory. The Anderson sisters are long gone, as is Wayland. The most constant companion of the boy genius (for so we are meant to think him) is a rough-hewn country poet named McAlister (Jack Campbell), with whom he shares floor-space of a night at the offices of the student newspaper which Elkin now edits. Just why these bright young things should be homeless at a time when the word would hardly have had a meaning in the Australian lexicon is never made terribly clear, though admittedly neither ever seems to have much money. Nor do they seem particularly to like each other, their only apparent bond being an obsession with words, women and wine.

The woman most in the eye of Elkin is Jennie O'Brien (Miranda Otto), daughter of a newspaper proprietor and earmarked for marriage to Kerry Packer! For some unknown reason, she takes to Elkin and stays with him - off and on - despite his infidelity, complete absence of social graces, voyeurism, attempted rape and passing on of a dose of venereal disease. Not surprisingly, Jennie's father is aghast at the match. But it's not until Ken drags her off to the Blue Mountains to escape the imminent destruction of Sydney at the height of the Cuban Missile crisis of 1962 that Jennie finally decides enough is enough. She later marries McAlister.

That, more or less, is the narrative of The Nostradamus Kid, apart from a few scenes in which the fates of Elkin's fellow campers are revealed. Throughout, the mystery of Elkin's sexual attraction remains explainable only as a combined figment of Ellis' memory and imagination (Ellis himself attributed it, in an interview with this author, to body odour: "I think if you do not wash after the act of sexual intercourse women can smell it on you, and it excites them and you therefore achieve the next"). Depending on your viewpoint, Ellis' scenario and dialogue is either the stuff of reasonably sophisticated sexual comedy, or pure and fanciful conceit.
Ellis' major directorial conceit is to move back and forth between the film's two time-frames, with only slim expository need to do so. These two interlocking strands each follow a straight temporal line, gaining little from the disjunction which Ellis has effected. The film doesn't really gain either; the suspicion arises that its function is merely to make the film appear more complex than it actually is.

Much the same could be said of Ellis' voice-over narration, which captures perfectly the world-weary tone of one for whom every day since the deferred end of the world has been a disappointment. It seems intended to cast a condescending but fond eye upon the misdemeanours of Elkin/Ellis as a youth, as if to say, "He/I was a prat, but an entertaining one, don't you think?" There is in both the voice-over and the structure a suggestion of something like discomfort at having to turn material so intensely personal into something so public. If that embarrassment in fact exists, it may be a product of the distance between Ellis the scriptwriter and Ellis the director - a distance of some thirteen years (Ellis reportedly wrote the script at the suggestion of David Puttnam who, having heard Ellis tell the story of his youth as a Seventh Day Adventist, pronounced that it would make a very good film and gave him an advance to write a screenplay; Puttnam gets a special "thank you" in the final credits, although his role as producer ended long ago) and, according to Ellis, the screenplay which was filmed was virtually unchanged from the original. It seems strange that he should have made this film now; it is his third outing as director, and his undeniably impressive writing credits date from the late 1960s, which include Newsfront (Phil Noyce, 1978), Man of Flowers (Paul Cox, 1983) and Goodbye Paradise (Carl Schultz, 1983)¹, yet The Nostradamus Kid is in subject matter and tone rather like a writer-director debut.

This is not to say that the film should not have been made. The tone of embarrassment which I detect (of course, it could be a projection of my own embarrassment, in recoil from certain similarities of behaviour and attitude between my remembered self and Ellis' remembered Elkin) actually helps the film, deflating the sexual braggadocio that might otherwise have seemed to validate some of Elkin's more odious behaviour. Not that Ellis has gone so far as to take up the position of chief accuser of his own sexually not-very-correct past (his present, of course, remains on trial). Rather, he seems as bemused and amused by the fact that he apparently got away with it as any audience is likely to be.

Lest anyone gain the impression that The Nostradamus Kid is so uniquely about Ellis that it could not possibly hold any appeal to any bar the most avid Bob-watchers, be assured that it will go down in Australian cinematic history as something of a hybrid between the David Williamson-style exposé of our culture through our sexual mores and appetites, and the John Duigan school of politically-aware yet highly personal nostalgia (the Duigan similarity, it should be noted, transcends the mere casting parallel of Noah Taylor and Loene Carmen). The result is extremely entertaining and highly uncomfortable; but whether this blend signals a way forward for Australian cinema or a mere stop-gap is …

1. Ellis is not credited as the scriptwriter of Newsfront (only Noyce is), though there is an end-credit acknowledgment for "Based on a screenplay by Bob Ellis". Man of Flowers was scripted by Paul Cox; Ellis is credited with "dialogue". Goodbye Paradise was co-written with Denny Lawrence. (Ref. Australian Film 1978-1992: A Survey of Theatrical Features.)

Further Reading
Andrew L. Urban, "Bob Ellis' The Nostradamus Kid", including interview with Ellis, Cinema Papers, No. 86, January 1992, pp. 12-7.

THE NOSTRADAMUS KID Directed by Bob Ellis. Producer: Terry Jennings. Executive producers: Roger Simpson, Roger Le Mesurier. Scriptwriter: Bob Ellis. Director of photography: Geoff Burton. Production designer: Roger Ford. Sound recordist: David Lee. Editor: Henry Dangar. Composer: Chris Neal. Cast: Noah Taylor (Ken Elkin), Miranda Otto (Jennie O'Brien), Jack Campbell (McAlister), Erick Mitsak (Wayland), Loene Carmen (Meryl), Alice Garner (Esther Anderson), Lucy Bell (Sarai Anderson), Jeanette Cronin (Christy), Arthur Dignam (Pastor Anderson), Colin Friels (American Preacher). Beyond Films presentation of a Simpson/Le Mesurier production. Australian distributor: Ronin. 35mm. 120 mins. Australia. 1993.

THIS WON'T HURT A BIT!
RAYMOND YOUNIS

Dentistry, it seems, is second only to psychiatry as far as nervous breakdowns and suicides are concerned - and, one presumes, this also applies to the patients! If this is indeed so, why, one must ask, would any person in his or her right mind, whatever that may be, choose to become a dentist? Well, this is one of the questions that is answered in the film. In any case, the phenomenon of dentistry and, especially, the abject horror and extreme panic which it engenders even in the most fearless individuals, is a worthy subject indeed for a film script.

Evelyn Waugh claimed in an interview that for pleasure of the physical kind, he preferred to visit his local dentist - a perverse sentiment that this film captures nicely. And S.J. Perelman's words are appropriate too: "For years I have let dentists ride roughshod over my teeth; I have been sawed, hacked, chopped, whittled, bewitched, bewildered, tattooed, and signed on again; but this is cuspid's last stand!" (Crazy Like a Fox)

This film also explores the life of a man who takes up sawing, hacking, chopping, whittling and so on, but, mercifully, not on these shores. This is the story of "Dr." Fairweather (Greig Pickhaver) - a clever, punning, ironic and not-so-ironic name - who leaves the dust of Wagga Wagga to study at Sydney University. He initially chooses Australian history and poetry. Clearly, this is no ordinary young man. Nor is he a genius. A five-year dentistry course stretches over ten, and Fairweather, somewhat clouded over, is sent off to an asylum and, once out, decides to practise as a dentist. He establishes himself in Portsmouth, England, and armed with the Oxford Handbook of Clinical Dentistry begins whittling, drilling and pulling away on the molars of the oblivious Poms. Soon, the authorities become rather suspicious since he is seeing more than 100 patients a day, and insists on taking x-rays of everyone, even those who no longer have teeth!

The inquiry into his practice proceeds, even as his bank account swells enormously. This point is made with economy and humour: he arrives in the town with a bicycle, buys a moped, then a Rover, before he pays cash, first, for a Jaguar and then a Rolls Royce. Even the manager of the local bank enjoys special treatment because Fairweather has no knowledge of term deposits and leaves his money in low interest-bearing accounts. Fairweather falls in love, is found out as a charlatan, and flees to Hong Kong, where he is captured by a loud Australian detective and his sidekick. The film begins here.

The story is a relatively straightforward one but it is told in an interesting and fragmented manner. The use of a non-chronological narrative technique is used skilfully to convey the life of a man whose existence is itself a series of abrupt beginnings and ends. It soon becomes clear in the film that his competence is questionable, to say the least, despite the fact that his …

LEFT: RILEY (DENNIS MILLER) AND GORDON FAIRWEATHER (GREIG PICKHAVER). CHRIS KENNEDY'S THIS WON'T HURT A BIT!.
CINEMA PAPERS 96 . 51
patients make a point of returning to him, and, moreover, of singing his praises. In this respect, their testimonies are contrasted with those of the dentistry teacher, the owner of an Indian restaurant, a young woman and a chap from Wagga Wagga, among others. This is one of the film's strengths: it soon emerges that we cannot really rely on most if not all of these people. The film, it seems, is not just an exploration of a peculiar man - who is perhaps insane - his peculiar profession and peculiar patients, but also of the perils and pretensions of certain types of documentary filmmaking.

For example, we are told, supposedly by a dispassionate observer, that Fairweather is a character who prefers to fade into the background, but subsequent events, such as the progression from bicycle to Rolls, do not reinforce this view. We are told, for example, by Fairweather's neighbour, the affable restaurant owner, that the dentist is a reasonable fellow, but the dentist's rather liberal approach to cavities, bridges and dentures, not to mention the x-rays, wild stories about "Orr-stralia" and the manic look that sometimes appears on his face, tend somewhat to undercut this claim. The young woman who describes him as a man with greasy hair and a big nose is also difficult to believe. Even Fairweather does much to contribute to the reader's puzzlement: if we believe him, or, to be more precise, the accounts of what he says to one of his patients, then "Orr-stralia" is a country that is constantly ravaged by disasters that are no less serious than the ravages that are going on within the dentist's surgery! "Orr-stralia" emerges as a country which is in turn overcome by drought, fire and then the crown starfish. (The land, forests and reefs, no doubt, offer correlatives of the teeth which are systematically attacked ...)

The film is also a somewhat philosophic exploration of the motives that drive such a dentist. One interesting theory, which is neither affirmed nor negated explicitly by Fairweather, is that dentistry is one way of getting back at the Poms for leading his ancestors to their deaths during the Great Wars. It is striking that many of his patients are older patients. This, though, is clearly not meant to be taken seriously. Fairweather, decent fellow that he is, theorizes that it is the loneliness that brings these patients back - and this theory does sound convincing when one sees the types of people who return. If this theory is intended to endear the dentist to the viewer, it succeeds.

This is a clever, witty film in which many of the pleasures are small but notable. There are puns on words and accents, eccentric characters and memorable situations. One might complain that the film is not really funny enough for a comedy - and judging by the audience at one screening, the pleasures were somewhat too few and somewhat too small for most - and that the pacing is not quite right. But the strengths are numerous: the script has more than enough strange characters, puns, jokes and turns to keep the viewer interested; the playing is uneven, but there are some convincing (and very funny) performances, from Adam Stone as the bank manager, and especially from Jacqueline McKenzie as the wife-to-be, Vanessa, Patrick Blackwell (her ravaged father) and Maggie King (the rather boorish and imperious mother). The film is also an attractive plea for happiness and liberty, particularly in relation to two more or less odd characters who find the courses of their lives converging in spite of the considerable forces that are intent on preventing the union. The optimism that the film offers with regard to a so-called lunatic, and a daughter who is subject to a domineering elder, is both welcome and admirable.

THIS WON'T HURT A BIT! Directed by Chris Kennedy. Producer: Patrick Fitzgerald. Co-producer: Chris Kennedy. Scriptwriter: Chris Kennedy. Director of photography: Marc Spicer. Art director: Ken Muggleston. Wardrobe: Ruth Bracegirdle. Sound recordist: David Glasser. Editor: Peter Butt. Composer: Mario Grigoriv. Cast: Greig Pickhaver (Gordon Fairweather), Jacqueline McKenzie (Vanessa Prescott), Dennis Miller (Riley), Maggie King (Mrs Prescott), Patrick Blackwell (Mr Prescott), Gordon Chater (Professor), Alwyn Kurts (Psychiatrist), Colleen Clifford (Lady Smith), Peter Brown (Railway Friend), Fiona Press (Old Girlfriend). Oilrag Productions. Australian distributor: Dendy Films. 35mm. 83 mins. Australia. 1993.

THE WEDDING BANQUET

CHRIS BERRY

The Wedding Banquet won the Golden Bear at Berlin this year, but nothing I heard about this cross-cultural gay farce before seeing it piqued my appetite. Taiwanese Wai-Tung (Winston Chao) lives in New York with his American boyfriend, Simon (Mitchell Lichtenstein). His parents don't know he is gay and keep pressuring him to marry. In an effort to satisfy everybody, he gets hitched to Wei-Wei (May Chin), a mainland Chinese woman who needs a green card. When his elderly and infirm parents decide to attend the wedding, the fun begins as Chinese and Western values collide to the merry orchestration of a Latin American tango soundtrack.

In The Player (Robert Altman, 1992), they might have pitched this as Guess Who's Coming to Dinner (Stanley Kramer, 1967) meets La Cage Aux Folles (Edouard Molinaro, 1978) by way of Betsy's Wedding (Alan Alda, 1990). It sounds like a cinematic disaster as well as a social one, and about as appetizing as the beef stewed in liquorice I was served once in Beijing. But the beef turned out to be pretty tasty, Wai-Tung somehow does manage to satisfy everyone in the end, and The Wedding Banquet succeeds against all the odds. Admittedly, there are a host of small problems that might disturb the politically-correct thought police. But the film nimbly negotiates the fine line between farce and sentiment to create a dish that is easy to swallow but leaves an interesting aftertaste. No wonder it has been a hit across Europe and the U.S. as well as in Taiwan.

Director Ang Lee manages to get the right balance of sweet and sour with the help of a secret ingredient: the old Chinese melodrama of the 1940s. The Taiwanese New Wave directors of the 1980s like Edward Yang (A Brighter Summer Day) and Hou Hsiao-Hsien (Beiqing Chengshi (A City of Sadness) and this year's Venice winner, Ximeng Resheng (The Puppetmaster)) drew on the art film to make their mark. But Ang Lee returns to an older Chinese tradition to give us another face of Taiwanese cinema. The result may be less cinematically flashy and even appear mainstream, but one should not ignore the subtle depths of the script and the hidden implications of the actors' unspoken glances that underlie the frothy surface. Chinese melodramas from the 1940s like A Spring River Flows East, Myriads of Lights and [...]

(CONCLUDES ON PAGE 63)

WEI-WEI (MAY CHING) AND WAI-TUNG (WINSTON CHAO). ANG LEE'S THE WEDDING BANQUET.
BOOK REVIEWS

THE FILMS OF WOODY ALLEN
Sam B. Girgus, Cambridge University Press, New York, 1993, 146 pp., rrp $25(pb), $80(hb)

ANNA DZENIS

The Films of Woody Allen by Sam Girgus is one of the Cambridge Film Classics series. The films of Woody Allen may be classics, but this book certainly is not.

Girgus explains in his preface that the study was finished and in page proofs when the stories and publicity about Allen's personal relationships and domestic turmoil broke. Not being one to miss an opportunity, however, Girgus suggests that all of the sensationalist, media-driven publicity surrounding the "breaking story" in fact dramatized how important Allen and his films have become to our critical and cultural consciousness; hence by implication, how important and necessary is this book. Exactly how this personal and public tragedy might have influenced the writing of this book, which purports to study the films of an artist, is fortunately left to our imagination.

At the heart of it, Girgus comes across as a classical auteurist. In the opening pages he insists Allen's work should be studied with the same close attention given to other serious artists and writers. He suggests that few detailed studies of the "artistry" of the "individual films" have appeared, and it is his intention to redress this situation.

Girgus, however, does not investigate the entire oeuvre. Rather, his study traces what he describes as the evolution of a maturing artist whose work evidences ever-increasing complexity. The cycle of films from Play it Again, Sam through Annie Hall, Manhattan, Purple Rose of Cairo, Hannah and Her Sisters to Crimes and Misdemeanours easily supports this case for the artist growing from strength to strength. But this neat, overly-simple summation ignores the more quirky, partial, uneven, eclectic journey through a diverse output that gives perhaps an artistically more interesting, more truthful, sense of the work and career of Woody Allen. Girgus' pre-determined, simplistic vision of the complexities of artistic creation cannot accommodate this.

Girgus' method of analysis submits the films to what William Rothman calls "a reading of the sequence, moment by moment". It's his stated intention to also apply contemporary critical theory, specifically psychoanalysis, feminism and semiotics, to the reading. The theoretical net is cast wide. Sigmund Freud, Julia Kristeva, Jean-Louis Baudry, Jacques Lacan, Teresa De Lauretis, Roland Barthes and Mikhail Bakhtin all get a guernsey. The trouble is the result rarely transcends either the opportunistic or the circumstantial. There is no sustained analysis. It remains descriptive, metaphorical, remote.

Here are some examples. The opening sequence of Play it Again, Sam is described as a moment of "split subjectivity" - "a semiotic, presymbolic phase of development". Alan Felix's experience in the theatre is described as an "almost perfect dramatisation of Jean-Louis Baudry's poststructuralist theory of the psychoanalytic dimension of cinema". Manhattan is described as Bakhtinian: "Bakhtin's emphasis on utterance and the social context of voice that imbues a complexity of meanings to speech and words relates to Allen's penchant as a director for voiceovers and the separation of bodies from speech, as well as his own dialogic technique of overlapping speech and words together." For Girgus, Allen also "typifies Bakhtin's concept of the 'carnivalistic', which concerns the annihilation of rigid boundaries in communication and human relationships".

This is all a kind of a gesturing towards "theory". It is theory as a "value added" commodity, a criterion of value, or glib evidence of cultural worth. Because Girgus interprets and evaluates Allen's films through such theoretical posturing, Allen's status and worth is seen to be even further elevated. It allows him to argue, as he does, that Allen is "on the cutting edge of contemporary critical and cultural consciousness". This is "theory as theory" rather than a tool or product of critical analysis. One should always be suspicious when a work of art is judged to be worthy solely because it can be neatly slotted into something as provisional as a theory. Theory itself is constantly changing as it tries to accommodate our changing responses, attitudes, observations or comprehension.

It would probably come as no surprise to discover that Girgus' background is in literature. There's little evidence of a visual grasp or understanding of the cinematic canon. There are numerous comparisons made to writers such as Philip Roth, E. L. Doctorow and Mark Twain. Also Ike's story and actions in Manhattan are frequently compared with Jay Gatsby in F. Scott Fitzgerald's The Great Gatsby. The filmmakers whom Girgus cites as having been influenced in major ways by Allen and his films are Rob Reiner and Spike Lee - an odd couple to say the least.

Added to this, what constitutes "visual inventiveness" for Girgus are those moments which can be interpreted symbolically. In Annie Hall, "evil is the lobsters crawling around the floor and behind the refrigerator". These tend not to be those sublime images or poetic sequences in Allen's films that are "purely cinematic". Instead, Girgus is particularly engaged by the appearance of Marshall McLuhan when Alvy and Annie stand in a movie line - a memorable sight gag but not a moment of great "visual inventiveness".

While I recognize that it is part of the struggle of the writer to find the right word, the most evocative metaphor, I did not find it particularly illuminating to read that the Cinemascope screen of Manhattan had come to be called the "D-screen" because "it decenters, displaces, dislocates and distorts." It also seems overly reductive and simplistic to interpret the sensuous, panoramic Manhattan images in the following way: "Tops of heads disappear, obviously indicating mindlessness, and legs are fractured, suggesting a group of truncated grotesques." They are films of far greater artistic subtlety and innovation than this analysis suggests.

Girgus' aspiration to a "textual erotics" is clearly not to be found in his analysis of the images. There are, however, moments when this study does come alive, and that is when attention is paid to the characters and their conversation - to Annie and Alvy, Ike and Tracy, Hannah and her sisters. Girgus quotes dialogue and conversation quite extensively and it is this, finally, the fabric and texture of quotation, that I found most significant and engaging, retracing the paths through my memories of the films. And so you read, and discover:
"... I ... I ... I just met a wonderful new man. He's fictional, but you can't have everything." (Purple Rose of Cairo)

or Hannah asks, "Could you have ruined yourself somehow? As a result, for example, of excessive masturbation?" Mickey responds, "Hey, you gonna start knocking my hobbies? Jesus." (Hannah and Her Sisters)

or "Well my book is about decaying values. It's about ... see, the thing is, years ago, I wrote a short story about my mother called 'The Castrating Zionist'. And, um, I wanna expand it into a novel." (Manhattan)

or "The heart is a very resilient muscle ... It really is." (Hannah and Her Sisters)

and so on.

Despite these bright passages, in the end I'm left not being sure who this book is really written for. It's not a gossipy exposé, full of tantalizing hypotheses and innuendoes, nor is it a serious, consistently developed theoretical study. It is, however, full of wonderful funny old gags.

LITERATURE/FILM QUARTERLY: THE AUSTRALIAN CINEMA
(Volume 21, No. 2, 1993)
Edited by Brian McFarlane, Salisbury State University, 1993, 169 pp., pb, rrp $12

JOE STEFANOS

The current issue of the American publication Literature/Film Quarterly (Vol. 21, No. 2) is an Australian cinema special edited by Brian McFarlane. Dr McFarlane is well-known in these pages and teaches film and English literature at Monash University. Literature/Film Quarterly may be less known to Cinema Papers readers: it has been publishing for 21 years; its base of readers and contributors are those who work in English Lit departments and are interested in film as well; its bread-and-butter format over the years has been the comparison of films to the literary works (most often novels) upon which they have been based.

Only twice before have issues been guest-edited - kudos to Brian McFarlane. Even more to the point, while the existence of the issue affirms a continuing serious interest in Australian cinema in the U.S., McFarlane has parlayed it into an opportunity to showcase a variety of Australian writing and scholarship to good effect.

Literature/Film Quarterly is a middle-of-the-road academic journal, not much interested in the cutting edge of what's-happening-now theory (until it has become part of the curriculum), nor in that vein of American film commentators who choose not to present their expertise in academic essay format (J. Hoberman, Jonathan Rosenbaum, etc.). Given that, McFarlane seems to address the collection to American readers rather than Australian specialists; little here will surprise in-country followers of our film culture, but it is a lively declaration of our mainstream activity. (The next step might be the guest-editorship of a Northern Hemisphere journal featuring a range of our harder-to-characterize thinker-stylists, not necessarily writing about Australian film.)

Choices must be made: the issue deals with Australian film after 1946, and the films dealt with are theatrical fiction films of feature length. As might be expected, many of the pieces in this collection deal with the adaptation of films from literary sources.

The sequence of articles works well. The first, Bruce Molloy's survey of Australian feature film 1946-74, fills out details of production prior to the explosion of activity generally associated with the rise of nationalism and the Whitlam Government's sponsorship - a critical mass waiting to transform. Next is McFarlane's overview of literary adaptation as the major form of production from the mid-1970s; he makes distinctions about the sort of literary works Australian cinema chose to adapt in the period and suggests that these choices may have limited formal innovation. Graeme Turner's "The Genres are American: Australian narrative, Australian film, and the Problems of Genre" expands the discussion beyond individual works to consider Australian relations with American genres in terms not only of industrial, but also of cultural, survival. Geoff Mayer looks at Goodbye Paradise and The Empty Beach in terms of the American hard-boiled writers Raymond Chandler and Dashiell Hammett; the piece helps me understand why I prefer Goodbye to Empty.

Rose Lucas reads Dead Calm well as psychoanalytic family romance. Ina Bertrand's "'Woman's Voice': the autobiographical form in three Australian filmed novels" is an elegant condensation of narrative and psycho-narrative arguments about voice(s) sliding from print to film.

Lorraine Mortimer's study of 'Breaker' Morant, Sunday Too Far Away, the Mad Max films and the idea of community operates via a tough-minded resistance to received ideas about how we read films and how we conceive nationality. Her piece, most dramatically, expresses the view running through the collection that cultural, social and political approaches to a national cinema are not, and cannot be, simple. Finally, Stephen Crofts continues his research into Crocodile Dundee, in this instance looking at cultural differences in the film's reception abroad.

For the first time, I've read an issue of Literature/Film Quarterly and wished the articles were longer. I am also happy that Literature/Film Quarterly continues to set its type tightly; so many film journals these days assume their readers need a larger-print edition.

SONDHEIM & CO
Craig Zadan, Nick Hern Books, London, 1990, 2nd Edition, Updated, 454 pp., pb, rrp $34.95

ART ISN'T EASY: THE THEATER OF STEPHEN SONDHEIM
Joanne Gordon, Da Capo Press, New York, 1992, 2nd Edition, 364 pp., pb, rrp $29.95

SONDHEIM
Martin Gottfried, Harry N. Abrams, Inc., New York, 1993, 193 pp., hb, $89.95

RICHARD FRANKLIN

I feel I should justify the review here of three books about musical theatre and (to quote the satirical review Forbidden Broadway) its "demigod" Stephen Sondheim. First, let me say that as someone who grew up in the era of the Arthur Freed-MGM musical (my first film was Lili), film and musical theatre have for me always been inextricably linked. And the dearth of modern film musicals is nowhere more lamentable than with ground-breaking works like Sondheim's Company, Follies, Sweeney Todd and Into the Woods - all of which would certainly have been filmed in another era.

Second, by way of establishing Sondheim's screen credentials, let me give a brief (reverse) chronology:

a) He and William Goldman have just completed the screenplay for Rob Reiner of an original screen musical entitled Singing Out Loud, about the making of a film.

b) He won the Best Original Song Oscar in 1991 for the Madonna song "Sooner or Later", a pastiche of the Arlen-Gershwin-Judy Garland Oscar winner "The man that got away" (and I suspect a wry comment on Warren Beatty's proclivities).

c) He wrote the scores for Warren Beatty's Reds and Alain Resnais' Stavisky, and the song "I Never Do Anything Twice" for Herbert Ross' The Seven Percent Solution.

d) He and Anthony Perkins wrote the screenplay for Herbert Ross' earlier The Last of Sheila, based on a murder mystery party held in Manhattan by Sondheim and Perkins, which was also the basis of Anthony Shaeffer's play and film Sleuth (originally titled Who's Afraid of Stephen Sondheim.)

e) His shows West Side Story, Gypsy, A Funny Thing Happened on the Way to the Forum and A Little Night Music have all been filmed - but all are poor facsimiles of the originals (even the Academy Award-winning adaptation of the first mentioned is not to Sondheim's liking).

f) There are television versions of Sweeney Todd, Sunday in the Park With George and Into the Woods which are somewhat more faithful representations of Sondheim's art. He also did an original television musical in 1966, entitled Evening Primrose.

g) Before his Broadway debut as lyricist for West Side Story at 26, Sondheim the enfant
pendix that includes cut songs, num
Gottfried’s Sondheim is again of the coffee-
bers of perform ances, ,etc., Zadan
table variety and its colour stills alone would
chronicles the blow-by-blow evolution
make it worth the purchase price to any fan of
of each of Sondheim’s shows. Whether
musical theatre. But its text also qualifies it as
or not you know the shows, this book is
the best book to date on Sondheim.
to Broadway what Frank Capra’s auto
Proceeding chronologically show by show, it
biography is to Hollywood - definitive.
is both a behind-the-scenes account and a criti
Particularly fascinating is the chapter
cal analysis of each. The fact that it is therefore
about the fraught last Sondheim-Harold
less detailed on either front than the other two
Prince collaboration on the reverse
books is, I feel, more than compensated for by
chronology M errily We Roll Along which
the overview offered in its introductory chapters
Zadan entitles enigmatically “ It’s Still
(on which Sondheim has clearly collaborated).
Backwards” .
Sunday ■■ ■ Into the Woods in the Park ; with George . M A R T IN G O T T F R IE D
A ssassins
terrible wrote ten episodes of the Toppertelevision series. h)
Sondheim is a considerable film buff: A
In “ Portrait of the Artist as a Young Man” , we
The title of Joanne Gordon’s A rt
glimpse for the first time fragments of six com
Isn ’t Easy comes from the Sondheim
plete Sondheim shows which pre-date the
lyric “Putting it Together” from Sunday
unproduced Saturday Night (a backers’ audition
in the P ark With George. Barbra
of which prompted Bernstein to employ him on
Streisand sang this song (with Sydney
West Side Story). The evocation of the summer
Pollack and David Geffen) as the title to
of 1950 at the Westport Connecticut County
her mega-hit “Broadway Album” and
Playhouse, as apprentice stage-hand Mary
also at the 1986 Academy Awards to
Rodgers listens with a teenage crush on 20-
introduce the Best Director award. So
year-old “Steve” , as he plays his score for Mary
it’s not much of a stretch to apply it to the movie business:
Hammerstein’s only student wrote for her fa
Art isn ’t easy,
ther’s partner - is truly spellbinding.
P oppins, the th ird of fo u r show s O sca r
Even when yo u ’re hot,
The chapter “The Crafts of Lyrics and Music” ,
Advancing art is easy,
perhaps inevitably for a written text, tends to
Financing it is not,
favour the former “craft” or “elegant puzzle” as
Little Night Music \s an adaptation of Bergman’s
A visio n ’s just a vision,
Sondheim characterizes the art of the “lyrist” (he
Smiles o f a Summer Night and he is currently
if it’s only in your head,
once said the word has too many syllables). But
turning Ettore Scola’s Passion D’Amore into a musical. On this subject, it rivals even Hammerstein’s text (Lyrics, Oscar Hammerstein II, 1949, revised 1985, with introduction by Sondheim, Hal Leonard Books, Milwaukee).

When asked why he wanted to write (and update over and over) a book about Sondheim, movie executive Craig Zadan said that the term “genius” is so bandied about in Hollywood that it was refreshing to write about “the only true genius I’ve ever met”. In my travels, I’ve encountered three (the other two being Orson Welles and Jerry Goldsmith). Tony Perkins introduced me to Sondheim during the making of Psycho II and, after attending a preview, he responded in kind by inviting me to a workshop of Sunday in the Park With George (which went on to win him the Pulitzer Prize). I have been fortunate enough to correspond with him and follow the evolution of all of his shows since.

Sondheim & Co is the equivalent of a “backstage” musical. First published in 1974 as a sort of companion piece to the so-called “Scrabble Album” (Sondheim – A Musical Tribute, a collector’s piece for years and now available on RCA CD), Zadan had co-produced the 1973 benefit from which it derived (which also inspired Side by Side by Sondheim, the first of a slew of review shows of Sondheim’s work).

Zadan attempts little critical analysis, but with liberal quotes from Sondheim and a Who’s Who of collaborators (the “& Co” of the title), he follows a career that spans the history of modern musical theatre. And Sondheim’s credentials are impeccable – from his tutelage by surrogate father Oscar Hammerstein (he was living with the great librettist, his Australian wife and family while he was writing the watershed Oklahoma) to his appointment in 1990 as the first Professor of Contemporary Drama at Oxford. With amazing attention to detail and an Ap-

If no-one gets to see it,
it’s as good as dead,
It has to come to light!

The song goes on to argue that the politics of cocktail parties are not only necessary, but a part of the artistic process, which would suggest another behind-the-scenes book. But Gordon’s is a critical work – the first such analysis of Sondheim’s shows. It reads like a Master’s Thesis (though its liberal peppering with Sondheim lyrics makes it anything but dry). First published in 1990, it has already been revised (1992) to include his most recent work, Assassins. While the analysis elsewhere is adequate, I feel it regrettable that the discussion of this, Sondheim’s latest and bravest show, all but misses the point. Perhaps no one who lives in the USA, save Sondheim, can face the brutal reality that their “rights” and “dream” have been pursued equally by the mad and the damned.

But the best book on Sondheim is the newest. Martin Gottfried, author of a monstrous coffee-table epic entitled Broadway Musicals and its slimmer sequel More Broadway Musicals (for the same publisher), has paid more than fleeting homage to Sondheim before. Both books contain chapters on Sondheim – the former contains a priceless five drafts in Sondheim’s hand of the lyric of “Send in the Clowns” and concludes that the fate of the musical is entirely in his hands (the late Alan Jay Lerner in The Musical Theater saw more of an apocalyptic battle between art and commerce as represented by two men who ironically share the same birthday – Sondheim and Andrew Lloyd Webber).

I am not alone in considering Sondheim one of the two or three most important people currently writing for the theatre (musical or otherwise). Nor in observing that he has taken the musical so far that the downside of acquiring the taste for his work is that it becomes increasingly difficult to sit through the shows of others (past or present). But to anyone with even a passing interest in theatre, art or the creative process, I cannot commend one or all of these books (or one or all of Sondheim’s shows) too highly.

56 • CINEMA PAPERS 96
Jason Donovan, continued

I don’t think I’ve had “hassles”. The public perceives them as hassles, but they are not hassles in the slightest. “Obstacles” is probably a better word.

But do such experiences make one stronger?
Absolutely! And, to a certain degree, this business makes you that way, regardless. When you have to stand out in the middle of the street and kiss someone, as we did today, that requires a lot of going into yourself. You have to forget the rest of those people and just concentrate on what you have to do. You can’t really extend yourself when you are being constantly watched, when you’re watching yourself as you go out at night. It builds up an immune system. I don’t know whether that is good or bad, but it thickens your skin. It makes me feel like I’m a much older person than I actually am.

In terms of disciplines on yourself?
Actually, it is probably the opposite. It makes me want to go out there and ... To a certain extent, I’m a different person to how I am perceived by an audience. I’m probably just a bit looser, and more relaxed. I’m not an overtly crazy person by any means. I do enjoy a sort of private rebelliousness, but not in public. I’m not one of these people who comes and throws off the dust and says, “I’ve got to have this, this and that.” I think the star system is really overrated and my taste has actually pushed me further the other way. You also get pushed back a lot in this business. A lot of people only see success; they don’t realize it hasn’t all been uphill every step of the way. I mean, Mel Gibson’s made some pretty crappy films, but you don’t remember those; you remember the hits.

And nobody knows about the films that didn’t get up.
Exactly. And, to be honest with you, those projects had great artistic strengths. But it’s the old story: it’s hard to find money for taking that extra step. Investors want returns and, when someone dies at the end of the film, it’s not a great pitch to the punter, is it?

Would you again like to act and sing in the same film?
Not necessarily. I’ve always sworn to myself that I’d never combine acting and singing together, and here I am. But I think there is a market out there for films and productions that involve these two things. You don’t see that as much as you did 20 or 30 years ago. It’s not that this film is a musical. It just has music in it. I got into the business because I enjoyed that. The success I’ve had, or what’s happened to me as a result, has just been an added bonus, really.

What sort of roles would you now be interested in?
It’s hard to say what particular things. I’m just interested in things that extend myself. Obviously, I’m associated with a sort of romantic type of image.

Would you like to switch that to more dramatic parts?
Yes. I did a short film for the Royal College of Art last year in London. That was basically a voluntary film. That was great because I played a character totally opposite to what people see me as. I really enjoyed it. It was something to do without pressure, without money, without criticism. I could go as far as I wanted to and not be too worried about whether I was making the right or wrong career move. That was good for me, definitely.

Do you have plans beyond your return to Joseph after filming?
No. But now I’ve had a taste of this, I know this is what I want to do more of. I feel very relaxed behind the camera.

You mean in front of the camera!
Yes. [Laughs.] I feel a lot more at ease than I probably anticipated, which is a good thing. I think theatre has given me that expression. I’m not an extroverted person in my personal life. I don’t run outside and try to attract attention. I’m terrible at telling jokes. I’m not the centre of attention at a party. Acting has always taken me out of that shell. Joseph has been a good thing, too, in the sense that it made me extroverted for the two hours that I needed to be. And now with film, I have also had to learn how to pull that theatre training back a little bit. I’m finding that a nice balance.

Richard Stewart, from page 20

Stuart Cunningham and Liz Jacka mention in a companion article an April 1992 Peat Marwick Mitchell report which concluded that foreign productions in England had at best minimal benefit for the local film industry. Is that something you are familiar with?
I haven’t read the report, but I’m interested in reading it. I try to read everything I can on the subject, because it’s a damn controversial subject. We have taken a particular stance, and there are times when a stance has to be questioned, either to re-affirm your own line, or bring it into question. It doesn’t pay to move out of touch with the realities of the world. When it comes to broadcasting policy, you have to look globally, because Australia is a trading nation. There are issues that relate specifically to what we can and can’t do in relation to protection, because what we are really talking about here is protection. It’s the same issue that relates to the Media Alliance’s insistence on American Screen Actors Guild rates of pay for Australian actors. That’s another area we are on public record as objecting to. We think it’s discriminatory. This “Better Rates” policy is just bizarre. It has no moral justification that I can see, whatsoever. In a GATT environment, we really need to re-assess a lot of traditions of our industry. I’m going down to a conference in relation to that in Melbourne towards the end of the month, where those issues will be once again re-examined. In terms of the global view of Australia, and in terms of federal government policy as it relates to trade, I don’t think we are too far off the beam in suggesting that a change of policy wouldn’t be out of touch with broader trade issues relating to Australia at the moment.

Given what you are saying, Film Queensland differs from the other state bodies in taking vocal positions on various issues. Other bodies may well have positions on issues without actively promoting them. Do you see such forthrightness as necessary to being an active stimulus to the film industry?
I do. If an organization is interested in being recognized as an organization in its totality, then it needs to have views and policies on a whole range of film matters. I don’t consider that we ought to stop short at just having a policy in relation to script development or something else that film offices have traditionally had a policy on.

They probably do, even if one doesn’t know what they are.
Exactly, and it’s better that they are known, so it’s clear to all. The fact of the matter is that we have never resiled from making our position quite clear on a whole range of issues. And I think that’s good.

Likely Queensland production slate (with comments by Richard Stewart)
• Good Night Irene (Gerard Lee).
• Package of four feature films being done back-to-back by Ian Coughlan and Jim Dale. Ian is a Cairns-based writer who wrote all four projects and will direct some of them. Jim Dale is the producer and runs a Sydney-based company, Media Cast.
• Over the Top with Jim, which has received an ABC pre-sale and is based on the Hugh Lunn story.
• Beyond will produce a television series up here.
• There are two feature films that we have developed in a package: White Eyes and Double Negative, which will be with Portman.
• Rosa Colosimo’s picture Red Rain will restart. I feel very confident about that.
• Jonathan Shiff will continue Ocean Girl, and he has told me he is doing another series up in Port Douglas.
• The Studios will start on the second series of Paradise Beach and will do at least another one or two Village Roadshow projects, such as Fortress 2. Jenny Hooks [of Film Victoria] was mixed up when she said it was an American project in a recent SPAA newsletter; the copyright is owned by Village Roadshow.
• There is also another mini-series which I think will be done here by Village Roadshow.
• There is another television series soon to go into production, Phil Bowman’s Troppo Loco. It is a great little series that was sold to Network 10. Beyond has foreign distribution.1

1 Allan Callaghan, former chief executive of the Queensland Film Corporation, was charged and found guilty on matters concerning financial improprieties.

Australia’s First Films, from page 37
Films Consigned to Oblivion

Wills’ only complete showing of his films was a private one, given in the boardroom of the Agricultural Department in William Street, Brisbane, on the evening of 17 November 1899.54 Press reviews generously praised the films, expecting great value to accrue from their exhibition. The Brisbane Courier suggested that “the Department would do well to give the general public some wider opportunity of seeing the pictures before they are sent away [to Britain]”.55

Wills’ outstanding productions never received a public showing in Australia, and had only the briefest usage in England. They were partly the victim of technological progress, partly passed over owing to bureaucratic bungling. After some delays, Wills’ films were dispatched to Britain through Sydney via the steamship “Orizaba” on 3 February 1900.56 In London, extreme difficulty was found in locating a firm willing to hire out Lumière cinématographes57, which were being superseded by projectors with longer film capacity. The Queensland films had Lumière perforations which would not fit the newer machines. Even when a Lumière projector was located, George Randall avoided using it. He had not been consulted regarding the need for the films, and evidence suggests that they were foisted on him.58 They are not mentioned in his voluminous papers at the Fryer Library in the University of Queensland. Only when Queensland film advertising was revived for London’s Franco-British Exhibition in 1908 did Randall reveal his opposition to these schemes. He considered that showing the films in English market towns would attract immigrants who were “the flotsam and jetsam of the cities”59. In his opinion, farm workers were the only justifiable migrant group for Queensland:

[...] the good men from the villages; that is to say the men who are in work, not the men who are out [...] Farmers, when they visit the market towns, do so on business [...] They are too busy to listen to an immigration agent; [This film scheme] has been tried, not only by Canadian immigration agents, but by myself, when working for Queensland [before 1902], with most unsatisfactory results [...]60

Wills’ film production never resumed. He gave one last comprehensive lecture on the subject to the Queensland Amateur Photographic Society on 15 June 1900, which the Australian Photographic Journal later serialized.61 Following other disagreements within his Department, he resigned from government employment in 1903 and his later work is unknown.62

Obscurity and Retrieval

Wills’ films appear to have returned to Australia in 1904 after only brief experimental usage in Britain63, and were stored away at the Queensland Department of Agriculture until 1955. They were then sent to the Queensland Museum with Wills’ cinématographe, photographic equipment and reference books including Hopwood’s Living Pictures (London, 1899).64 In 1982, the films were sent to the National Library’s Film Archive in Canberra.65 By that time all knowledge of their provenance had been lost.66 The subsequent separation of the Film Archive from the National Library halted preservation work. In the move to the present National Film & Sound Archive (NFSA), collection components became separated and the confusion resulted in some items being located and preserved faster than others. Finally, the NFSA negotiated with the French Archives to copy Wills’ films onto modern 35mm film, at great expense, during 1989-92. A few of Wills’ Sydney test films have still not yet been copied.

Anne Demy-Geroe and the A/V staff of the State Library of Queensland worked with the authors to publicly present the Wills-Mobsby films for the first time. Melbourne NFSA office manager Ken Berryman supplied a video copy, which was used with the Queensland State Library’s video projector to give Wills’ films their long-awaited public premiere on 15 September 1993 – almost a century after their production! Posthumously, at least, Wills can now reap the long-deferred credit deserved by his pioneering effort, allowing colonial Australia to live again on the screen.

Wills-Mobsby Filmography, Queensland 1899

This list is in rough chronological order of production. Titles are taken from a Queensland Museum listing. Running times are obtained from the video copy, effectively transferred from film at 12 pictures per second by double-framing. Even at that speed, some films run slightly faster than optimum.

A: TRIAL FILMS MADE IN SYDNEY BY FRED WILLS, c. FEBRUARY 1899

(1) North Shore Steam Ferry Passengers Disembarking
Taken overlooking Milson’s Point ferry wharf, with Bennelong Point, Fort Macquarie and Government House in the distance. Ferry with “Sydney” destination board and “Penny Ferry” sign up pulls in to the floating pontoon wharf. Length 19 seconds.

(2) North Shore Horse Ferry
At Milson’s Point terminal looking East towards Kirribilli. Horse-drawn vehicles disembark from ferry, passing under a wooden gantry at the terminal stage. Length unknown (not yet on video).

(3) Redfern Station No. 1
Before Central Station was built in 1906, this was the city terminal station of the Parramatta Railway. View looks South along the line from No. 5 platform, with passing trains. Length unknown (not yet on video).

(4) Redfern Station No. 2 (? – probable attribution)
Presumably a reverse-angle shot to the previous. Looking North towards Sydney city along the line, with a tall castellated tower at the rear. Length unknown (not yet on video).

(5) Petersham Railway Station and Ride from Newtown
Static view from platform of commuters moving towards an incoming train, followed by a travelling shot taken from the rear of the train entering the same station. Advertising hoardings and a road bridge over a cutting are seen. Length 41 seconds (the station shot is divided into two reels).

B: BRISBANE SCENES SHOT BY FRED WILLS, c. MARCH-OCTOBER 1899

(6) Opening of Queensland Parliament, 1899
Arrival of Lord Lamington, Governor of Queensland, in his coach at Parliament House, Brisbane. Guard of Honour, consisting of Queensland’s Permanent Artillery under Lieutenant Black, receives him. Taken either 18 May 1899 or 18 September 1899 – there were two openings that year. The former is the more likely subject of the film, as it matches photos in the Queenslander. Length 61 seconds.

(7) Queen Street and Victoria Bridge
View of Treasury, Victoria Bridge and electric trams in Queen Street, followed by reverse angle shot down Queen Street. Bridge and trams were both less than two years old at the time. Length 53 seconds.

(8) Roma Street Station
Passengers disembarking from train and passing close to camera up the exit ramp. Length 49 seconds.

(9) Government Picnic Party, S.S. “Lucinda”
Queensland Parliamentarians boarding the government paddle steamer “Lucinda” at the wharf behind the Agriculture Department building in William Street, Brisbane. In three shots: boarding, casting off, and steamer moving down the Brisbane River. Probably 14 October 1899. Length 51 seconds.

(10) S.S. Katoomba Unloading
Probably shot at Pinkenba. Unloading timber spars at an active wharf. Length 51 seconds.

(11) Building Construction
Demolition workers, some black, overtoppling and demolishing a wall. May have been demolition activity in William Street, clearing the site of the then new Agriculture Department building. Length 38 seconds.

C: FOXTON’S TORRES STRAIT TOUR, JULY 1899 (FILMS BY MOBSBY)

(12) Channel Rock Light Ship, North Queensland
View from deck of M.V. “White Star” of light ship receding astern off the Townsville coast. Length 50 seconds.

(13) Natives, Darnley Island, Hon. J. F. G. Foxton
Taken late July 1899. Photo album APA50 at John Oxley Library shows a still photo of this scene, labelled Murray Island. Home Secretary Foxton and his wife receive a gift of bananas from islanders passing him in single file. Thursday Island Government Resident J. Douglas also appears. Length 56 seconds.

D: QUEENSLAND RURAL RAILWAY VIEWS

(14) Scrub from Back of Train, Eumundi
Travelling shot of hilly scenery receding from rear of train. Some cuttings and built-up railway formations. Length 56 seconds.

(15) Cairns Railway
Travelling view of tropical undergrowth from rear of train. Could have been taken during Northern tour of Agriculture Minister Chataway, as scrub abutting this railway had just been acquired by the Department for conversion into experimental farming plots, mid-1899. Length 39 seconds.

(16) Barron Falls, near Cairns
Static shot of falls, approached via Cairns railway. Length 56 seconds.

(17) “Out-take” View from Rear of Train
Probably a rejected view, showing only the rails receding from camera mounted at the back of a train. Surrounding scenery is outside the bounds of the picture. Length 62 seconds.

E: WHEAT HARVESTING ON THE DARLING DOWNS, SPRING 1899

(18) Reaper and Binder, Harvesting at Jimbour (near Dalby)
“Buckeye” reaper and binder moves away from camera in wheat field with mountains in distance. Labourers stook the sheaves from the reaper. Length 57 seconds.

(19) Carting Wheat (at Jimbour?)
Same countryside as previous shot. Sheaves are tossed up onto wagon for conveyance to the thresher. Length 34 seconds.

(20) Threshing at Allora No. 1
Wide view of thresher at work with steam stationary engine and furphy water cart. A ten-horse team pulls a huge wagon laden with wheat sheaves passing on its way to the thresher. Length 65 seconds.

(21) Threshing at Allora No. 2
Close view of same thresher shown in previous shot, with details of activity tossing sheaves in, bagging wheat and stacking chaff. Length 47 seconds.

(22) Mechanical Hay Stacker at Hermitage State Farm near Warwick
Horse pushes hay onto cantilevered fork. Fork lifts the load onto the stack behind. Same scene appears on p. 35 of Peter Lloyd’s Guiding Queensland Agriculture (Department of Primary Industry, Brisbane, 1988). Length 16 seconds.

F: SUGAR HARVESTING AT NAMBOUR, SPRING 1899

(23) Cutting Cane
Kanaka labourers cutting sugar cane under the watchful eye of an overseer. Cane is stacked onto wagon at rear of shot. Length 54 seconds.

(24) Sugar Mills, Nambour
Shot one: horse-drawn tramway load of cane arrives at conveyor belt in wide-shot. Shot two: close view of trimming operations at conveyor carrying cane into mill for crushing. Length 61 seconds.

G: STOCK MANAGEMENT, 1899

(25) Sheep Dip
Head-on view of sheep being dipped in arsenic pondage. Man with forked pole ensures total immersion of each beast. Length 37 seconds.

(26) Sheep Running Through Gate
Man opens gate, shorn sheep run through. Taken in arid country – possibly Jimbour or Talgai. Length 47 seconds.

(27) Agricultural College Cattle, Gatton
Long-horned cattle (Ayrshires?) herded by drover on horseback. Post-and-rail fence at rear. Length 45 seconds.

H: DEPARTURE OF FIRST QUEENSLAND CONTINGENT TO BOER WAR, OCTOBER 1899

(28) Transvaal Contingent, Queen Street
First Boer War Contingent, Queensland Mounted Infantry under Colonel P. Ricardo, giving their final Brisbane march-past near Post-Office Place on 28 October 1899. Length 41 seconds.

(29) Queensland Contingent for South Africa in Domain
Review of First Boer War Contingent before Lieut-Governor Sir Samuel Griffith on afternoon of 28 October 1899. In three shots: cavalry lines approaching, close shot of passing cavalry, supply wagons and rear of parade with children following up behind. Length 58 seconds.

(30) Loading Horses, S.S. “Cornwall”
Loading of refractory remounts aboard troopship Cornwall for South Africa, 31 October 1899. Length 19 seconds.

(31) Horses Being Unharnessed
Content unknown, but may be related to Boer War departures. Length unknown – not yet on video.

I: UNIDENTIFIED FILMS, 1899

(32) Feeding Pigeons
Possibly a test film featuring H. W. Mobsby, mentioned in Wills’ 1900 QAPS lecture. Length unknown – not yet on video.

(33) Country Show
Mentioned in Brisbane Courier report of Wills’ private film show, 18 November 1899, p. 5. No print known. Length unknown.

NEXT ISSUE
So far, we have examined the work of Australian pioneer film producers working on their own. Our first corporate film producer made more than 300 films between 1897 and 1909. Yet only one of its productions is remembered. For too long we have hyped the myth of “Soldiers of the Cross” while turning a blind eye to the other 299 films that they did produce. Next issue: the Salvation Army Limelight Department.
SINCERE THANKS
First and foremost our thanks go to the Division of Humanities at Griffith University for funding the project and providing the research support of our colleague Sue Ward. Thanks are equally extended to: National Film and Sound Archive: Ken Berryman, Helen Tully, Meg Labrum, Ann Baylis, Szuszi Szucs, Helen Ludellen, Marilyn Dooley. State Library of Queensland: Colin Sheehan, Anne Demy-Geroe, Brian Gilbert, Mrs Lawrie and the staff of the newspaper desk and A/V section. Queensland Department of Primary Industry: Peter L. Lloyd. Queensland Museum: Mark Whitmore, Brian Crozier. University of Queensland: Richard Fotheringham, J. O’Hagan. Queensland State Archives: L. McGregor, Judy McKay. Film Australia: Ian Dunlop, Judy Adamson. Australian Joint Copying Project, Cambridge: Sara E. Joynes, Frances Calvert. AIATSIS Canberra: Carol Cooper. Also thanks to Clive Sowry (New Zealand), Phil Grace (Melbourne), Ron West (Pomona), Dr Mary Laughren (Brisbane).

Prudence Speed survived Chris Long’s several extended absences in Queensland to become Mrs Long on 7 November!

Notes
1 Pathe’s Weekly commenced publication around the start of December 1910, but no copies are apparently held by an Australian library. The State Library of South Australia holds the magazine from the time it changed its name to Australian Kinematograph Journal in mid-1912.
2 Information from Colin Sheehan, State Library of Queensland.
3 Newspapers in Australian Libraries: A Union List. Part 2. Australian Newspapers, National Library of Australia, Canberra, 1985.
4 Brisbane Courier, 3 May 1897, p. 2; 26 June 1897, p. 2.
5 Ibid, 31 August 1897, p. 2.
6 Ibid, 7 September 1897, p. 2; 8 September 1897, p. 4.
7 Ibid, 11 September 1897, p. 6.
8 Morning Bulletin (Rockhampton), 30 September 1897, p. 2; 1 October 1897, p. 2; 2 October 1897, p. 2.
9 The Sydney Morning Herald, 4 December 1897, p. 2; 7 December 1897, p. 2; 11 December 1897, p. 2.
10 Morning Bulletin, 18 November 1897, p. 2; 23 November 1897, p. 2.
11 Ibid, 14 December 1897, p. 2; 16 December 1897. Brisbane Courier, 23 December 1897, p. 2.
12 Brisbane Courier, 23 December 1897, p. 7.
13 Ian Dunlop, “Ethnographic Film-Making in Australia – The First Seventy Years”, in Aboriginal History 1979, 3:2.
14 Torres Straits Pilot, 19 March 1898, pp. 2-3. A. C. Haddon Australian and Pacific Papers Index, National Library of Australia, 1991, p. i.
15 Alan Ward, “The Frazer Collection of Wax Cylinders: An Introduction”, in Recorded Sound 85, Journal of the British Library National Sound Archive, January 1984, p. 1. See also A. C. Haddon Papers, Cambridge University Library, envelope 1049. The two phonographs were an Edison “Home” and a Columbia “Bijou”.
16 Earliest Australian colour photos were previously assumed to have been taken by Mark Blow in 1899. Refer Alan Davies, The Mechanical Eye in Australia, Oxford University Press, Melbourne, 1985, p. 104.
17 A. C. Haddon, Headhunters: Black, White and Brown, Methuen, London, 1901.
18 Ibid.
19 A. C. Haddon Papers, Cambridge University Library, envelope 1049. Microfilm copy held at National Library of Australia, Canberra.
20 Information from Frances Calvert, Berlin.
21 A. C. Haddon Papers, envelope 1055: Diary 10 March 1898-25 March 1899.
22 A. C. Haddon Papers, envelope 1030: Haddon’s 1898 Journal.
23 A. C. Haddon Papers, envelope 1049: J. Guardia to A. C. Haddon, 28 June 1899.
24 Reports of the Cambridge Anthropological Expedition to Torres Straits, Vol. 6, pp. 306-307.
25 W. B. Spencer Papers, Pitt Rivers Museum, Oxford University: Haddon to Spencer, 23 October 1900. Copy held by Ian Dunlop.
26 A. C. Haddon Papers, Box 1 envelope 3: Spencer to Haddon, 1 December 1900.
27 Ross Lansell and Peter Beilby, The Documentary Film in Australia, Cinema Papers, in association with Film Victoria, Melbourne, 1982, p. 23.
28 A. C. Haddon Australian and Pacific Papers Index, National Library, Canberra, 1991, p. i.
29 British Film Institute catalogue card, “Torres Strait” (film), Haddon. 272 feet, 35mm, from Cambridge Ethnographical Society, 1967.
30 13 volumes of Randall’s manuscript notes are held at the Fryer Library, University of Queensland. FRYER mss. 58/1 to 58/13.
31 Australian Photographic Journal, March 1899, pp. 10-11; “How to Ventilate a Darkroom” by F. C. Wills is a typical example.
32 Blue Book of Queensland, 1897, Appendix One: list of Officers under the Secretary for Agriculture, including F. C. Wills.
33 A. J. Boyd to Under Secretary for Agriculture, 28 March 1898: AG S/N341, No. 1936, Queensland State Archives (QSA).
34 Chief Secretary’s Under Secretary to Department of Agriculture Under Secretary, 24 October 1898: Premier’s Department Letterbook, PRE/G 2, p. 392, QSA.
35 Ibid.
36 Queensland Agricultural Journal, 1 December 1898, p. 470. Also, “As An Aid to Immigration”, Australasian Photographic Review, December 1898, p. 29.
37 Australian Photographic Journal, 20 September 1900, p. 200, quotes Wills as saying that he then only had “the first [films] I took when in Sydney procuring information on the subject”. Same journal, 20 November 1900, p. 244, states that there were five of these Sydney films.
38 Australasian Photographic Review, 21 March 1899, p. 21.
39 Richard Fotheringham: Personal correspondence 10 November 1989 to Chris Long. Mobsby was appointed Assistant Artist and Photographer on 1 March 1899, and was promoted to Artist and Photographer on 1 July 1904.
40 Reviews of Mobsby’s own films may be found in Everyones (Sydney), 11 June 1924, p. 5; 25 February 1925, p. 14. Mobsby papers and photographs are held at the Fryer Library at the University of Queensland.
41 Brisbane Courier, 18 May 1899, p. 6. The Queenslander, 27 May 1899, p. 977, has photos of the event.
42 Australian Photographic Journal, 20 June 1899, p. 141. Brisbane Courier, 22 May 1899, p. 4.
43 Torres Straits Pilot, 15 July 1899, 22 July 1899. North Queensland Herald (Townsville), 17 July 1899, pp. 6, 10; 14 August 1899, p. 6; 14 August 1899, p. 9.
44 Brisbane Evening Observer, 21 July 1899, p. 3.
45 John Oxley Library, photo album APA50: “Foxton Album”.
46 Australian Photographic Journal, 20 November 1900, p. 244.
47 Gavin Souter, Lion and Kangaroo, Fontana, Brisbane 1976, pp. 84-5.
48 Australian Photographic Journal, 20 November 1900, p. 243.
49 Brisbane Courier, 18 November 1899, p. 5. Evening Observer (Brisbane), 18 November 1899, p. 2.
50 Brisbane Courier, 14 October 1899.
51 Ibid, 30 October 1899, pp. 5-6.
52 Ibid.
53 Ibid, 1 November 1899.
54 See note 49.
55 Ibid.
56 Chief Secretary’s Under Secretary to Department of Agriculture Under Secretary, 2 February 1900: Premier’s Department Letterbook, PRE/G 11, p. 827, QSA.
57 Chief Secretary’s Under Secretary to the Queensland Agent-General’s Secretary in London, 3 August 1900: Premier’s Department Letterbook of dispatches to the Agent General, PRE/N 3, p. 554, QSA.
58 There are scant records of Randall using slides, and none relating to his usage of Wills’ films. None of the correspondence relating to the film project came from Randall in Britain.
59 Brisbane Sun, 9 August 1908, “Attracting Immigrants” (clipping in Randall papers, Fryer Library, University of Queensland).
60 Ibid.
61 Australian Photographic Journal, 20 September 1900, pp. 200-201; 20 October 1900, pp. 219-20; 20 November 1900, pp. 243-4: “Paper on Cinematography” by F. C. Wills (serialized).
62 Information from Peter L. Lloyd, Department of Primary Industry, Brisbane, 1989.
63 Index for final correspondence on Wills’ films is dated 31 May 1904, but the letter itself does not survive.
64 Information from Brian Crozier, Queensland Museum, 1993.
65 Refer note 27.
66 Collection is listed in NFSA catalogues as “Queensland Lumiere Films”.
CONTINUED FROM PAGE 2
Australian Films in Spain

The month of October saw an Australian film cycle in both Madrid and Barcelona. The cycle was an initiative of the Australian embassy in Madrid and was organized in conjunction with Filmoteca, the Spanish state film archives and institute set up to promote film viewing and increase interest in cinema in this country.

The programme consisted of seven features which had never been viewed in Spain, as well as seven shorts by present or former students of the Australian Film Television & Radio School. The features Proof, The Last Days of Chez Nous, A Woman's Tale, Romper Stomper, Holidays on the River Yarra, Return Home and Prisoner of St Petersburg, and the shorts, got a good response and often attracted large audiences. On the second Madrid showing of Proof (each feature and short was shown twice over a period of two weeks), a crowd of would-be viewers was turned away from the box-office as the tickets had sold out half an hour before the session was due to start.

One of the co-ordinators of Filmoteca-Madrid, Efrain Sarria, said Spanish audiences have been interested in Australian filmmaking since the 1970s, when what he calls "beautiful films" such as Picnic at Hanging Rock and Sunday Too Far Away were produced. From a professional point of view, Sarria claims that Spain has an interest in Australian film production because it, like its Spanish counterpart, is a small industry which mostly survives on a good deal of state assistance.

The concept behind the cycle was to show contemporary Australian filmmaking as well as present a multi-faceted image of Australian life today. As is the case in many European countries, Spanish people tend to stereotype Australia as a country where kangaroos cross the passer-by's line of vision every five minutes and people live in houses on stilts in the midst of an exotic wilderness. Some of the films set in urban contexts should dispel that clichéd view.

The other feature which many have found interesting is the degree of racial as well as cultural mix present in Australian culture. Although most people know that Australia has many migrants, there was an element of surprise at films such as Geoffrey Wright's Romper Stomper, or the shorts by Monica Pellizzari, Rabbit on the Moon and Just Desserts. The social as well as personal conflicts caused by multi-culturalism which these films portray are, to European minds, associated primarily with life in the U.S.

The film cycle was received enthusiastically by the audiences and praised by the local press, which regarded it as a well-chosen group of films for its diversity as well as quality. All in all, it was a success, especially for a society where approximately 90% of film distribution is based on commercial North American film, as is reflected in its cinema attendance statistics.

DANIELA BAGOZZI

Cinema Studies Masters / Graduate Diploma by Coursework

The Department of Cinema Studies at La Trobe University offers a Graduate Diploma and a Masters by Coursework degree exclusively in the academic and critical study of film. Graduates from all disciplines are welcome to apply. A selection of fourth and fifth year subjects includes Surrealism in the Cinema; Introduction to Video Practice; Ethnographic Film; Beyond Heterosexuality: Film and Sexual Politics; Film and Interpretation; Non-Western Cinema and the Encounter with the Other; Single Film Research; A History of Film Culture 1895-1960; and Principles of Film Criticism.

If you are interested and have a BA in any discipline, you may be eligible. The course offers subjects in theory, history, criticism, gender studies and cultural studies.

Graduate Diploma applications close November 30. Masters by Coursework applications closed December 10. Late applications may be considered. For more information and a detailed brochure write to: The Postgraduate Co-ordinator, Department of Cinema Studies, La Trobe University, Bundoora, 3083. Tel: (03) 479 1111. Fax: (03) 479 1700.

Lindemans

In August 1993, Lindemans Classic Dry White undertook the sponsorship of the Australian Film Institute's 1993 Australian Film Festival, which highlighted the series of Ealing Studio films made in Australia during the 1950s and '60s.

Coinciding with this sponsorship, Lindemans initiated a nationwide retail promotion offering consumers the chance to win a trip to the Cannes Film Festival, plus one of 250 AFI memberships. The competition is still current, but time is running out! Consumers can enter by purchasing Lindemans Classic Dry White or the Classic Brut Cuvée before 31 December 1993. The competition is being promoted via necktags and point-of-sale material in liquor outlets throughout Australia.

Festival Awards

At the Mostra del Cinema di Venezia, Australian Rolf de Heer's Bad Boy Bubby won five awards, including the Festival Jury Award, the CIAK Jury Award and the Bronze Plaque from OCIC, as well as sharing (with Robert Altman's Short Cuts) the International Critics' Award. At the Montréal World Film Festival, Michael Jenkins and Richard Barrett scored with Best Screenplay for The Heartbreak Kid.

VENEZIA
Golden Lion: Short Cuts (Robert Altman, U.S.); Trois Couleurs: Bleu (Krzysztof Kieslowski, France). Silver Lion: Kosh ba Kosh (Bakhtiar Khudoinazarov, Tajikistan). Jury Prize: Bad Boy Bubby (Rolf de Heer, Australia). Volpi Cup for Best Actor: Fabrizio Bentivoglio (A Soul Torn in Two, Italy). Volpi Cup for Best Actress: Juliette Binoche (Trois Couleurs: Bleu). Volpi Cup for Supporting Actor: Marcello Mastroianni (Un, Deux, Trois: Soleil, France). Volpi Cup for Supporting Actress: Anna Bonaiuto (Dove Siete? Io Sono Qui, Italy). Volpi Cup for Ensemble Cast: Short Cuts.

MONTRÉAL
Grand Prix of the Americas: Trahir (Radu Mihaileanu, Romania). Prix de Montréal for Best First Feature: Trahir. Special Grand Jury Prize: And the Band Played On (Roger Spottiswoode, U.S.). Best Artistic Contribution: Kalifornia (Dominic Sena, U.S.). Best Directors: Claude Lelouch (Tout Ça ... Pour Ça, France); Juanma Bajo Ulloa (La Madre Muerta, Spain). Best Actress: Carla Gravina (Il Lungo Silenzio). Best Actors: Johan Leysen (Trahir); Denis Mercier (Le Sexe des Étoiles). Best Screenplay: Michael Jenkins and Richard Barrett (The Heartbreak Kid).

An Editor's pick

These choices are selected from films seen this year, not those released in Australia in 1993. If the latter were the case, included below would be several previous winners: La Double Vie de Véronique (Best Film in 1991), Il Ladro di Bambini (Stolen Children) (Runner-up, 1992), Zbigniew Preisner (Best Score, 1991), Tous les Matins du Monde (Best Film, 1992) and Yves Angelo (Best Photography, 1992).

Best Film: Lorenzo's Oil (George Miller). Runners-up: Radio Flyer (Richard Donner, 1992); Como Agua Para Chocolate (Like Water for Chocolate, Alfonso Arau); Unforgiven (Clint Eastwood)
Best Australian: Memories & Dreams (Lynn-Maree Milburn)
Best Performances: Alan Rickman (Close My Eyes); Susan Sarandon (Lorenzo's Oil)
Best Composer: Patrick Doyle (Indochine and Much Ado About Nothing)
Best Photography: François Catonne (Indochine)

1993 AFI AWARDS

FEATURE FILMS
Best Film: The Piano, Jan Chapman (producer)
Newvision Film Distributors Award for Best Achievement in Direction: Jane Campion, The Piano
Cinesure Award for Best Original Screenplay: Jane Campion, The Piano
Best Screenplay Adapted from Another Source: James Ricketson, Blackfellas
AGFA Award for Best Performance by an Actress in a Leading Role: Holly Hunter, The Piano
Hoyts Group Award for Best Performance by an Actor in a Leading Role: Harvey Keitel, The Piano
AGFA Award for Best Performance by an Actress in a Supporting Role: Judy Davis, On My Own
Telecom Mobilenet Award for Best Performance by an Actor in a Supporting Role: David Ngoombujarra, Blackfellas
Young Actor's Award: Robert Joamie, Map of the Human Heart
Samuelson Award for Best Achievement in Cinematography: Stuart Dryburgh, The Piano
Best Original Music Score: Michael Nyman, The Piano
Spectrum Films Award for Best Achievement in Editing: Veronika Jenet, The Piano
Soundfirm Award for Best Achievement in Sound: Lee Smith, Tony Johnson, Gethin Creagh, Peter Townsend, Annabelle Sheehan, The Piano
Best Achievement in Production Design: Andrew McAlpine, The Piano
Best Achievement in Costume Design: Janet Patterson, The Piano
Members' Award for Best Foreign Film: The Crying Game

1993 NON-FEATURES
Best Short Fiction: Mr Electric, Stuart McDonald
Best Animation: The Darra Dogs, Dennis Tupicoff
Best Documentary: Exile and the Kingdom, Frank Rijavec; For All the World to See, Pat Fiske
Best Screenplay in a Short Film: Just Desserts, Monica Pellizzari
Best Achievement in Cinematography in a Non-Feature Film: Kangaroos - Faces in the Mob, Glen Carruthers
Best Achievement in Sound in a Non-Feature Film: Exile and the Kingdom, Noelene Harrison, Lawrie Silverstrin, Kim Lord
Best Achievement in Editing in a Non-Feature Film: Everest - Sea to Summit, Michael Balson
Open Craft Award: Memories & Dreams, Lynn-Maree Milburn (for innovation in form)
Byron Kennedy Award: Matt Butler, Evanne Chesson, Adrian Martin, Gary Warner
Raymond Longford Award: Sue Milliken

FILM REVIEWS CONTINUED FROM PAGE 52

Crows and Sparrows are often regarded as constituting the golden age of Chinese cinema. They also depended on finely-tuned ensemble playing to achieve their effect. Like them, The Wedding Banquet is about groups and the price their members have to pay to maintain their existence.

For example, Wai-Tung has to negotiate his way through the tensions created by his different group memberships. On the one hand, he is the only son of a Taiwanese general, entrusted with managing a building the family has bought in New York and expected to marry and produce grandsons to carry on the line. On the other hand, he is part of what could be called a chosen family of New York yuppies, including his lover Simon, who finds it hard to understand why Wai-Tung cannot just tell his father he is gay and get it over with.

These two different groups meet when Wai-Tung's parents try to fix up a marriage for him. They enrol him in a singles club. Rather than hurting them by telling the truth, he strategizes by demanding a bride with two PhDs (one in Physics) who speaks five languages, is an opera singer and at least five foot nine inches tall. Inevitably, because The Wedding Banquet is a farce, his parents and the club locate Wai-Tung's ideal woman, although she is only five foot eight. But, fortunately for Wai-Tung, it turns out her mother enrolled her in the club, too; she has a white boyfriend but doesn't dare tell her parents.

The speed and intensity of the farce steps up once Wei-Wei moves in with Wai-Tung and Simon to satisfy the immigration authorities and then his parents turn up and join the household. Everybody is deceiving everyone and nobody but the audience knows the whole story, as the characters creep up and down the stairs whispering asides to each other, and almost stumbling upon the lovers sneaking a kiss.

What prevents the film from lapsing into the worst kind of broad comedy is that all this humour is not achieved at the expense of any of the characters. There are no toe-curling homophobic portrayals of screaming queens as in La Cage Aux Folles; the father is not a bigoted despot, nor the mother a domineering harridan; and Wei-Wei is not a victim tricked into marrying a closeted gay man. Rather, although foibles are pointed up, each of the characters is trying to do what they believe is best and each of them is presented sympathetically.

In the Chinese melodrama tradition, it is not individuals who are at fault but rather the situation that causes the problems. In the case of The Wedding Banquet, what accentuates the usual problems between the generations is not only Wai-Tung's sexuality, but the cultural gap between the older, more traditional parents and their children who have adopted liberal, American values. Interestingly, Wai-Tung's sexuality is not a problem for many of his Chinese friends of his own age, and he only hides it from those who know his parents.

Ang Lee complements this even-handedness with another element from the Chinese melodrama tradition. He avoids favouring the perspective of any individual character but rather uses a third-person point-of-view in which characters are always presented in relation to each other. This isn't just Wai-Tung's story or even Wai-Tung and Simon's story but rather it is the story of the whole group of main players. Even in the opening shots, when Wai-Tung is working out alone at the gym, he is listening to an audiotaped letter his mother has sent him on his Walkman.

By avoiding broad comedy and enabling the audience to understand everyone's point of view, the film also gains pathos. Beyond the humour, there is pain and suffering. This surfaces as an aftertaste because it is rarely spoken directly in the film. In the Alan Alda wedding farce, Betsy's Wedding, every grudge had to be aired by the end of the film. This American honesty might be healthier for those involved, but, for the audience, it was about as much fun as witnessing your neighbours' domestics.

In contrast to this, what The Wedding Banquet points up is the amount of well-meaning silence, deception and plain lying that Chinese people are prepared to invest in maintaining a surface of calm and harmony. Not only does Wai-Tung refuse to tell his parents he is gay for fear of dashing their hopes, but his own mother hasn't told him his father has had a stroke because she doesn't want to worry him. As the film makes clear, a traditional Chinese wedding banquet resembles a Japanese game-show endurance test more than a pleasurable celebration. Nonetheless, Wai-Tung, Wei-Wei and Simon go through with it to keep his parents happy.

By the end of the film, however, everyone's well-meaning and self-sacrificing deceptions work out for the good, or at least they appear to. It is here that the politically-correct thought police might have problems with The Wedding Banquet. Without wishing to give the plot away, if you miss the subtleties of the unspoken price that everyone is paying for this impression of family togetherness, the film could seem all too easily like a cheap fantasy that sacrifices the full import of the irreconcilable differences and irresolvable problems it raises to achieve an easy viewing experience. However, if you look beyond the smiles in the happy family picture, you'll realize that Ang Lee's film may be more ambivalent about families and maintaining harmony than might at first be apparent.

THE WEDDING BANQUET Directed by Ang Lee. Producers: Ted Hope, James Schamus, Ang Lee. Screenplay: Ang Lee, Neil Peng, James Schamus. Director of photography: Jong Lin. Editor: Tim Squyres. Production designer: Steve Rosenzweig. Costume designer: Michael Clancy. Music: Mader. Cast: Winston Chao (Wai-Tung), May Chin (Wei-Wei), Mitchell Lichtenstein (Simon), Sihung Lung (Mr Gao), Ah-Leh Gua (Mrs Gao). A Central Motion Picture Corporation (Taipei) and Good Machine production. Australian distributor: Palace. 102 mins. 35mm. Taiwan-U.S. 1993.
PRODUCTION SURVEY

NOTE: Production Survey forms now adhere to a revised format. Cinema Papers regrets it cannot accept information received in a different format, as it does not have the staff to re-process the information.
Adjudged as of 5/11/93.

Prod. secretary
Location manager
Unit manager
Unit runner
Production runner
Laboratory
DFL
Greg Ellis
Lab liaison
Clive Duncan
Leigh Ammitzboll Cameron Stewart
Steeves Lumley Film Finances Mobile Prod. Facilities
Camera Crew
FEATURES
Focus puller
PRE-PRODUCTION LUCKY BREAK Prod, company
Generation Films
Principal Credits
Cast: [No details supplied.] Synopsis: [No details supplied.]
Harry Glynatsis
Prod, company Budget
Producer
Grip
Co-producer
Gaffer
Robbie Young
Scriptwriter
Roy Pritchett
Director
DOP
On-set Crew
Producer
Bob Weis Judi Lewin
1st asst director 2nd asst director
Ben Lewin
Continuity
Victoria Sullivan
Make-up
Kirsten Veysey
Dialogue coach
Gary Wilkins
Hairdresser
Cheryl Williams
Production Crew
Peta Lawson
Art Department
Scriptwriter DOP
Vince Monton
Sound recordist Prod, designer Costume designer
Anna Borghesi
Art director
Editor
Peter Carrodus
Art dept admin.
Composer
Paul Grabowsky
Planning and Development
Set dresser Props buyer
Casting
Wardrobe
Liz Mullinar Casting
Production Crew
Rob Visser
Victoria Hobday Simone Semen Darryl Mills Darryl Mills
Lesley Parker
Post-production
Rachel Garnsey
Sound transfers by
Producer’s asst
Sarah Norris
Director’s asst
Ben Holgate
Craig Godfrey
Location manager
Patricia Blunt
Craig Godfrey Mark Tomlinson
Production runner
Martin Williams
Mixed at
Rachel Nott
Tony Francis
Camera Crew
Planning and Development
Camera operator
ARTHUR BOYD: TESTAMENT OF A PAINTER (1 TV hour) Don Bennetts Films. Executive producer: Jan McGuinness. Producer: Don Bennetts. Director: Don Bennetts. Scriptwriters: Don Bennetts, Arthur Boyd. A
FEATURES LUCKY BREAK (90 mins) Generation Films. Producer: Bob Weis. Co-producer: Judi Lewin. Director: Ben Lewin. Scriptwriter: Ben Lewin. Romantic comedy about a passionate virgin with a handicap and a fertile fantasy life.
profile of Australian artist Arthur Boyd filmed to coincide with his retrospective at the Art Gallery of NSW.
DOCUMENTARIES
THE DREAMTIME
AUSTRALIAN DESERTS: AN UNNATURAL DILEMMA
(1 T V hour) Aboriginal Nations. Producer: Keith Salvat. Director: Paul Fenech. Scriptwriter: Paul Fenech. Examines tradi tional beliefs about the Dreaming in Aborigi nal communities. We learn about the creation myths and their place in modern Australia.
(52 mins) Glen Joseph Productions. Execu tive producer: Bill Childs. Producer: Glen Joseph. Director: Glen Joseph. Scriptwriter: Peter Engebretsen. Examines the dilemma of how to protect the fragile eco-system of
Since the last Board meeting the FFC has
Australia’s desert regions while continuing to allow public access.
also entered into contract negotiations with the producers of the following project:
October
ERNIE DINGO’S KIMBERLEY (55 mins) Documentary. Australia-UK co production. InCA Independent Communica
Scott Goodman
2nd asst director
Tony Mahood John Martin
Prod, secretary
Janis Lee Leonie Godfrey
3rd asst director
Karen Mahood
1st asst director
Unit manager Insurer Legal services
Page Seager
Camera operators
Mark Tomlinson
Focus puller Camera type
Art dept runner
Standby props
Robert Moxham
Post-production
Peter Cass Brett Carter
Film gauge
35mm
Government Agency Investment
Santo Fontana
On-set Crew Jo Howie Liz Goulding
Special fx make-up
Jane Murphy Glen W. Johnson
SP Betacam
Continuity
Christina Norman Peter Forbes
Set dressers
Scott Goodman
Development
Film Victoria
Production
Film Victoria
AFC
Liz Goulding
FFC
Marketing
Craig Godfrey
Safety officer
Dorothy Godfrey
Tech, adviser
Ken Godfrey
Still photography Catering
Inti, sales agent Publicity
CiBy Sales Village Roadshow
Cast: Toni Collette (Muriel), Bill Hunter (Bill). Synopsis: Sometimes your better half is you.
Ken Mellors Drunken Admiral Restaurant
Art Department THAT EYE THE SKY
Art directors
Jo Howie Craig Godfrey Cast: Lorraine Merritt, Jon Sidney, Bill Pearson, Ian Lang, Kerry Laws, Tim Aris, David Noonan,
Prod, company Dist company
Entertainment Media Beyond Films
Vick Hawkins, Jacqueline Kelly, Pam John,
Pre-production Production
25/10/93 ...
Gareth John.
Post-production
20/12/93 ...
Synopsis: Upset by an unfaithful fiance, Cassie
Principal Credits
Kinsella retreats to a deserted beach town. It is
Director
winter. Only an eccentric anthropologist and an incestuous couple share the seclusion. Many
Producer Co-producer
murders later Cassie is the target of a madman. Only a mental asylum can save her, maybe.
John Ruane Peter Beilby Grainne Marmion Fred Schepisi Robert LeTet Tim Bevan
PRODUCTION
16/8/93 ...
Exec, producers
FEATURES I
Scriptwriters
John Ruane Jim Barton
Based on the novel
That Eye the Sky
JHI Written by
Tim Winton Ellery Ryan
DOP
EPSILON
MURIEL’S WEDDING
Prod, designer
Ken Sallows Chris Kennedy
Costume designer
Vicki Friedman
Prod, company Dist. company Pre-production Production
House & Moorhouse Films Village Roadshow 23/8/93 ... 18/10/93 ...
Principal Credits Director Producers
Lloyd Carrick
Editor
Planning and Development Script editor Casting
John Flaus Maura Fay & Associates
Production Crew Paul J. Hogan Lynda House Jocelyn Moorhouse
Hugh Bateup
Art dept co-ord
Assoc, producers
Art director
Sound recordist
Shearman. Presenter Ernie Dingo journeys through the Kimberley where he finds a multi
Daphne Paris
Art Department
Cinesure
FEATURES
unwilling woman visitor from another planet and an earthman.
love of the land.
Continuity
See previous issue for details on LIGHTNING JACK
Shearman. Scriptwriters: Ernie Dingo, Nick
cultural mix of people drawn together by their
David Williamson
Prod, manager
(90 mins) Rolf de Heer. Producers: Rolf de Heer, Domenico Procacci. Director: Rolf de Heer. An intergalactic love story about an
tions Associates. Producers: Will Davies (Aus tralia), Alan Bookbinder (UK). Director: Nick
Roth Warren
On-set Crew
Hairdresser
DOCUMENTARIES
Film Finances
Lorraine Merritt
Special fx
September
Steeves Lumley
Legal services
Electrician
August
Insurer
Ron McCullouch
Key grip Gaffer
FILM FINANCE CORPORATION FUNDING DECISIONS
Jill Steele
Completion guarantor
Craig Godfrey Soundfirm Soundfirm
Sharon Gerussi
Moneypenny Services
George Goers
Composer
Rowena Talacko
Prod, accountant
Camera Crew
Standby wardrobe
Prod, supervisor Prod, co-ordinator
Sound recordist Editor
Catherine “Tatts” Bishop
Prod, co-ordinator Prod, secretary
Mark Tomlinson
Ben Lewin
Co-producer
Prod, manager
Feb - Jun 1994
Craig Godfrey
Director
Brendan Campbell
Alison Barrett
Production Crew
7/11/93-24/11/93
John Goldney Scott Brocate
3rd electrics
Casting
Aug - Nov 1993
Post-production
Terry Ryan
Planning and Development
$80,000
Pre-production Production
Patrick Reardon
Costume designer
Pocket Money Productions
Principal Credits
Camera equipment Key grip
David Lee Jill Bilcock
Prod, designer
Tibor Hegedis Samuelsons
Clapper-loader
Martin McGrath
Sound recordist Editor
TO THE POINT OF DEATH
Juanita Parker
Insurer Completion guarantor Travel/Freight
Michael D. Aglion DOP
Cameron Stewart
Prod. accountant
Jacinta Lomas
Tony Mahood
Prod, manager Prod, co-ordinator''"-;' Prod, secretary Location manager
Tony Leach Susie Wright Robin Astley Maurice Burns
Unit manager
Michael Batchelor
Prod, accountant Insurer
Producers
Al Clark Michael Hamlyn
Kevin Plummer Jardines
Exec, producer
Completion guarantor
Assoc, producer
Legal services
Scriptwriter DOP
First Australian Completion Bond Company Holding Redlich
Camera Crew
Sound recordist Editor
Camera operator Key grip
Mandy Walker Barry Hansen
Prod, designer
Gaffer
Ted Nordsvan
Costume designers
On-set Crew 1st asst director
Phil Jones Annie Beresford
Continuity Make-up
Amanda Rowbottom
Make-up asst
Zjelka Stanin
Special fx supervisor
Michael Bladon
Art Department Art director
Brian Dusting Sharon Young
Art dept co-ordinator
Post-production Asst editor Laboratory
Maria Kaltenhaler Cinevex
Marketing Inti, sales agent Publicity
Beyond Films
Composer Prod, manager
Cast: [No details supplied.] Synopsis: A young boy struggles to free his
Camera Crew
father from a coma following a car accident.
Focus puller Clapper-loader Gaffer
Wintertime Films
Pre-production
5/10/93-7/11/93 8/11/93-17/12/93
Production
Principal Credits Director
Margot Nash
Producer Scriptwriter
John Winter Margot Nash
DOP
Dion Beebe Faith Martin Kathy Kum-sing
Aboriginal consult. Prod, manager Prod, co-ordinator Location manager Unit manager
Caroline Bonham Fiona King Robin Clifton Rick Komaat
Production runner
Daniel Heather
Prod, accountant Completion guarantor Legal services Camera operator Continuity
Di Brown Film Finances Nina Stevenson Dion Beebe Lynn-Maree Dansey
Government Agency Investment Development Production
AFC AFC
Synopsis: A comedy musical about three drag
Legal services
Owen Paterson
queens crossing the Australian outback in a bus.
Lizzy Gardiner Tim Chappel
Sue Seeary Grant Lee Tim Parry
Jamie Platt John May Jardine Tolley Film Finances Martin Cooper Brian Breheny Adrien Seffrin Anna Townsend Matt Inglis
On-set Crew 1st asst director
Stuart Freeman
2nd asst director
Emma Schofield
Continuity Boom operator Make-up Make-up assts
Kate Dennis Fiona McBain Cassie Hanlon Angela Conte Strykermeyer
Hairdresser
Cassie Hanlon Angela Conte
Hair assts
Strykermeyer Choreographer Stunts co-ord. Safety officer Mechanic
John O’Connell Bernie Ledger Bernie Ledger Mark McKinley Mark McKinley
Bus driver Still photography Unit publicist Catering
Elise Lockwood Catherine Lavelle Marike Janavicius
Art Department
Costume supervisor
Post-prod, supervisor
Specific Films Post-production
13/9/93 - 28/10/93 28/10/93 ...
Principal Credits Director
Post-prod, liaison Asst editor Sound supervisor Sound editor Laboratory
Emily Seresin
Tony Lynch Tony Lynch Andy Yuncken Steve Erskine Steve Erskine Atlab
Lab liaison
Ian Russell
Film gauge
35mm Anamorphic 1:2.35 Apocalypse
Screen ratio Video transfer by
investment Finance
Stephan Elliott
2nd camera asst.
David Dunkley
Production Post-production
6/7/93 - 1/8/93 2/8/93 - 30/2/94
Gaffer
Principal Credits Director
Murray Fahey Murray Fahey
Producer
Clapper-loader
Gary Burdett Tim Jones
Best boy Electricians
Murray Fahey
Generator operator
Scriptwriter
Murray Fahey Peter Borosh
On-set Crew
DOP Sound recordist Editor Art director
Bruce Young
Key grip
Original screenplay by
Martin Perrott John Prentice Phil Mulligan Bob Woods
David Glasser
1st asst director 2nd asst director
Michael Faranda Warren Parsonson
Brian Kavanagh
3rd asst director
Francesca Belli
Robyn Monkhouse
Continuity Boom operators
George Kightly Sue Kerr
Frank Strangio
Production Crew Prod, manager
Bernard Purcell Gina Twyble
Prod, co-ordinator Prod, secretary
Carla Buscemi Winfalz Inti.
Completion guarantor Camera operator Key grip 1st asst director Asst director
Bernard Purcell Serena Hunt
Post-production Post-prod, supervisor Sound editors
Brian Kavanagh Craig Carter Livia Rutic Peter Frost Atlab
Mixer
Grant Shepherd Garry Siutz
Make-up
Kerry Jury Make-up assistant
Tracey Garner
Special fx co-ord’s
Peter Leggett John Neal
Stunts co-ord’s
Glen Boswell Richard Boue George Mannix
Safety officer Unit nurse
Julie Deakins Gary Johnston Johnny Faithful
Stills photography Catering Runner
Rob Brown
Art Department Art director
Jon Rohde
Art dept co-ord Set dressers
Lee Bulgin Richard Kennett
Cast: [No details supplied.] Synopsis: A suspense thriller about a woman
Props buyer
Tal Oswin Susan Glavich
haunted by her past.
Wardrobe
Laboratory
Wardrobe co-ord
POLICE RESCUE - THE MOVIE Southern Star Xanadu
Pre-production Production
2/8/93 ... 30/8/93 - 1/10/93 4/10/93 - 19/11/93
Martine Summons
Construction Dept Staging assts
Matt Bartley Damian Leonard
Studios
Director
Michael Carson
Producers
Sandra Levy John Edwards Errol Sullivan
Exec, producers Assoc, producer Scriptwriter
Wendy Falconer Olivia Schmid
Wardrobe assts
Prod, company
ABC, Frenchs Forest
Post-production Asst editors
Nicole La Macchia Martin Hiscox
Sound editors
Fabian Sanjuro (dial.) Peter Hall (fx)
Penny Chapman Wayne Barry
DOP Sound recordist Editor Prod, designer
Ian Neilson (foleys)
Debra Oswald
Sound asst
Russell Bacon
Sound audio op. Mixer
Peter Grace Chris Spurr Murray Pickett
John Hemming Erik Briggs Peter Purcell Brian Jamison
Neg matching Laboratory
Atlab
Planning and Development
Lab liaison
Casting
Government Agency Investment
Ann Robinson Liz Mullinar Casting
Production Crew Prod, manager Prod, co-ordinator Producer’s assts
Polygram Filmed Entertainment FFC
Peter Borosh Adam Good
On-set Crew
Tim Parry
Adam Dalli
Production
19/6/93-5/7/93
Vehicle co-ord.
Designer’s asst.
Latent Image
Brett Joyce
Pre-production
Principal Credits
Post-production
Prod, companies
Russell Bacon Sean McClory
Focus puller 2nd unit cam. op.
Yann Vignes Kerry Brown
Roz Hinde
THE ADVENTURES OF PRISCILLA, QUEEN OF THE DESERT
Camera Crew
Coventry Films Winfalz Inti.
Art dept runner Props buyer
Costume co-ord.
POST-PRODUCTION
Film Finances David Heidtman
Prod, company Dist. company
Post-production
must be met in the present. Tessa had not gambled on that.
ENCOUNTERS
Colin Gibson
Wardrobe
FEATURES
H. W. Wood Australia
Camera operator
Art director
Cast: Pamela Rabe, Linden Wilkinson. Synopsis: When the past refuses to be buried it
Marianne Flynn
Camera Crew
Other Credits Casting
Sue Blainey
Paul Booth
Best boy
Rob Brown
Completion guarantor
Composer
Camera operator
Peter Branch
Prod, runner
Pearce, Bill Hunter.
Rick Komaat Russell Fewtrell
Completion guarantor
Manifesto Film Sales Catherine Lavelle
Cast: Terence Stamp, Hugo Weaving, Guy
Guntis Sics
Rick Komaat
Prod, accountant Insurer
Unit asst
Inti, sales agent
Brian Breheny
Location manager
Production runner
Trish Rothkrans “A.M.” Simon-Mayer
Unit manager
Prod, accountant Insurer
Esther Rodewald
Unit manager Unit asst
Marketing
Location manager
Publicity
Prod, co-ordinator Producer’s asst Transport manager
Apocalypse
Stephan Elliott
Guy Gross
Legal services
Prod, company
Sue Seeary
Production Crew
Palace Publicity
VACANT POSSESSION
Rebel Penfold-Russell
NSW Film & Television Office
Prod, secretary
Simon Wicks
Production
FFC
Marketing Jo Rooney Andrea Chittenden Amanda Higgs Rosa Del Ponte Lis Gilroy
Inti, sales agent Inti, distributors
Southern Star Film Sales
UIP Southern Star Intl. Publicity Victoria Buchan Cast: Gary Sweet (Mickey), Zoe Carides (Lorrie),
Steve Bastoni (Angel), Sonia Todd (Georgia), Tammy Macintosh (Kathy), Jeremy Callaghan (Brian), John Clayton (Adams), Belinda Cotterill (Sharyn).
Synopsis: A feature adaptation of the television series of the same name. THE ROLY POLY MAN Prod, company
Rough Nut Productions
Dist. company
Total Film & Television REP
Production
16/7/93-20/8/93
Principal Credits Director
Bill Young Peter Green
Producer Line producer
Press kit
Standby props
George Zammit
Sound recordist
Simon Pressman
Action vehicle co-ord.
Peter Cashman
Editor
Karryn de Cinque
(Sandra), Les Foxcroft (Mickey), Zoe Bertram
Wardrobe
(Laurel), Frank Whitten (Henderson), Rowan Woods (Professor Wauchop), Peter Braunstein
Standby wardrobe
Barbra Zussino
Costume designer
Kym Goldsworthy Brian Deheny Guntis Sics
Sound recordist Editor
Neil Thumpston Robert “Moxy” Thompson
Prod, designer Costume designer Composer
Construct, manager
or something, or perhaps a combination of both is making people’s heads explode all over town and Dirk is determined to find out why. That is his first mistake.
ROUGH DIAMONDS
Julia Walker
Production Crew Prod, manager
Caroline Bonham
Prod, co-ordinator Location manager
Fiona King Rick Kornaat
Unit assistant Runner
Russell Fewtrell Daniel Heather
Prod, accountant Insurer Completion guarantor Focus puller Clapper-loader
Kate Dennis Anna Townsend
Key grip
Pat Nash
Grip
Peter DeHann Michael Gaffney
Asst grip Gaffer
Paul Booth Matt Inglis Michael Gaffney
Best boy 3rd electrics
On-set Crew 1st asst director 2nd asst director Continuity Boom operators
Hairdresser Model maker Asst model maker Safety officer Still photography Catering
Cast: Jason Donovan (Mike), Hayley Toomey
John Stokes
Synopsis: Mike Tyrell's life changes when in a
Camera type
Wayne Le Clos
moment of inattention the cattle truck he is driving hits a car parked on the side of the road that belongs to Chrissie Bright, an ex-singer turned barrister's wife, on the run from suburbia.
Key grip
Leigh Sandow
Asst grip Gaffer
Torstein Dyrting
Best boys
Torstein Dyrting
Donald Crombie
Planning and Development
Extras casting
Maizels & Assoc.
Out-to-Lunch
Unit assistant Prod, assistant Prod, accountant
Robert Colby
Lyn Askew Nick Breslin Audio Loc Sound Design
Focus puller Clapper-loader 2nd unit director 2nd unit DOP Camera equipment Key grip Asst grip Asst gaffer Generator operator
John Dennison
3rd asst director
John Patterson Ross Brewer
Continuity Boom operator
John Dennison
Make-up
Chris Rowell Productions
Film gauge
Super 16 Kodak 7293, 7248
Screen ratio
10:1
Hairdresser Asst, hairdresser Stunts co-ord. Unit nurse Chaperone Tutor Stills photography
Government Agency Investment
Unit publicist
Development
NSW Film & Television Office
Catering
Production
NSW Film & Television Office
Art Department Art director
Marketing Total Film & Television REP
66 • CINEMA PAPERS 96
Melaini Lewis Australian Rent-A-Car
Vivid Pictures 9/93 ... 10/93... Nov - Dec 93
Post-production
Principal Credits Director
Lawrence Johnston Susan MacKinnon
Producer Scriptwriter DOP
Tracy Kublar Tony Politis Nathan Harvey
Art dept runner
Dion Beebe Paul Finlay
Editor
Annette Davey Liam Egan
Other Credits Researcher Prod, manager
Lawrence Johnston
Producer’s attach.
Helen Panckhurst Kathy Shelper John Russell
Insurer
Cinesure
Camera assistant Key grip
Sion Michel Jo Juhanson
Brad Shields Brad Shields
Asst grip Gaffer
Paul Smith Michael Woods
Bill Ross
Best boy
David Holmes Kylie Naylor
Gary Shearsmith
Make-up Safety officers
George Mannix Claude Lambert Liz Hughes
John Dolan Ken Moffatt Murray Head John Cavanagh Mick O ’Brien
Still photography Catering Asst editor
Angela McPherson Carolina Haggstrom Paul Jones Margaret Archman Carolyn Nott Margaret Archer
Jackie Munro Atlab Andrea Anderson
Laboratory
Government Agency Investment Production
AFC, NSW Film & Television Office
Cast: Les Foxcroft (Arthur Stace).
Synopsis: For forty years Arthur Stace walked the streets of Sydney and wrote on them one word - eternity.
Melissa Hasluck Rose Ferrell Michael Bennett
Special fx make-up Wig stylist Special fx supervisor Special fx Stunts co-ord. Stunts asst Safety officer Still photography
Lionel Midford Bronwen Feachie Catering Julianne White Jamie Howe
Set dresser
Rebecca Cohen
Props buyer
Kristin Reuter
Rachel Stevenson Rob Greenough Rob Greenough Rob Greenough Samantha Chalker Rob Greenough Simon Frost
Catering
Lois Portelli
Art Department Art director Art dept runners
Juliet John Michael O ’Rourke Michael Bennett
Set dresser Draftsman Propsperson Props buyers
Juliet John Bill Gibson Designs & Dimensions Juliet John Juliet John Melissa Hasluck
Wardrobe Seamstress
Vaughan Richardson
Construction Dept Construct, supervisor Construct, manager
Bill Gibson Bill Gibson
Leading hand Carpenter
Andy Bill Gibson
Set finishers
Bill Gibson Juliet John
Studios
Jodi Patterson Sarah Brill Fremantle Prison
Post-production Editing advisor Editing asst Sound transfers by Sound editor
Jolie Chandler Ari O ’Connell Creating Waves Karryn de Cinque
Musical director
Phil Bailey
Music performed by
Phil Bailey Parinita
Recording studio
Sound Mine Studios Don Connolly Dave Upson Creating Waves
Annie O ’Halloran
Karl Fehr
Rachel Stevenson Rachel Stevenson
Stacey Cross James Stockwell
Danny Baldwin
Rob Bailey
Jo Mercurio
1st asst director
Foley Mixer
SHORTS
Carolyn Nott
Louise Forster
Arriflex SRI I
Continuity
Sarah Walker
Lab liaison Vicki Sugars Adam Spencer
Peter Baker Stacey Cross Andrew Bremner
On-set Crew
Lawrence Johnston
Prod, accountant
On-set Crew
Tony Vaccher
Movielab
Rebecca Johnson Phillips Fox Solicitors
Samuelsons Film Service
1st asst director 2nd asst director
John Dennison
Antonia Barnard Film Finances
Camera Crew
2nd electrics
Laboratory
Australian dist.
Pre-production Production
Eric Sankey
Completion guarantor
Gaffer Lyn Askew
Tony Vaccher
Inti, sales agent
Kerry Mulgrew Christopher Strewe
Damien Rossi
Accounts asst.
Tony Vaccher
Shooting stock
Tony De Pasquale
Prod, company
Dave Suttor Stuart Lynch
Dry Shand Dobson
Sarah Brill
ETERNITY
Jennifer Cornwell
Location manager Unit manager
Camera operator
Camera assistant
DOCUMENTARIES
Julie Forster
Prod, co-ordinator Prod, co-ord. attach. Prod, secretary
Car hire
Clapper-loader
Make-up
Prod, manager
Accommodation
Camera operator
Boom operators
Tony Campbell
Post-production
Neg matching
Susie Maizels Maizels & Assoc. Peta Einberg
Insurer
Camera Crew
Clive Rippon
See previous issues for details on: THE SEVENTH FLOOR; SIRENS SPEED; TALK; TRAPS
Tony Campbell
Wardrobe buyer Standby wardrobe
Mixers
Chris Feld Peter Martin
Costume designer
Wardrobe
Sound design
Kim Sandeman
Sound designer Art director
Mario Varricchio
Armourer
Sound editors
Georgina Greenhill
Composer
Murray Gosson
Art dept. asst.
Sound transfers
DOP
Casting
Michael O'Rourke Michael Bennett
John Schiefelbein
Scriptwriters
Legal services
Kent Hughes Production runners
(Sam), Jocelyn Rosen (Lisa), Angie Milliken (Chrissie), Kit Taylor (Les Finnigan), Lee James (Macka McKeegan), Jeffrey Hardy (Douglas McFarlane), Roger Ward (Merv Drysdale), Maurice Hughes (Jimmy Rawlins), Tim Gaffney (Doc).
Damien Parer Jonathan Shteinman
Prod, designer Costume designers
Jeremy Coggin Sarah Brill
Donald Crombie Damien Parer
FIUA
Elise Lockwood
Prod, assistants
Jody Patterson
Insurer
Marta McElroy Marta McElroy
Props buyer
Greg Doherty
Grant Shepherd
Tracey Moxham
Standby props
Asst editor
Lab liaisons
Sound recordist
Art Department Art dept co-ord. Decorator
Justine Smith Samantha Chalker
Douglas Heck & Burrell Neil McEwin
Greg Stuart Linda Young
Unit nurse
Producer’s asst Prod, secretary
Kodak
Sound recordist Editor
Niobe Syme Melissa Hasluck
Atlab
Ann McFarlane Russell Brown
Jason Gilbert
Prod, co-ordinator
Bright Sparks Songs
Auditor
Dave Young Chris Wauchop
Prod, manager
Michael O ’Rourke
Producer
Nikki Gooley
Special fx co-ord.
John McDonald
Chris Webb Geoffrey Guiffre Sophie Fabbri-Jackson Fiona McBain Nikki Gooley
Make-up
Wayne Hayes
Laboratory
Production Crew
Camera Crew
Production Crew
Post-prod, supervisor Music supervisor
Principal Credits
Di Brown Hammond & Jewell Film Finances
Post-production
Niobe Syme Karryn de Cinque
Mark Green
Robyn Bersten
Unit manager
Casting Michael Ashton
Film stock
Christopher Lee Greg Apps
Extras casting
Construction Dept
Tony Shilton
Director Exec, producers
Cole Porter
Planning and Development
Forest Home Films
Planning and Development Casting
Mark Gainford
Gary Kier Prod, company
Vaughan Richardson
Composer
Margot Wilson Dave Skinner
Animals Animal wrangler
Synopsis: Dirk Trent, a chain-smoking, hard-drinking, low-rent private investigator is thrown headlong into a murder investigation. Someone
Juliette John
Prod, designer
(Det. McKenzie), Deborah Kennedy (Chantal), John Batchelor (Axel), Roy Billing (Sidebottom).
John Winter
Scriptwriter DOP
Paul Lepetit
Cast: Paul Chubb (Dirk Trent), Susan Lyons
Music mixer
MICHELLE'S THIRD NOVEL
Prod. company Budget
Creating Waves
Lab liaison
Ian Anderson Warrick Driscoll
Principal Credits Director Producer Scriptwriter Based on Written by DOP
Phil Bailey
Shortshock Films Mixed at Laboratory $25,000 Karryn de Cinque
Neg matching
Niobe Syme Gauge Screen ratio Michael Bennett Shooting stock Michelle's Third Novel Michael Bennett
Video transfers by
Peter Baker
Off-line facilities
Cinevex
16mm 1.33:1 Standard Kodak 7293 CFM Anne Kyle
Government Agency Investment
Camera type
WA Film Council’s Short Drama Fund
ARRI BUM Kevin May
Key grip
Gaffer
Boom operator
Cast: Marguerite Lingard (Michelle Burnett).
Synopsis: Michelle's Third Novel is a highly-charged comedy/drama about a wired writer on the verge of blowing a fuse, who discovers an entirely new way to plug into her volts of inspiration. Batteries not included.
Make-up
Kathy Courtney
Hairdresser
Kathy Courtney
NIGHT RELEASE Release Films
Budget Pre-production
$70,000 Aug - Sept 1993
Production
Sept 1993 Oct - Nov 1993
Principal Credits
Still photography
Chris Sheedy
Barry Mitchell
Producer
Barry Mitchell
Scriptwriter
Barry Mitchell
Majestic Plates
Laboratory
Movielab
Lab Liaison Grader
Kelvin Crumplin
Gauge
35mm
Screen ratio
1:1.85
5296 Cast: Mary Ann Jolley (Rosie), Reg Cribb (David), Sandie Lillingston (Lenore), Jeremy Callaghan
Composer
John Gray
Other Credits Storyboard artist Prod, manager
Hugh Freytag
Camera type
Arriflex
Key grip
Exec, producer Scriptwriter
Marcus Bosisto
DOP
Scott Venner
Sound recordist
Mark Stanforth
Prod. designer
Gaffer 1st asst director Continuity
Kath McIntyre
Make-up
Helen Evans
Safety officer
Zev Eleftheriou
Still photography
Simon Cardwell
Costume designer Composer
Other Credits
Prod. manager
David Dalla-Molle
Prod. assistant
SAFC
Camera assts
Animal trainer Sound transfers by Sound editor
Joanne Lee
Music performed by
John Gray
Mixer
Tony Young
Key grip
Hendon Studios
Asst grip
Mixed at Laboratory Lab liaison
Digital Film Laboratories
Gaffer
Mark Freeman
Asst gaffers
Gauge
35mm
Shooting stock
AGFA Pan 250/XT100
Government Agency Investment Production
1st asst director AFC
Cast: Susie Fraser (Susan), David Grybowski
Continuity Boom operator
(Brian).
Still photography
Synopsis: Brian and Susan are locked in an
Catering
empty car park one night. Or is it empty?
Asst art director Sign design Rushes editor
ROSIE’S RETURN Pre-production Production Post-production
6/9/93 - 8/10/93 9/10/93-10/10/93 11/10/93 ...
Principal Credits Director Producer
Michael Condran Michael Condran
Co-producer Scriptwriter
Michael Condran
DOP Sound recordist Art director Composers
VCA
Mixer Neg matching Laboratory Shooting stock Gauge
Douglas Brook Hugh Barton
Chris Taylor Lara Conner
Principal Credits
Ziyin Wang Chen Zhen Sharon Connolly Ron Saunders Ziyin Wang John Whitteron Paul Finlay Stewart Young Stuart Greenbaum
Director
Trevor Graham
Producer
Trevor Graham
Exec. producer
Sharon Connolly
Co-producer
Cristina Pozzan
Scriptwriter
Jan Wositzky
Synopsis: Shot in Australia and China, Dream House follows the lives of Tom and Ding, two of the 40,000 Chinese who have come to Australia to study in the last five years. Dream House follows their surprising personal journeys.
JenniMeaney
Sound recordist
FLOWERS AND THE WIDE SEA
Bronwyn Murphy
Prod. company
Denise Haslem
Dist. company
Editor
Synopsis: Inspired by the sole survivor of a U.S.
BIOGRAPHY III
Film Australia Film Australia 2/8/92-6/11/92
Pre-production Production
9/11/92-18/12/92
Post-production
Jan 93 - Nov 93
Principal Credits Director
Tony Stevens
Producer Exec, producers
Sharon Connolly Sharon Connolly
Assoc, producer
Ron Saunders Ziyin Wang
Based on the book Flowers and the Wide Sea Written by Film Australia Scriptwriters Film Australia
Eric Rolls Tony Stevens Sue Castrique John Whitteron Bronwyn Murphy
Prod, company
COMEDY (Working title)
HenryFrancis Prod, company NadiaCossich Peter Frost David Corke Cinevex 7293 16mm
her Dolly Parton fantasy while Matt retreats to
Focus puller Clapper-loader
Production Post-production
RachaelGuthridge
Synopsis: To avoid reality Libby escapes into
Shane Allen
Director Co-directors
DOP
Frank Heimans
Producer
Frank Heimans
Exec. producer
Sharon Connolly
Composer
Martin Armiger
Sound recordist
Greg Wilson
Editor
Tony Stevens
Glen Arrowsmith
DOPs
Paul Ree Chantelle Carlin Simon Smith Kalimna Brock
Sound recordists
Tim Parratt Andrew Shaw Graham Wise Andrew Porter
Editor
Frank Heimans
Steven King Trevor Graham Jill Brock
Synopsis: Based on the celebrated book of the same title, Flowers and the Wide Sea examines the fascinating and previously hidden history of one of Australia's oldest immigrant communities, the Chinese.
Synopsis (Biography III): A continuing series of thirty-minute portraits of prominent Australians.
THE FORGOTTEN FORCE
James Cahill
John Biggins
Pre-production
Sophie Benkemoun Warwick Lawrence Joanne Donahoe Jeff Bird David Cassar Neil Stanyer
AUSTRALIAN
Prod. company
Dist. company
Principal Credits
Director
his night shift. A prostitute, her client and a hens’ night finally force them to face the day.
Other Credits Unit manager
Prod, company Dist. company
air force bomber crash on their land during WWII, the Yanyuwa people created the 'Aeroplane Dance'. However performances of the dance are becoming increasingly rare, as Yanyuwa culture now fights its own battle for survival.
Stephen Joyce
Kate Madden
Ainsley Crabbe Bill Keir
Camera operator
May 93 - Nov 93
Exec, producer FilmAustralia Scriptwriter FilmAustralia DOP 8/3/93 - 25/6/93 Sound recordist 28/6/93 - 19/7/93 Editor 31/1/94-30/5/94 Composer
Michael McKenna
Andrew McClymont
Brian Birkfeld
AEROPLANE DANCE
Phaedra Vance Murray
Length 23 mins Cast: Craig Goddard (Matt), Natasha Herbert (Libby), Michael Carman (Oscar), Tabitha (Belinda).
Mary Ann Julley
Post-production
Principal Credits
Wang Jing Ming
Cast: Tamblyn Lord (Gaston), Russell Fletcher (Speltz), Simon Wilton (Grimes), Sean Ladham (Dyer), William McInnes (Crossan).
Jennifer Sabine
Douglas Brook
Kattina Bowell Lloyd Carrick
FilmAustralia FilmAustralia
Producer
DOP
Principal Credits
Director GeoffreyMcMahon Producer
Prod, assistant Camera assistant
SEETHING NIGHT Prod, company
Michael Kumnick Julia De Roeper
FILM AUSTRALIA
Martin Hoyle
Art director
Editor
Prod, company Dist. company
See previous issue for details on: CONVICTS (working title)
Maria Thompson
(Karl), Shane McNamara (The Truckdriver).
Synopsis: Relaxed from an overseas holiday, Rosie is ready to tackle the future, blissfully unaware that friends have taken care of it for her.
Gerald Thompson
Joanne Lee
David Banbury
Ian Jobson
DOP Sound recordist
DREAM HOUSE
SCHOOL
The Cisco Kidneys
Shooting stock
Director
FILM TELEVISION & RADIO
Janice Tong
Music performed by Titles
Prod. company
Post-production
Kevin May
Continuity
Synopsis: A frank and irreverent look at the development of live and television comedy during the last 20 years.
AUSTRALIAN
See previous issue for details on: FOREVA; IN LIVING MEMORY; LOOP; ONLY THE BRAVE; SON OF CELLULOID
Dist. company Pre-production Production Post-production
Principal Credits Director
FilmAustralia
Dist. company FilmAustralia
FilmAustralia
Principal Credits
Film Australia
2/8/93 - 27/8/93
30/8/93 - 17/9/93
4/10/93 - 21/1/94
Director
Producer
Exec. producer
Scriptwriter
Trevor Graham
Ray Quint Adrienne Parr Chris Oliver Julian Leatherdale
DOP Sound recordist
Peter De Vries GrantRoberts
Producer Exec, producer
Cristina Pozzan
Sharon Connolly
Other Credits
Prod. manager
Tracey Taylor
Prod. accountant
Dare Skinner
Intl. distributor
Film Australia
Publicity
Lesna Thomas
Scriptwriter
Richard Harris
DOP
Jenni Meaney
Sound recordist
Mark Tarpey
Editor
Tony Stevens
Prod. designer
Neil Angwin
Costume designer
Laurel Frank
Composer
Phil Judd
Synopsis: In August 1945, two atomic bombs obliterated the cities of Hiroshima and Nagasaki. Within weeks Australia committed over 35,000 military personnel to the British Commonwealth force where they were assigned the most dangerous area of Japan - Hiroshima prefecture.
Cast: Rachel Berger, Wendy Harmer, Mark Little, Rod Quantock, Richard Stubbs, Magda Szubanski.
THE GADFLY Prod, company
MUTTABURRASAURUS Film Australia
Principal Credits Director
Lewis Fitz-Gerald
Producer
Bill Bennett
Exec, producer
FilmAustralia Director
Dist. company
FilmAustralia Producer
Principal Credits Directors
Corrie Soeterboek
Scriptwriter
Lewis Fitz-Gerald
DOP
Geoff Burton
DOP
Editor
Dany Cooper
Prod, manager
Prod, designer
Daniel Burns
Prod, co-ordinator Prod, secretary
Prod, manager
Corrie Soeterboek
Prod, accountant
Prod, co-ordinator
Glenda Carpenter
Editor
Julie Cottrell-Dormer Bentley Dean Leesa Curtis Dare Skinner
Jung-Ae Ro
Bentley Dean
Wally Logue
Prod, accountant
Dare Skinner Dale Ivanovic
Andrew Szabo Kay Lovett
Film Australia
Amanda Thompson
Prod, co-ordinator
Publicity
Camera operator Marketing consultant
Kathryn Millis Dale Ivanovic
Publicity
Inti, distributor
Film Australia
Synopsis: With a combination of stop-motion animation and documentary style interviews, this film looks at the dinosaurs who inhabited Australia one hundred million years ago.
Prod, companies
Film Australia 18/1/93-12/3/93
Synopsis: A 55 minute dramatized documen
Pre-production Production
China for three years in 1969 as a spy, James’ release was finally secured with the help of his
Principal Credits
Post-production Directors
13/4/93 - 3/12/93 Anna Grieve James Manché
Producers
Dist. company
FilmAustralia DOP
Director Producer
Prod, company Dist. company
Sound recordist
James Manché Sharon Connolly
Ian Spruce Margaret Antoniak
Camera charts
Yoram Gross Film Studios Beyond Distribution EM-Entertainment (Europe)
Principal Credits Director
Cam Ford
Sandra Gross Tim Brooke-Hunt G. Y. Jerzy Kouichi Kashiwa
Background artists
Background assts Layout supervisor Layout artists
Kathleen Bourke Therese MacLaine Janusz Antoniak Jan Wieczorek
Assistant editor
Marzena Domaradzka
Sound mixer
Simon Leadley
Sound editors
Tim Ryan
Animation servicesColorland Animation Produc tions Laboratory Atlab
Audio post-prod.
Trackdown Studios
Henry Neville
Lab liaison
Ray Nowland Gerard Piper
Completion guarantor
Robert Smit Sue Beak Athol Henry
Background design
Exec, producer's assts
Susan Beak
Harry Rasmussan
FilmAustralia Character design Oregon Public Broadcasting Corp.
Alice Borkert
Ian Spruce
John Burge
WILDLIFE CRIMINALS
Lea Rosie
Producer’s asst
Guy Gross
Steve Lumley
Prod, companies
Prod, manager
Studio management
Yoram Gross
Exec, producers
Editors James Manché
Julia Gelhard
Kate McCarthy
Producer
Editor
Mimi Intal
Prod, supervisor
Accountant
Anne Grieve
Composer
Graphics
Yoram Gross
Rey Carlson Pat Fiske
Sharon Connolly
Synopsis (The Pram Factory): In the 1970s Melbourne was home to an experiment in living theatre, the Pram Factory collective. It became the focal point for the intellectual, creative and political life of the decade. This film tells the story of the 'Pram': brash and sloganeering, it saw alternative theatre as a step towards an alternative society.
Synopsis (Gorgeous): Gorgeous follows modern girl Hermoine on a journey of discovery, self-hatred, self-doubt and heavy chocolate biscuit abuse. Gorgeous asks why girls and women feel inadequate, and shows what they try to do about it.
Katrinka Beerens Rebecca Newbry
List test operator
THE ADVENTURES OF BLINKY BILL (series)
Kaz Cooke Catflap Animation
Asst colour stylists
Paul McAdam
Exec. producer
Scriptwriter Animation company
Gail Hall Belinda Price
Track reading
PRODUCTION
Composer
Martin Armiger
Other Credits
Kaz Cooke
Principal Credits
David Witt
Michelle Price
TELEVISION
Anna Grieve
Prod, company
18/10/93...
Stella Wakil Additional dialogue Colour styling
15/3/93 - 8/4/93
Exec, producer FilmAustralia Scriptwriter
Oct 93 - Apr 94
See previous issue for details on: ESCAPE FROM JUPITER
Thermal Falls
tary about one of Australia’s most intriguing post-war figures, Francis James. Imprisoned in
Post-production
Synopsis: Documentary which traces the illegal exporting of flora and fauna from Australia.
Film Australia
Dist. company
Production
Elizabeth Urbanczyk
THE PRAM FACTORY
Klein (Francis James as schoolboy), Lewis Fitz Gerald (The Intelligence Archivist).
Sept 93 ...
Philip Peters
Lesna Thomas
Lesna Thomas
(Interrogator), Catherine Hauser (Translator), Ji
Greg Ingram
Dale Ivanovic
Pre-production
Adam Rapson
Julie Cottrell-Dormer
Prod, manager
Film Australia
GORGEOUS
Stan Walker Michael Dunn
Animators
John Booth
Inti, distributor
old school friend, Australian Prime Minister Gough Whitlam.
Evgeni Linkov
Cynthia Leech
Other Credits
Marketing consultant
Ming (Guard), K. C. Lee (Guard), Cheuk-Fai Chan (Guard), Danny Tang (Guard), James
Keith Saggers Philip Bull
John Russell
Cast: John Derum (Francis James), Kee Chan
Bun Heang Ung
John Booth
Prod, accountant
Lesna Thomas
Chris Oliver Al Austin
Scriptwriters
Marketing consultant Inti, distributor
Publicity
Maria Szemenyei
David Roberts
DOPs
Chris Oliver Steve Newman
Other Credits
Other Credits
Athol Henry
Susan Lambert
Norman Yeend Producer Exec, producer
Sid Butterworth
Sound recordist
Andrew Szemenyel Aviva Ziegler
Exec, producer Graham Binding
Chris Oliver
Assoc, producer
Principal Credits
Prod, company
Denise Wolfson Film Finances
Governement Agency Investment Development
NSW Film & Television
Production
FFC
Cast: Keith Scott, Robin Moore (Character Voices).
Cynthia Leech Ray Nowland
Robert Smit Robert Qiu
Paul Cheng Amber Ellis Richard Zaloudek Ga Hee Lim
Miroslav Kucera
Bob Fosbery Junko Aoyama
Synopsis: The plot of the television series takes up where the film leaves off. The animals, reunited again after the destruction of their village, have chosen a site for their new home and are cautiously settling in. It is also about how these animals re-establish themselves as a community. It is about how they pick up the threads of old relationships and how they get involved again in the world around them.
See previous issue for details on: THE BATTLERS (mini-series)
Steve Lumley Sue Beak
Prod, company Dist. company
Gerard Piper
Pre-production Production
Michael Dunn Darek Polkowski Paul Fitzgerald Robert Qiu
Post-production
Principal Credits Directors
Riccardo Pellizzeri
Susan Beak
Steve Mann Producer
Jock Blair
Exec.producers
Graham Burke
Darek Polkowski
Greg Coote
Patrick Burns
Nick McMahon
Junko Aoyama Robert Malherb Susan Beak Paul Maron
Michael Lake Assoc, producer
Editors Art director
Patrick Burns
Music supplied by
Gerry Grabner
Ian Grant Graeme Hicks Suzanne Flanery Andrew MacNeil Tony Read Rondor Music
Planning and Development Script editors
Michaela Stefanova Nicholas Harding
Various Mark Wareham
Sound recordists
Dang Phuong Darek Polkowski
Jo Porter
Scriptwriters DOP
Jill Bell
4/1/93 - 14/3/93 15/3/93-17/9/93 15/4/93-4/10/93
Kay Lovett Fiona Quigley
Maria Szemenyei
New World Entertainment
Andrew Friedman Chris Langman
Andrew Szemenyei Athol Henry Senior animators
Paradise Beach Productions
Glen Lovett
Ray Van Steenwyk Animation directors
PARADISE BEACH
Paul Moran Patrick Burns
Rick Maier Alexa Wyatt
Casting
Maura Fay & Assoc.
Dialogue coach Budgeted by
John Dommett Michael Lake
Production Crew Prod, supervisor Prod, manager Prod, co-ordinator Producer’s asst Prod, secretary
Michael Lake David Watts Barbara Lucas Liza McLean Lara Griffin
Location manager
Ron Stigwood
Unit manager Production runner
Graham Ellery
Prod, accountant Accounts asst Paymaster Insurer
Margie Beattie Pat Passlow Payola Rhonda Fortescue Payola
Legal services
Hammond Jewell Phillips Fox
Post-production
Phillip Harris
Producer
Phillip Harris Sabina Harris
Jenifer Sharp
Phillip Harris Marco Zeilinger
Charles Boyle David Phillips
Sound recordist Editor
Tom Robson
Denise Morgan
Phillip Harris
Prod, designer Composer
Sabina Harris
Alison Niselle Ian Coughlan
DOP
Script editor
Sabina Harris
Prod, manager
Sabina Harris
Prod, assistant Base-office liaison
Sabina Harris
Camera Crew
Michael Healey
Camera assistant Camera type
Asst grip Gaffer
Paul Howard
Key grip Gaffer
David Elmes
On-set Crew
Bede Haines Grant Nielson Jacon Parry John Bryden-Brown Michael Baker
Electrician
Lara Robson
JV C Nigel Pugh Laurence Clark
2nd asst directors
Playback operator Boom operator
Peter Nathan
Wardrobe asst
Vera Biffone Karen Mansfield Jenni Fraser
Continuity
Boom operators
Lyn Aronson Geoff Fairweather
Make-up
Sydney McDonald Lynne O ’Brien
Make-up asst Still photography Unit publicists
Egon Dahm Maree McDonald Jason Boland Double PR Photography Marina Glass Anne Maree Moon
Government Agency Investment Production
Revolving Film Fund, Qld Government
Marketing Inti, sales agent Inti, distributor Publicity
New World Entertainment New World Entertainment
Synopsis: Paradise Beach, where the perfect white sand stretches for miles: the music is hot and the party just goes on.
See previous issue for details on SHIP TO SHORE (series) SKYTRACKERS (series) STARS (series pilot) Production
Garry McDonald
Planning and Development Peter Hepworth
Script editors
Michael Joshua Jenifer Sharp
Melinda Kay
Production Crew
Lucy Ackers
Prod, co-ordinator
Lennox Productions
Animals Animal handler
Karen Wheeler
Post-production
Jo Rippon
Sound transfers by Sound editor Music performed by Gauge
Phillip Harris Sabina Harris Rand Productions John Bishop Ken Goederee Shoot S-VHS PRO Edit 1”C
Shooting stock
FUJI H471S
Off-line facilities
Lennox Productions
Cast: Simon Hastings (Paul), Nigel Pugh (John), Kate McManus (Sheena), Elizabeth Ellison-Jones (Joanne), Robert Ringleben (Graham), Sophie Hastings (Sandra).
Synopsis: Paul is a "would-be" if he "could-be" rock star trying to make it big. With his band on a backing tape and two girl singers, Joanne and Sheena, they play in pizza shops and milk bars.
Producer’s asst. Prod, secretary
20/7/93 - 3/10/93
Eddie McShortall
Safety officer Still photography
Greg Noakes Steve Brennan Debbie Withers
Unit publicist
Darren Lewtas
Catering
Henry Ellison
Runner
Art Department Adele Flere Jill Eden
Art directors
Adele Flere
Props buyers
Rolland Pike Brian Alexander Rolland Pike Phil Chambers
Location dresser Standby props
Chris James
Wardrobe Standby wardrobe Wardrobe driver
Animals
Gina Black
Wrangler
Emma Honey Belinda Leigh Tara Ferrier
Trainee prod. Location manager Unit manager
Michael McLean Shane Warren Robert Bailey Kerry Baumgartner
Chaperone Financial controller
Jennifer Clevers
Prod, accountant Insurer
Kay Ben M’Rad Mandy Robertson
Completion guarantor
Hammond Jewell Film Finances Inc.
Karina Eagle
Construct, manager
Post-production
15/1/93 ... 7/6/93 ...
Frank Mangano Crawfords Studios
Studios
Post-production Post-prod, supervisor
Philip Watts
Asst editor
Anne Carter Andrew Scott
Post-prod, assts
Carter Lewis Visual effects
Dale Duguid Photon Stockman Chris Berry
Post house
Ant Bohun
Hirecom
Apocalypse Steve Taysom
Camera Crew
Deidre McLeland Michael Vann Sarah Pumazelle
Film stock
Camera operator
Craig Barden
Focus puller Clapper-loader
Angelo Sartore Trish Keating
Camera assistant 2nd unit DOPs
Jeff Fleck Ross Issacs Ron Hagen Gary Bottomley Ross Issacs
Key grip Grip Gaffer
Development
Marketing
7/6/93 - 25/2/94
1st asst directors
Chris Page (eps 1-7) Ray Hennessy (eps 8-13)
Mark Defriest (eps 1-7) Brendan Maher (eps 8-13)
2nd asst director 3rd asst director
Rachael Evans
Jonathan Mark Shiff
4th asst director
Henry Ellison
Michael Garcia Paul Kiely Ray Phillips
Cinevex Ian Anderson Lui Keramidas
Government Agency Investment
Samuelsons Craig Dusting T ravis Walker
On-set Crew
Lyn Molloy Filmlink
Laboratory Lab liaisons
Production
Daryl Pearson Adam Williams
Continuity Boom operator
Kodak Freight & rushes
Ron Hagan Paul Jackson
Dick Tummel
Best boy 3rd electrix
Principal Credits
Gina Black
Mark Elliot
Green room
Barry Browse
OCEAN GIRL (series)
$3.58 million
Darcy Smith Andrew Thompson
Mobile phones
Film equipment
Budget Pre-production Production
Driver Runner
Michael McLean
Barker Gosling Paula de Romanis Jet Aviation
POST-PRODUCTION
Westbridge Productions Tele Images Beyond Distribution
Alby Farrawell Frank Mangano
Legal services Travel co-ord.
Underwater DOPs
Prod, company Dist. companies
Chris Anderson New Generation Stunts
Amanda Garland Jennifer Clevers
Jo Warren
2nd unit asst.
TELEVISION
Producer
Stunts co-ordinator
Construction Dept
Tutor
Post-prod, supervisor Asst editor
Line producer
Photon Stockman
Construct foreman Sabina Harris Tina Hastings Lara Robson
Dale Duguid
Special fx supervisor
Set dressers
Story editor
Budgeted by
Wardrobe buyer
Directors
Maggie Kolev Doug Glanville
Hairdresser asst.
Jane Hyland
Marina Glass
(Lisa Whitman), Andrew McKaige (Nick Barsby), Jon Bennett (Kirk Barsby), Kimberley Joseph (Cassie Barsby), Megan Connolly (Tori Hayden), Ingo Rademacher (Sean Hayden), Raelee Hill (LorettaTaylor), John Holding (Roy McDermott), Tony Hayes (Grommet Ritchie).
Doug Glanville
Laurie Stone
Helen Leonard Clive Carter
Wardrobe supervisor
Cast: Robert Coleby (Tom Barsby), Tiffany Lamb
Prod, company
Tracy Watt
Prod, designer Costume designer
Casting Casting asst.
Make-up
Wardrobe
Andrew Scott
John Bishop Andrew Gibbard
Stunts co-ordinator Still photography
Arnie Custo Colin Phillips Clinton White Wade Savage
Philip Watts Anne Carter
Neil Luxmore
On-set Crew 1st asst directors
John Wilkinson
Editors
Composers Marco Zeilinger Kate McManus
Maggie Kolev
Make-up Make-up asst. Hairdresser
Craig Barden
DOP Sound recordist
Production Crew
Camera operator
Camera assistant Key grip
Ken Goederee
Planning and Development
Show Travel
Focus puller
Neil Luxmoore
Scriptwriter
Show Freight
Brent Cox
Peter Hepworth
Scriptwriters
Co-producer
Freight co-ordinator Camera operators
Jonathan Mark Shiff
Exec, producers
Jennifer Clevers
Director
Travel co-ordinator
Camera Crew
1/11/93 ...
Principal Credits
Qld Film Development Office Film Victoria FFC
Inti, distributors
Tele Image
Beyond Distribution
Cast: [No details provided.]
Synopsis: The story of Neri, a mysterious young girl from the ocean, and her discovery by the young inhabitants of an underwater research colony. Set in the tropical rainforests and spectacular coral reefs of far north Queensland.
See previous issue for details on: THE FEDS (tele-feature) SNOWY (mini-series)
EDITORIAL

Full Effects

There is a decided effects slant to this "Technicalities", with two articles that show how digital manipulation of film images (at film resolution) is definitely part of the SFX toolkit. It is also changing the way effects are done, with a lot of the classical requirements of motion control and blue screen being unnecessary when the computer can match motion paths and pull mattes from anything. In the Making of Jurassic Park video, Steven Spielberg talks about how model animator Phil Tippett (of Go Motion fame) swung over to 3D computer images during the production with the statement that "Motion Control is dead". (Of course the computer control of camera movement is alive and healthy, but in the limited area of high-budget model animation he's probably right.)

It all comes, as usual, down to money and time. There is an old rule which says there are three ways the job can be done - GOOD, FAST and CHEAP - but you can only choose two. What you can accomplish on a 66MHz 486 PC with Photoshop or HiRes QFX is amazingly good and very cheap, but at film resolutions it's oh so slow. To get the job done, you need to spend money on hardware and software, and, of course, you have to charge enough to earn that back.

That makes the choice in how you invest that money very important. Systems such as MATADOR, running on Silicon Graphics, look to me like an affordable entry point, especially for the cost-sensitive Australian industry. There are a few of the UK-based Parallax Matador installations here, mostly working on video. One that I'm especially keeping an eye on in Queensland, at Brisbane Post-Production Services, seems to be well-positioned.

We have also held up our "Technicalities" end of this Queensland issue with some craft stories that seemed to fit.

70 • CINEMA PAPERS 96

Effect(ive) Engineering

Steve Courtney's ILLUSIONS FX is another of the small companies that have positioned themselves around the Warner Roadshow Studios and held on through the quiet times. Courtney came from an engineering background and started in the film industry by building the "hero" car for director John Clarke's Running On Empty. It was John that pushed him into doing special effects, at first for commercials, and then recommended him for effects on Butterfly Island. Steve moved to Queensland for the Mission: Impossible series and decided to stay. He declined work interstate so that he would stay available for local production and has now established what he feels is the Queensland engineering-based effects facility.

Working from a script, he was commissioned to design and construct the effects for the Police Academy live show at the Movie World theme park. This has led him into pyrotechnics for live shows, and the company has built a range of stunt equipment, such as kick rams, a small plate that, when stood on, kicks open to throw the stuntman to heights of up to 45 feet (14m). A similar device was made to flip someone out of the water as if tossed by a dolphin.

In the lean times, he has made a range of tools such as cobweb guns and gas-powered fog machines, and recently camera rockers and lightweight geared heads.

The first geared head was made on order for Dale Duguid, a Queensland art director who is now doing visual-effects design. The head was for a Mitchell camera, and Duguid wanted something that was smaller and lighter than the conventional heads. Steve, working with his design associate John Harris, came up with an elegant solution (see photo) that he is now keen to market. The original head has been converted from handwheels to stepper motors for a motion-control rig.

The camera rocker request came from a grip who wanted a low rocker that would allow camera movement. The current design sets the camera 65mm from the ground and has manual pan and tilt.

Steve is moving the effects facility to a larger space as we go to press, so for information and prices on the above gear or "anything you can't get off the shelf" call ILLUSIONS FX on (075) 732 226.

Playing Safe

The qualifications and experience listed on the front page of Bob Wenger's resumé could barely have been accomplished by someone with his 23 years in the industry as long as they hadn't worked on any films! It is Wenger's four years in the RAAF Training Corps and 13 years Police Force background that add depth to his modest current self-description as "providing specialized rigging for stunt and special effects". Bob: I give technical assistance and provide specialized testing equipment to test rigs for acceptable safe working loads. I also run a mobile wire-rope swaging service. I'm a rope specialist and Class 1 Rigger and Dogman.

This lets Bob provide a service that he feels ... flying foxes, including one of 600 feet (180m) between two cranes, providing technical assistance and equipment.

On The Penal Colony, where he worked with the American crew for three months, Bob was introduced to a lot of the extra equipment he now has. This includes special dynometers that allow him to test load a rig so that people know it will be safe, even testing the wire swaging (the process of adding "thimbles" or eyelets to the loop ends of wire rope). The swaging device will work with diameters up to ten mm on location and tests out at 95 to 100% of the strength of the rope.

One particular stunt Bob's proud of on that series involved the 300-foot (90m) wide Barrum Falls gorge, and dropping a stunt man 210 feet (60m) into four feet (1.2m) of water. Bob:
We had to run cables across the gorge, anchor them down, and make a flying fox to travel the stuntman and the cameraman out the same distance and then drop them. The cameraman, with a hand-held camera, stopped short of the water and the stunt guy entered it. We were using special descenders from the States that Kenny Bates from Stunts Unlimited brought in. I did the wire work and got the crew down to the bottom and safely back up. I have my own Rescue stretcher, Oxy-Viva equipment and more roping and climbing equip ment than anyone in the industry at the moment. After the injury recently in NSW where the guy fell because of a wire swage, saying that there is a ‘trend’ to increased safety makes it sound trivial. But I’m taking the guesswork out of it.
FACING PAGE: RICHARD ATTENBOROUGH, LAURA DERN AND SAM NEILL IN STEVEN SPIELBERG'SJURASSIC PARK. ABOVE: SHOOTING MARTIN CAMPBELL'S THE PENAL COLONY.
the Gale Anne Hurd’s feature The Penal Colony, the Damien Parer feature Rough Diamonds and the tail end of Lightning Jack, whose interior sequences were shot at the Warner Studio 5. With another “Movie of the Week” shot in No vember and a string of features slated for next year, the lab is well on its feet. Gary feels that the local market is very sup portive of the laboratory because of the service and the quality. One of the main reasons for work being sent to Sydney is the lack of a telecine handling the studio’s NTSC require ments. Gary says that this will change when the local Videolab facility installs an NTSC telecine at the end of the year.
Bob Wenger doesn’t intend to stop there.
It is pretty much a full post laboratory at Atlab
When I spoke to him, he’d just completed the
Queensland, with only the optical sound negs
qualifications in first aid to the level required now
and titles being sent to Sydney. The lab is
for a Safety Officer in the Queensland industry.
capable of doing bulk release prints. Gary adds:
Bob can be contacted on (075) 307 547. Mobile:
Ninety-eight percent of our chemicals are re plenished and recycled. The system was de signed by the Atlab and the Filmlab technicians, and, because it’s all a new set-up, I’d have to say that we are probably more of a ‘green’ lab than Hotham Parade.
(018) 539 440.
is new and needed. Most of the American crews have a provision for riggers. In Australia we
Local S ty le
don’t specialize; the job is usually left to the
Like most businesses moving north, Atlab took
grips.
a chance when it opened the Queensland labo
Along with his move from Victoria, Bob feels
ratory facility in February this year. With no
Gary also cites a very different atmosphere in
he has moved into his new area of effects with
guarantees of production, and a local commer
Queensland as compared to Hotham Parade, or
his rope work on The Penal Colony, where he
cials industry that manager Gary Keir describes
the Sydney industry in general:
did rigging, rope safety and cliff rescue work.
as “quiet” , all eyes were turned to “the Studios” .
Previous to this, he worked with Chris Anderson
Atlab’s first major job was one of the “Movie
on the stunts for Time Trax and did a lot of large
of the Week” series, Mercy Mission, followed by
At Hotham Parade, the footage comes in there at the end of the day and you see it go out in the morning. Here it’s more shared. We get asked to CINEMA
PAPERS
96 . 71
Technicalities
go out to the set and talk to the DOP about his instructions for rushes. There’s been a totally different learning experience for us all. Being on the doorstep can be trying and interesting, be cause we are usually with them screening rushes, which is a very different atmosphere.
Rushes screenings take place at the main theatre in the studio, which is a full double-head theatre with changeover, or at the smaller lab theatre, which is a mute facility.

The Damien Parer feature Gary mentioned, Rough Diamonds, which stars Jason Donovan, is significant because it is being cut on film in Queensland. The editors are working out of a room in the Videolab building (which is also almost part of the Warner lot).

Gary has been with Atlab for almost 18 years, heading at various stages commercials and feature production, working in sales, assisting Peter Willard for a few years, and production manager of Atlab Sydney for three years before he was offered the Brisbane position. He is very happy with the move and has obviously enjoyed the experience of being part of the local excitement. Gary’s staff are the people who were brought from Sydney to start the lab, but as time goes on, and the lab and production builds in Queensland, he feels they will probably start looking at getting some keen young local people in.

ATLAB QUEENSLAND is situated on the Pacific Highway at the Warners Roadshow Studios, Oxenford. Ph: (075) 736 500

Pushing the Envelope

Glenn Fraser reports on Jurassic Park and the Changing Politics of Motion-Picture Technology

Seminars on filmmaking can be as boring as they are titillating. Filmmakers can walk away from them inspired, or dejected. Seasoned speakers can impress on their audiences a feeling of being out of one’s depth, or they can reassert the importance of “telling the story”. Sydney filmmaker, GLENN FRASER, bit the bullet and landed in Hawaii for a four-day seminar on the post-production techniques of Jurassic Park, and found the behind-the-scenes politics of the film promised that the future of effects pictures could be as interesting as the stories they tell.

By the time of this writing, most filmmakers would be familiar with the somewhat numbing feeling engendered by Jurassic Park. Banish any thoughts of plot contrivances, unfinished story development and trite characters; if you’re noticing this, then you’ve lost a true love for film. You’re forgetting why the cinema exists in the first place. Jurassic Park tells a story in the greatest Barnum & Bailey tradition. It replaces the magnificence of the elephants and trapeze with the thrilling savagery of a pack of velociraptors. There’s no denying there’s magic still left in our lives when we can still be astounded by images projected from a single piece of celluloid only 35mm wide.

Jurassic Park is more than simply an exercise in celluloid. It is an astute combination of marketing, merchandising and technology. From whichever direction we examine the wonder of modern filmmaking, it is still the pull of economics and politics that drives the cinema forward. In some cases, those same forces drag the cinema in their wake, often after having cut a bloody swathe through the artistic desire of the filmmaker. Few filmmakers can work with such demands as well as Jurassic Park’s director Steven Spielberg. He is a filmmaker whose vision extends past the final cut of the film and well into the incredibly profitable merchandising arena. One of the few directors who can bring large-budget cinema vehicles in on time, and on budget, Spielberg has opened his arms to a cost-saving appreciation of product-endorsement, fully-focused merchandising and to the newest ground-breaking technology. This combination of marketing goals is becoming a much sought-after talent in Hollywood’s filmmakers, whose upper-end projects are becoming increasingly top-heavy. All of these tools are part of the new edge in getting audiences into cinemas. The youth of today demand to be a part of a film
STEVEN SPIELBERG’S JURASSIC PARK.

through the matching products they can buy. They also have an insatiable appetite for the cutting-edge technologies that are leading a small, but significant revolution in Hollywood.

In an art form that is becoming increasingly aware of the hard facts of audience attendance, and the realization that new technologies are putting more power in the hands of the independent filmmakers, we need to examine the value of cinema as a medium. Is it what the cinema produces, or how (or indeed if) it is displayed? Jurassic Park has allowed us to see behind the scenes of some of the changes rippling through the effects industries of Hollywood, and, ultimately, these ripples will reach across the Pacific and strike our shores in some form. Whether it be in the shape of films, computer software or virtual reality, the old guard is having to shift its bulk as a new breed of voracious computer designers makes its impressions in an expanding workplace.

Jurassic Park saw the first part of a shift from effects technology into computer-generated effects technology. It is part of a new ethic that has an audience believe what it sees, rather than believe what it is obliged to believe. Today, technology creates the belief in what we see. It is no longer a wilful suspension of disbelief, but is a virtual threat by the filmmakers to astound and astonish. Seeing behind the scenes of a filmic myth doesn’t dispel the magic - it capitalizes on it. A little knowledge of the process is just enough to encourage an audience to foster the myth - and to aggrandize the magic.

The myth of belief is alive and well, and made all the more worthy in a growing age of cynicism and hype. In Australia, we had three or four months of preparatory hype to contend with before the release of Jurassic Park. Some critics, knives honed to a keen and ready edge, awaited the opening so they could be first to run in and take a slash at this sacrificial dinosaur. And then the howls of surprise as the dinosaurs got their own back. Many critics fell back in abject horror as they began to (sic) “enjoy the picture”, and find in it “a great sense of fun”. Or perhaps, for a moment, they were taken back to those first few flickering images that so impressed their child’s eyes. Their grimaces receded to smiles, and the critics were quietened.

Such is the lure of the cinema. For many of us, Spielberg has re-invented the magic. Though having lost his path for a time, catering to a softer and less critical audience with his odes to Peter Pan and extraterrestrial pathos, the man who taught us how to fear nature, to understand the wonder of outer space, and to believe again in traditional heroes, has returned to his genre. For Jurassic Park, Spielberg has in tow the most accomplished set of technicians and artists working in the effects medium today. If we are to believe the extent of the changes that are proposed within the cinematic medium, then history has been made with the advent of this film. The much-touted computerization of effects is reaching full circle at an incredible rate. It means big things for Hollywood. Only there do the overflows from the design systems of the American military machine filter down through to the film business, and thence to computer games - to give Americans a leading edge in entertainment technology.

In July of 1993, invitations were sent out to film societies and individuals the world over to visit the islands of Hawaii and hear some of the behind-the-scenes stories from Jurassic Park. A panel of noted creative and effects personnel from Hollywood’s domain promised to offer an insight into some of the most innovative techniques used in modern cinema. Through lack of interest or communication, only five Australians showed their faces at a convention numbering around 200 seminar guests. Dedicated and mortgage-laden international filmmakers introduced themselves at an informal launch, and proceeded to explain away the reasons why they had offered to risk so much money in what could possibly be nothing more than a groupie-laden and disappointing seminar.

The event was congenial, and the enthusiasm of the guests seemed to match the experience of the panellists. Hollywood’s effects people are a gentle, reclusive breed for whom the light of day must seem a rare privilege. Kauai is one of the more beautiful of Hawaii’s islands, and to see in person the grandeur and size of a beautiful landscape, which is so often created artificially, is enough to humble anyone.

The platform for the conference was informal, and the excess of Hawaiian shirts was as clichéd as one could imagine. The speakers ranged from live-action dinosaur creator Stan Winston, visual effects co-ordinator Dennis Muren, dinosaur supervisor Phil Tippett, producers Jerry Molen and Lata Ryan, marketing consultant Marvin Levy, director of photography Dean Cundey, special dinosaur effects creator Michael Lantieri and sound designer Gary Rydstrom. Co-screenwriter and author of the original novel, Michael Crichton, had to pull out of the seminar at the last moment - a disappointing turn for those wishing to grasp an insider’s view of Hollywood’s treatment of writers. In all, it was a goodly list of names to represent the best of what this style of film had to offer.

The sessions began with a re-showing of the original film. This of course didn’t apply to any Australians present. For us it was the premiere - the film was due to open in Australia the following week. So whilst many of the seminar attendees were already discussing their opinions of the effects, my partner and I had merely to nod knowingly and expect all to become clear over the next few days.

We were not disappointed. The film stood out above any other effects film we’d seen, and the following four days of seminar talks proved as enlightening as the film was entertaining. The cohesion of talent in a traditionally fickle industry was surprising. The mood was supportive of all concerned, and the praise for Spielberg stems not so much from the matter of his being a premier director of big films, but from his overall vision for a project and the simple good manners he employs to achieve it.

Perhaps the most impressive feat accomplished by the designers of Jurassic Park was in the area of risk investment. This also served to generate some of the more delicate politics during, and since, its completion. At the helm of the project of dinosaur design and supervision was an artist with a strong pedigree in Hollywood. Phil Tippett was the natural successor to
RIGHT: INDUSTRIAL LIGHT & MAGIC CREATED A PAINTED IMAGE OF THE DINOSAUR’S SKIN TEXTURE AND THEN MAPPED IT ONTO A 3D DINOSAUR MODEL. PHOTO: 1993 UNIVERSAL. COURTESY OF INDUSTRIAL LIGHT & MAGIC.

the Willis O’Brien/Ray Harryhausen school of special effects. He’d pioneered his own form of stop-motion photography, Go Motion, through the Star Wars trilogy, and has since been a much sought-after talent. Tippett had collaborated with most of Hollywood’s big-name effects producers, but Jurassic Park was to prove a watershed in his career.

On the project since early 1991, Tippett was to oversee the design and implementation of the film’s dinosaurs. At that stage, Go Motion and live-action robotics were seen to be the answer for the effects. It was a proven ground in the industry and there was already a stock of seasoned artists in town with a working knowledge of the medium. One of Tippett’s co-workers on the project, Lucasfilm’s Industrial Light & Magic, was experimenting with a new type of effect that was, in essence, computer-designed. Developed in the early 1980s on Barry Levinson’s Young Sherlock Holmes, it was still on shaky ground, but director James Cameron’s enthusiasm for new technology, and his faith in the medium, spurred the workshop onward. The computer illustrating effects (such as morphing) from films like Terminator 2: Judgment Day and The Abyss have since gone down as just another tool for the filmmaker. Some of these effects are now familiar to cinemagoers and advertising people alike.

The spark of CGI (computer-generated images) had been fanned, and its potential was seen by some filmmakers, including Spielberg, as awesome. During those initial stages of design and planning, Spielberg threw a wad of money at Industrial Light & Magic’s Dennis Muren and asked him to take a closer look. As Tippett’s dinosaurs began to come to life in the form of his electronic storyboards (an incorporation of stop-motion photographed dinosaurs and storyboards), Muren’s team began to investigate the possibilities of living, breathing computer-animated dinosaurs. Plate-photographed against modern-day backgrounds, Muren’s lizards began to take shape. Tyrannosaurus, resplendent in verdant, striped colouration, is shown in the early production bible tests as taking a Sunday stroll along a fully-lit country road. Though initially the lizard stepped with more grace than an oversized ballerina (defined ideas of the creature’s movement and size had yet to be settled), the results were astounding. Clearly, a rethink of the effects budget was in order. Desperate attempts at producing movement blur on Tippett’s stop-motion animals were accomplished, and the effect became noticeably more realistic. But even the computer-aided Go Motion was no match for the moving illustrations produced by a growing number of employees over at Industrial Light & Magic. It was slow, painstaking work, but as the artists, recruited from as wide afield as graphic design and Harry and Harriet operations, began to come to grips with their subjects, dinosaurs began to walk the earth again.

With no little diplomacy, Spielberg tore the carpet from beneath Tippett’s design team and directed most of it towards Industrial Light & Magic. Tippett, left floundering for a moment, still had a valuable part to play in operations. He was still more familiar with the individual dinosaurs than any of the other artists. He had immersed himself in their history. His advisers were palaeontologists and his was the choice to wade through an ever-widening polemic of opinion as to the origin of the dinosaurs, their reptilian or avian similarities, warm or cold bloodedness, and even to the extent of surmising as to which stage of evolution they would be at now. Phil’s role was one of mentor to the artists. He translated the scientific garble of the rockhounds into almost anthropomorphic terms, in effect giving each of the dinosaurs their personality. Yet still, as far as the seminar was concerned, there was some degree of bitterness in his features as questions from the audience continued to address the issue of CGI.

By the end of the convention, the majority of the seminar audience had woken up to the effect that this new technology was to have on Hollywood. The further the post-production personnel went, the more adventurous they became. In effects-producer Janet Healy’s words:
The first of the effects-shots to be manipulated were the full-daylight brachiosaurs - and, with them, one can still pick up a few of the inconsistencies. But as the production smoothed out, we sought out fresh challenges. The final scenes between the raptors and the T-Rex looked like they were a nightmare to orchestrate. They were time-consuming, but, after finishing them, it looked like nothing was beyond our reach.

Yet for all the attention paid to the dinosaurs, one effect within the film was to signal perhaps the greatest threat to the Hollywood system.

A few years ago, Hollywood’s legends came out of retirement to protest the colourization of the classics. Interference with the original artwork was the closest thing to “original sin” anyone in Hollywood could imagine. The war was fought, and lost, by the purists. Money had its way, and soon everything from the Marx Brothers to Buster Keaton found a new audience whose sense of appreciation ran to anything that wasn’t old and cheap - that is, black-and-white. The issue died away, and the finance machines began to crank onward.

Now the issue of CGI replacement appears to set a few passions aflame. Well into production of Jurassic Park, the artists became so confident of their CGI techniques that the director took the liberty of enhancing some of the stunt work with its wonders. The sequence showing the main characters being pursued through the air-conditioning system of the main complex entails one of the raptors trying to jump through the ceiling to grab the young girl. The animal misses its chance, but the girl threatens to fall back into its snapping jaws. As the girl hangs on for dear life, she flashes a look towards her rescuers before being lifted to safety. In reality, the body belonged to a stunt woman - the face, to the actress.

Take a moment to introduce yourself to the future of effects in film. Not all the bluster and hype of dinosaurs or aliens or terminators, but in the humble replacement of actors with computer-generated images. Already used to great effect in Wolfgang Petersen’s In the Line of Fire to remove the face of a real President to replace it with that of an actor, this technique signals a growing area of discussion in the politics of modern cinema. As one of the seminar guests proposed, “Are we then threatened with the prospect of a sequel to The Wizard of Oz, with the original cast members?”

The Diet Coke commercials of a year ago and Carl Reiner’s Dead Men Don’t Wear Plaid already showed us how clever we could be in the incorporation of old film into new footage, but CGI promises a far more novel concept. To take pictures of long-dead actors, turn them into graphic images to be manipulated at will, and gift them with the voice of a talented mimic seems like a marketing valhalla for Hollywood’s dream machine. There is no longer the problem of productions halting because of the untimely death of an actor - five years hence would perhaps have seen Brandon Lee’s Raven make it to the screens - actor intact - through the genius of CGI. So confident are the big players that these techniques will take over from traditional motion-control effects work that the likes of Cameron’s new effects unit, Digital Domain, has refrained from purchasing any motion-control stages. Scott Ross, co-founder of Digital Domain, ex-Industrial Light & Magic and Go Motion, argues, “In three years, my crystal ball says we probably won’t be doing things that way any more. Where we’re going to make our investment is in computer technology.” Like the promises of virtual reality, however, there is probably a lot of hype. But should we start taking out copyright on our images just yet?

While in the U.S. legal personnel are already on the trail of this potential minefield, the artists and technicians at the coalface are calling for commonsense. Just as computers have swallowed jobs in many fields, they have also created many new positions. The animated film did not replace live-action cinema; it simply split and formed its own particular medium. The members of the Jurassic Park team promote CGI as nothing more than a new tool for the filmmaker - innovative, yes - but no more soul-destroying than the invention of the Steadicam. Muren states that they had the luxury of no one ever having seen a dinosaur before. The design team could “get away” with errors that would never work if they were trying to illustrate a person’s myriad facial tics in close-up. Phil Tippett:

Just because we invented the electric stove, it doesn’t mean we disregard our four-thousand-year relationship with fire. The essentials are just as relevant today. A filmmaker is still a storyteller, and these advances will simply begin to bring to many more individuals the access to produce their own films.

We all know filmmaking to be an intensely collaborative process. Such a medium is also hideously expensive, and that cost is just the sort of barrier that prohibits perhaps the most talented of our filmmakers from ever seeing their dreams brought to life. Just as day-to-day survival prevented those without a sponsor from creating art with brush and canvas four hundred years ago, it doesn’t mean that these tools aren’t available today. We must believe that these changes in the face of cinema will serve to bring its creation to a wider market. Ultimately, the means of survival for these filmmakers comes in the market in which their work is seen, and not simply in the manner in which it is produced. The relationship of a film to its audience is the important link that gives worth to the medium. Whether that film shows the grandeur of dinosaurs, the computer-realized face of a long-dead actor, or the trace of shape and form that does without the interaction of a performer, what use will it be if its market is closed off from view?

In the future, the marketing of merchandising and special effects will take on an ever-greater rôle in the production of big-budget films. It may produce a polarity in filmmaking that suffers the survival of the block-buster, and the intensely personal home-made video product, and precious little in between. Whatever the turnaround, it’s going to be a demanding generation in all sectors of the film community, and, if we’re lucky, it may even contain a few surprises.

In ten years’ time, the most important film in the history of cinema will be created on an outback property two hundred miles west of Coober Pedy. Totally computer-generated, the filmmaker will have never left her house to write, direct or edit it. It will be a visionary masterpiece of truly independent filmmaking. And apart from the filmmaker, no one will ever see it.

Sources:
“High Technology Filmmaking: Behind the Scenes of Jurassic Park”, American Film Institute Conference, Kauai, August 1993
J. Duncan, “The Beauty in the Beasts”, Cinefex, 55, 1993
J. Ferguson and Peter Galvin, “Big”, Filmnews, vol. 23, no. 7, September 1993
D. Shay, “In the Digital Domain”, Cinefex, 55, 1993
LEFT: USING SPLINE-INTERPRETED MASKS, INDUSTRIAL LIGHT & MAGIC WAS ABLE TO MAKE THE COMPUTER-GENERATED DINOSAURS FIT INTO LIVE ACTION SCENES IN A BELIEVABLE WAY. PHOTO: 1993 UNIVERSAL. COURTESY OF INDUSTRIAL LIGHT & MAGIC.

Matador

I’ve been hoping for an Australian film application story on the use of Parallax Software Inc.’s MATADOR. Perhaps it’s just that the industry is quiet, or that we’re just not making effects pictures. You realize that it’s more than that when you read a piece such as the following article that will appear in the latest Silicon Graphics Users magazine. Somehow the local producers don’t understand how sophisticated digital film effects have become, how they can save money and where to get them. Allowing something for the self-promotional tone, here are some of the highlights from a longer piece that has examples of work done on Jurassic Park, Coneheads and creative uses such as in the Clint Eastwood example. Our thanks go to the local distributor, Computer Effects, for permission to reprint the following examples, and for the full story contact them at the address below. (F.H.)

CRISPIN LITTLEHALES

At SIGGRAPH ’91, you had to scour the show floor just to find Parallax Graphics Systems Ltd, the small British company that had just released its first product for the U.S. motion-picture and video industries. Called MATADOR, the new system offered users of Silicon Graphics systems a breadth of animation and special-effects capabilities far beyond those previously available on a single workstation. In addition to tools for modelling, rendering, animation, compositing and special effects, MATADOR provided the first truly robust 2D paint solution for Silicon Graphics platforms.

Two SIGGRAPHs later, you couldn’t miss Parallax. The company’s double-decker booth occupied a prominent location adjacent to Silicon Graphics’ highly successful Discovery Park exhibit. Nestled between Softimage and Alias Research, Inc., Parallax had taken its place in the firmament, and with good reason. In the two years since its debut, MATADOR has become something of a standard, particularly for 2D painting and rotoscoping, in post-production and digital-effects operations throughout California. In fact, there are more than 400 MATADOR licences currently in use, with the most recent orders coming from Digital Domain, The Post Group, Pacific Data Images and Pacific Title Digital. Established users include Industrial Light & Magic (ILM), R/Greenberg Associates Los Angeles (R/GALA), Sony Pictures Imageworks, Composite Image Systems, Video Image and Cinemotion, all of which used MATADOR to create effects for nearly every major film released this summer.

One reason for MATADOR’s acceptance among the cognoscenti is the way the system has evolved. Parallax’s development team is made up exclusively of people with experience in television production, film production, animation or graphic arts. After seeing MATADOR 1.0 at SIGGRAPH in 1991, ILM roadtested the system for three months and discovered a number of areas where it felt performance could be enhanced. ILM’s suggestions were incorporated into MATADOR 2.0.

Last Action Hero

…etary and off-the-shelf software. MATADOR was employed to produce a range of effects, from fairly straightforward wire and rig removal to very complex rotoscoping and retouching. One shot in particular posed some interesting challenges, Robertson remembers:

Near the end of the film there is a sequence where the character of Death from Bergman’s Seventh Seal swings its scythe straight out of the movie into the theatre. We used the perspective tool in MATADOR to create that distortion since the scythe had been shot flat in the first place. We had to distort it in true perspective to make the movement look real. I thought we were going to have to get into some kind of fairly involved 3D mapping or some kind of odd morph work to fit it in. I was pleased to see how effectively the perspective tool worked and also to learn that we could write a macro to batch process the whole length of the shot.

The automation capabilities built into MATADOR enabled Robertson and his team to complete multiple-frame shots in less time and with less repetitive effort. For example, the key to the plot of the movie is revealed in a scene early on when Danny, a young fan of Jack Slater (Arnold Schwarzenegger), “passes through” the screen to his hero. Throughout the balance of the film, we follow Danny, Slater and some nasty villains as
Ten different effects companies worked on Last
they leap back and forth through the silverthresh-
Action Hero concurrently, each bringing its own
old between fantasy and reality. Robertson ex
signature to the segm ent it produced. R/
plains:
Greenberg Associates Los Angeles (R/GALA) performed a dual role on the project. In addition
MATADOR GOES HOLLYWOOD
R/GALA’s own work was done primarily on Silicon Graphics systems using a mix of propri
of a Manhattan cinema into the fantasy world of
The Fugitive. The significant things to look for are the shift away from blue screen and the
we saved on shooting and stage costs by avoid ing the need for blue screens. We were dealing with a five-month schedule but Columbia Pictures’ visual effects supervisor, John Sullivan, and his team shot footage through out that time, and with Miller Drake [the visual effects editor on the project] made sure the sequences were finalized and sent as fast as possible to the various post-production houses. There were a lot of experienced people who knew exactly what needed to happen and how to make it work. The visual effects producers at Columbia, Alison Savitch and Chuck Comisky, made a heroic effort to keep the momentum going.
to producing roughly 40 of the special-effects shots, R/GALA served as the film ’s visual-ef fects consultant, responsible for making the final production look as cohesive as possible. R/GALA’s Stuart Robertson, the Digital Ef fects Supervisor on the project, isn’t likely to forget the challenge of straddling the two as signments anytime soon. He recalls: The logistics of assembling the show were quite amazing. We were gratified that all the vendors came through on time and produced great work. There were close to 150 effects shots and the budget was quite modest - probably less than it would have been in an optical situation because
To capture these transitions, a film crew shot the background scene with a hole in a solid wall or a neoprene sheet. The actor then would put a hand, an arm, or his whole body through the hole. Since the actor was supposed to be reach ing into a theatre, light streaked through the hole and illuminated him. Then the wall or sheet was replaced with a beauty wall and shot in correct perspective as an empty plate. The next step was to blend the two shots. R/ GALA used MATADOR to rotoscope the charac ter, eliminate the neoprene sheet or set wall, and add the beauty wall. Then they animated the edge where the hand or body was passing through, creating the contour between the solid wall and the character. R/GALA’s animators and
As of Monday 4th October, we're moving in to 176 Bank Street, South Melbourne - right in the heart of Melbourne's film industry. This means a more user-friendly, face-to-face service right on your doorstep. At our new Bank Street address we will be moving up, up on to the first level of purpose-built accommodation. (If you can recall our old location you will agree this is indeed a real move up!) And of course, we are moving along, along with the times. Apart from our excellent colour, black & white and sound services, we are able to offer overnight video rushes, Osc/r tape to film interface, Hi Res Kines, Digital frame store colour grading and a brand new 30 seat theatrette. So when you're on the move, drop in and see the new boys on the block.

DFL Digital Film Laboratories, 176 Bank Street, South Melbourne, Victoria 3205. Telephone (03) 696 5533. Facsimile (03) 696 9300.

Neg Matching to Offline Edit or Cutting Copy - NEGTHINK'S COMPUTER MATCHBACK SYSTEM. 35mm & 16mm negative cutting. Scans Keykode™ in 16mm, super 16mm or 35mm, producing frame accurate neg cutting lists from EDLs produced by all linear or non linear editing systems. CONTACT Greg Chapman, Suite D, 172 Film Australia Building, Eton Road, Lindfield NSW 2070. Tel: (02) 416 2633. Fax: (02) 416 2554.

STOCK FOOTAGE LIBRARY - CHRIS ROWELL PRODUCTIONS PTY LTD, 105/6 Clarke Street, Crows Nest NSW 2065. Ph: (02) 439 3988. Fax: (02) 437 5074.

OPEN CHANNEL - THE MELBOURNE CENTRE FOR FILM AND VIDEO TRAINING. January '94 Update • Video Summer School • Young Filmmakers Summer School. New Certificate Courses: full & part time certificate courses in Video Production commence in '94. Enrol NOW. Open Channel is a registered non-TAFE provider. Open Channel acknowledges assistance from the Australian Film Commission and Film Victoria. Award winning production house. OPEN CHANNEL, 13 Victoria Street, Fitzroy, Vic. 3065. Ph: 03/419 5111. Fax: 03/419 1404. Production • Facilities • Training
rotoscope artists produced a matte for every other frame and used MATADOR's sequencing capability to compute the in-betweens. Robertson:

Rather than cutting a hard-edge type of matte, we used digital airbrush tools to create a matte with a lot of motion blur. Then we used MATADOR to go back in and retouch certain areas. We'd put that together and send a semi-composite back to our New York operation where the animation for the light streak and a little blue magic effect were computed.

Sony Pictures Imageworks contributed 46 shots to Last Action Hero, including a wonderful depiction of Danny's imagined version of Laurence Olivier's classic interpretation of Hamlet. Schwarzenegger replaces Olivier as the action explodes on the screen. Tim McGovern, visual effects supervisor for Sony Pictures Imageworks, remarks:

We used MATADOR throughout that sequence. We had to deal with a combination of black-and-white scenes and colour scenes. We desaturated and enhanced the colour footage to look black-and-white, and then we added colour elements to the black-and-white to make it fit with a kid's imagination. At one point, Arnold/Hamlet lights a cigar, picks up Claudius, and throws him through a stained-glass window. As the window breaks, colour spreads into the shattered glass. Since the stained glass was originally shot in black-and-white, the effects team painted and tracked it through a non-motion controlled camera move, and performed an animated wipe starting from the point where the glass is broken. In the final stages of the segment, Arnold/Hamlet lights another cigar and sets off an explosion. According to McGovern, "That was colour footage, so we had to desaturate Arnold and the castle, while enhancing the explosion."

MATADOR's ability to automate repetitive operations allowed Sony to achieve the desired colour effect without wasting time. For instance, in one scene Sony was asked to colourize Arnold/Hamlet's eyes and skin tone to make the original black-and-white footage look like an old Technicolor movie or an overzealous Ted Turner colourization. By establishing lookup tables and some complex mattes, the animators were able to set up a macro in MATADOR to process all the frames automatically once the rotoscoping had been done.

Also, by relying on MATADOR's greater than 24-bit colour depth, Sony was able to produce an intricate matte for the backdrop of a scene in which Jack Slater swings from a Times Square rooftop to save Danny dangling from a rain-slicked gargoyle 11 stories above street level. McGovern recalls how Sony crafted the illusion of imminent peril out of a relatively harmless sound stage shot:

Jack and Danny were supposed to look like they were 11 stories above the ground with people moving below. They actually were a story and a half above the stage floor. We added the extra 10 stories as well as an atrium, and shot footage to place the unsuspecting pedestrians beneath them. From that, we produced a matte painting that matched the set piece and the imaginary stories and atrium. Then we generated a digital rain element, and animated searchlights passing through the rain. We matched the position and angle per frame of those searchlights. That element really helped to put the whole shot together.

In the Line of Fire

In the Line of Fire involves a CIA agent who is looking to atone for an embittering defeat by working with fellow agents to protect the President during his re-election campaign. There is a critical moment in the film where the antagonist, John Malkovich, reminds the hero, Clint Eastwood, that he was present at Jack Kennedy's assassination and that he could have saved the President had he responded better to the crisis. To establish this defining moment, John Nelson, the visual effects supervisor on the film, and the team at Sony Pictures Imageworks rotoscoped Eastwood back in time by taking footage of him from Dirty Harry (circa 1971) and giving him a digital haircut, lapel trimming, and tie thinning so as to make him look like a secret service agent in 1963. The team then placed the 1960s version of Eastwood behind JFK's shoulder in newsreel footage of that fated visit to Dallas. McGovern:

We wrote code to take the motion out of the plate in which Eastwood originally appeared. Although there were a lot of tricky things we had to write ourselves, such as grade enhancers, we used MATADOR to do much of the paint work, including the mattes. When you see the shot, it really does seem to place Eastwood at the scene and it fits in well with the way the motion works. And he really does look much younger.

Free Willy

One of the most difficult effects created for the film was not included on the original shot list. Most people who go to see Free Willy probably know that an orca couldn't possibly leap over a massive breakwater in a single bound. However, the special-effects team at Video Image did such a convincing job on this dramatic shot that it's hard to believe otherwise. John Wash, Video Image's art director and the on-set visual effects supervisor for the film, explains how they got Willy to take the plunge:

First, we shot a rough model of the whale and Richard Helmer, who was responsible for the physical effects, created a hydraulic rig to thrust the model through the surface. We scanned that footage and began the process of constructing a whale database from a model we had sculpted. A member of our computer graphics team, Andy Kopra, created a numbered grid corresponding to that model of the whale. Using that information, I was able to create a texture with characteristic markings for Willy's skin. That was then mapped onto the computer graphics model of Willy. I used MATADOR to paint it as a stretched-out image, almost as if we had literally skinned Willy and laid his surface out on a flat plane. Once the texture was roughed in, we mapped it onto the whale and I made adjustments until the fit was perfect - altogether it was a very quick procedure that required only a day or so. We also used Renderman effects in addition to the texture map to give the skin a glistening, natural look.

John DesJardin then began work on combining our computer graphics Willy with a shot of the breakwater and the young boy who was urging Willy to escape. Next, he animated the orca and combined the animation with the background image, from which he had already removed the fibreglass model. There also was a matte painting and some other splash elements that were added at that point to enhance the effect.

When Willy reaches the height of his leap, there is a cut to the young boy's point-of-view as he watches the whale soar over him and plunge back into the ocean on the other side of the breakwater. That part was created by a practical shot of a full-sized whale model being panned as water was being splashed. The whole sequence consisted of a computer-generated shot followed by a full live-action one combined with yet another computer-generated shot - all seamlessly blended together.

Speaking of the sequence where the newly-liberated Willy is reunited with his pod, Wash elaborates:

We wanted to identify Willy by his bent dorsal fin - a condition that is common to orcas in captivity. Compositing a computer-generated bent fin onto one of the orcas filmed by natural wildlife photographer Bob Talbot involved a pretty sophisticated process. Usually, when we do effects photography, we plan to do our live-action shooting in a very controlled situation. In this case, though, the footage we were given had been taken by Talbot from a moving boat using a hand-held camera. There was no control, the camera just followed the action. We had to place the new fin on the whale while matching the fin to the motion of the whale and while taking the motion of the camera into account.

First, we removed the original fin by tracking different areas of water and compositing them over the original whale's fin. Then Andy Kopra modelled a bent fin using Renderman to light and shade it so it matched the overall scene. He rotoscoped the fin frame by frame to match the position of the whale's body. And we used MATADOR to blend the fin and smooth out the image in several instances, as well as to clean up some of the edges and artefacts left by the compositing process.

It was quite a tough piece of work. Originally, we thought we could simply modify the fin, but then we decided we needed to rebuild it completely. But getting the new fin in and out of the water and making sure that all the artefacts had been removed ... well, that's an art.

Real or Synthetic

The net effect is that even the most incredible things can be made to appear real. One effects supervisor, in fact, says he's always disappointed when someone compliments him on a particular effect: "On the whole, we'd just as soon you didn't notice."

Note: Crispin Littlehales is a freelance writer living in San Francisco. During intermissions, she can be found standing in line for popcorn.

Computer Effects: 109 Union Road, Surrey Hills, 3127. Ph: (03) 899 1993. Fax: (03) 899 1995.
For the finest in motion picture cameras

CAMERAQUIP Film Equipment Rentals & Service
64-66 Tope Street, South Melbourne 3205. Phone: (03) 699 3922. Fax: (03) 696 2564.
330 King Georges Ave, Singapore 0820. Phone: [65] 291 7291. Fax: [65] 293 2141.

FRAMEWORKS LONG FORM SUPPORT HAS CHANGED POST PRODUCTION FOR GOOD

The day Frameworks introduced the first Avid to Australia we set about refining the way a long form project should be supported in the new 'Non-Linear' environment. Working with top editors and producers of drama, documentaries and features, Frameworks' Stephen Smith has perfected a system that takes care of everything: from rushes to neg matching, daily budget and progress reporting. And, apart from always being accessible, Stephen still supervises complete or refresher Avid courses for the editor. Frameworks is the most experienced digital Non-Linear facility in Australia. Call Stephen for a quote. His accurate budgeting and proven post production back-up can only be good for your next project.

FRAMEWORKS • 2 Ridge Street, North Sydney 2060. Phone: (02) 954 0904. Fax: (02) 954 9017.
TEN CRITICS' BEST AND WORST

A PANEL OF TEN FILM REVIEWERS HAS RATED A SELECTION OF THE LATEST RELEASES ON A SCALE OF 0 TO 10, THE LATTER BEING THE OPTIMUM RATING (A DASH MEANS NOT SEEN). THE CRITICS ARE: BILL COLLINS (NETWORK 10; DAILY MIRROR, SYDNEY); SANDRA HALL (THE BULLETIN); PAUL HARRIS ("EG", THE AGE; 3RRR); IVAN HUTCHINSON (SEVEN NETWORK; HERALD-SUN, MELBOURNE); STAN JAMES (THE ADELAIDE ADVERTISER); NEIL JILLETT (THE AGE); SCOTT MURRAY; TOM RYAN (3LO; THE SUNDAY AGE, MELBOURNE); DAVID STRATTON (VARIETY; SBS); AND EVAN WILLIAMS (THE AUSTRALIAN, SYDNEY).

FILM TITLE (Director) | Collins | Hall | Harris | Hutchinson | James | Jillett | Murray | Ryan | Stratton | Williams | Average
BAD LIEUTENANT (Abel Ferrara) | 9 | 7 | 6 | 6 | 7 | 2 | 2 | 6 | 6 | - | 5.6
BEDEVIL (Tracey Moffatt) | - | 6 | 2 | 6 | - | 2 | 3 | - | 8 | - | 4.5
BLACKFELLAS (James Ricketson) | - | 6 | 4 | 7 | - | 8 | - | - | 7 | - | 6.4
BOXING HELENA (Jennifer Chambers Lynch) | - | 2 | 2 | 0 | - | 2 | - | 3 | 1 | - | 1.6
CONEHEADS (Steve Barron) | 5 | - | 5 | - | 3 | 4 | - | - | 0 | - | 3.4
CRUSH (Alison Maclean) | - | 6 | 3 | - | - | 5 | 2 | 3 | 6 | - | 4
DAVE (Ivan Reitman) | 7 | 7 | 5 | 6 | 7 | 4 | 5 | - | 6 | 7 | 6
DESPERATE REMEDIES (Stewart Main and Peter Wells) | - | - | - | 4 | - | 7 | 4 | - | 5 | - | 5
ETHAN FROME (John Madden) | 9 | 7 | - | 7 | - | 4 | - | 7 | 5 | - | 6.5
HERCULES RETURNS (David Parker) | - | - | 3 | - | 4 | 5 | - | - | 6 | - | 4.5
HOMELANDS (Tom Zubrycki) | - | 8 | 5 | - | - | 2 | - | 5 | 8 | - | 5.6
IN THE LINE OF FIRE (Wolfgang Petersen) | 7 | 8 | 6 | 7 | 8 | 7 | 4 | 5 | 8 | 8 | 6.8
J'EMBRASSE PAS (André Téchiné) | - | - | 6 | 7 | - | 2 | 4 | 6 | 2 | 6 | 4.7
KING OF THE HILL (Steven Soderbergh) | - | 8 | 5 | 7 | - | 2 | - | 6 | 5 | - | 5.5
MAN WITHOUT A FACE (Mel Gibson) | 6 | 7 | 3 | - | 6 | 3 | - | - | 6 | 3 | 4.9
MUCH ADO ABOUT NOTHING (Kenneth Branagh) | 9 | 7 | 5 | 6 | - | 5 | 7 | 8 | 7 | - | 6.8
THE NOSTRADAMUS KID (Bob Ellis) | 7 | 7 | 5 | 2 | - | 4 | 5 | - | 7 | 7 | 5.5
OTHELLO (Orson Welles) | 10 | - | 7 | - | - | - | 10 | 8 | 8 | - | 8.6
POETIC JUSTICE (John Singleton) | - | 4 | - | - | 6 | 3 | - | 2 | 6 | - | 4.2
PRELUDE TO A KISS (Norman René) | 7 | 3 | 1 | 1 | 3 | 3 | - | 4 | 2 | - | 3
THE PUBLIC EYE (Howard Franklin) | - | 6 | 6 | 5 | - | 7 | - | 5 | 5 | - | 5.7
RED ROCK WEST (John Dahl) | - | - | 2 | 7 | - | 7 | - | 8 | - | - | 6
RISING SUN (Philip Kaufman) | 8 | 2 | - | 5 | 7 | 4 | 4 | - | 6 | 5 | 5.1
SILVER BRUMBY (John Tatoulis) | 7 | 3 | 5 | - | 7 | 5 | - | 5 | 8 | - | 5.7
THE STORY OF QIU JU (Zhang Yimou) | 9 | 7 | 6 | - | 8 | 9 | - | 8 | 7 | - | 7.7
WATERLAND (Stephen Gyllenhaal) | - | - | 5 | - | - | 6 | 5 | 7 | 3 | - | 5.2
Bank of Melbourne

No Transaction Fees on your Personal Banking ■ No Transaction Fees, regardless of how many transactions you make. ■ Earn good interest. ■ Receive a free VISA Card* or Bank of Melbourne Card* and a free cheque book. ■ Bank on Saturday from 9 to 12 (most branches). On weekdays from 9 to 5.

Bank of Melbourne cuts the cost of banking.

Head Office: 52 Collins Street, Melbourne 3000

* Our cards are debit not credit cards. You only spend the money in your account. Government duties apply to all transactions.
Five state-of-the-art Studios. Fifty seat Theatrette. Production Offices. Editing Suites. Make Up, Laundry, Wardrobe & Dressing Room facilities. Set Construction Workshops. Studio Commissary. Library. 50 acre Backlot. Travel & Accommodation Office. Location. Laboratory. Video Post Production. Visual Effects.

WARNER ROADSHOW MOVIE WORLD STUDIOS
AUSTRALIA: Pacific Hwy, Oxenford, Gold Coast, QLD 4210. Ph: (61 75) 886 666. Fax: (61 75) 733 698.
USA: 2121 Avenue of the Stars, Los Angeles, CA 90067. Ph: (310) 282 5300. Fax: (310) 282 5339.
Singleton utility in java
Singleton utility in java — Please provide me an example of a singleton utility in java.
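A minimal sketch of a singleton utility in plain Java, in answer to the question above; the class name `Counter` is illustrative, not from the original post. The constructor is private and the single shared instance is exposed through a static accessor.

```java
public class Counter {
    private static final Counter INSTANCE = new Counter(); // created once, eagerly

    private int count = 0;

    private Counter() { }                 // private constructor: no outside `new Counter()`

    public static Counter getInstance() { // the only way to obtain the instance
        return INSTANCE;
    }

    public int increment() {
        return ++count;
    }
}
```

Because every caller receives the same object, state set through one reference is visible through all others.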
applet - Applet
i want a simple code of applet. give me a simple example.
Try the following code:
import java.applet.*;
import java.awt.event.*;
import java.awt.*;
public class CreateTextBox extends Applet implements ...
    ... in Java Applet.", 40, 20);
  }
}
2) Call this applet with html code...
Java Date Utility - Java Beginners
Java Date Utility: How to determine the maximum number of weeks in a year (52 or 53) as per the ISO Calendar? Is there any standard method in java...?
Regards, Bliss
Hi Friend,
Try the following code:
import ...
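The answer above is truncated, but modern Java (java.time, available since Java 8) can answer the ISO-week question directly: 28 December always falls in the last ISO week of its week-based year, so its week number equals the number of ISO weeks (52 or 53) in that year. A sketch, with an illustrative class and method name:

```java
import java.time.LocalDate;
import java.time.temporal.WeekFields;

public class IsoWeeks {
    // Number of ISO-8601 weeks (52 or 53) in the given calendar year.
    public static int weeksInYear(int year) {
        // 28 December is always inside the final ISO week of its week-based year.
        return LocalDate.of(year, 12, 28).get(WeekFields.ISO.weekOfWeekBasedYear());
    }
}
```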
java applet - Applet
I want to close applet window which is open by another button of applet program. plz tell me!
Hi Friend,
Try ...
project
Enter 10 integers, store them using an array, then display them from lowest to highest.
number to words conversion
Create a java program ... in words.
Example: enter an integer 123; equivalent in words: one hundred twenty ...
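Both exercises above can be sketched in plain Java; the class and method names below are illustrative rather than taken from any posted answer, and the word conversion only handles 0-999.

```java
import java.util.Arrays;

public class Exercises {
    // Exercise 1: return the input integers sorted lowest to highest.
    public static int[] sortedCopy(int[] values) {
        int[] out = values.clone();
        Arrays.sort(out);
        return out;
    }

    // Exercise 2: convert an integer in [0, 999] to English words.
    private static final String[] ONES = {
        "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
        "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
        "seventeen", "eighteen", "nineteen"
    };
    private static final String[] TENS = {
        "", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"
    };

    public static String toWords(int n) {
        if (n < 0 || n > 999) throw new IllegalArgumentException("expected 0..999");
        if (n < 20) return ONES[n];
        if (n < 100) return (TENS[n / 10] + (n % 10 != 0 ? " " + ONES[n % 10] : "")).trim();
        return ONES[n / 100] + " hundred" + (n % 100 != 0 ? " " + toWords(n % 100) : "");
    }
}
```

For the posted example, `toWords(123)` produces "one hundred twenty three".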
Applet In Jsp
We are using the applet class in the jsp by using the html tag <APPLET CODE = "... This is one of the ways by which we can use an applet in jsp. The code of the program is given...
java - Applet
What is the use of java.util?
Hi Friend,
The java util package provides many utility interfaces and classes for easy manipulation ...
Applet
Applet I have a java applet that has a button. On clicking the button it should disappear and another applet should appear. How to write this code???? Also in login applet after successful login it should display admin applet
Java Email
Java Email I am making one java email applications using jsp-servlets. can you tell me that how can i recieve and send email dynamically in my application in UI...
thanx.
Hi,
Please read at Email From JSP &
Sending an email in JSP
Sending an email in JSP
In this section, you will learn how to send an email in jsp.
Send Email From JSP & Servlet
J2EE Tutorial - Send Email From JSP &
Servlet... example seen above, the code was exposed to
the web-administarator. The better... webserver, using
JavaMail API, the following code shows how the required
I am using 1and1 server. Using this server i am sending a mail using java program .
But it is running some problem occur
" Cannot send email. javax.mail.SendFailedException: Invalid Addresses;
nested exception
Email validation is JSP using JavaScript... will show you how to validate email address in you JSP
program using JavaScript.... In your JSP program you can use JavaScript to validate the email address
Project in jsp
Project in jsp Hi,
I'm doing MCA n have to do a project 'Attendance Consolidation' in JSP.I know basic java, but new to jsp. Is there any JSP source code available for reference...? pls help me
Java Email
Java Email i am writting a program to send emails using gmail smtp server. I had the following error:
java.lang.ClassFormatError: Absent Code...)
Exception in thread "main" Java Result: 1
BUILD SUCCESSFUL (total time: 6 seconds
Java code - Applet
Java code I want java applet code for drawing an indian flag
how to run applet - Applet
Hope that it will be helpful for you. Even... in applet program. This is executed successfully with the appletviewer command.
Applet - Applet
in details to visit.......,
Applet
Applet is java program that can be embedded into HTML pages. Java applets
code for email - Spring
code for email i want a java code using springs after login process sending an email to the corresponding with a text message to them as successfully registered Hi Friend,
Please visit the following links:
http
Java Code - Applet
Java Code How to Draw various charts(Pie,Bar,and Line etc.)using Applet
how to send sms on mobile and email using java code
how to send sms on mobile and email using java code hi....
I am developing a project where I need to send a confirmation/updation msg on clients mobile and also an email on their particular email id....plz help me to find
applet servlet communication - Applet
project in eclipse and writing the applet code in ajava file which is present in src... to applet.
1)Here is the code of 'ServletExample.java'
import java.io....();
}
}
}
3)Call this applet with the html file.
Java Applet Demo
Am doing a project, in that i need to send email to multiple recipients at a same time using jsp so send me the code as soon as possible.
Regards,
Santhosh
core java - Applet
. (........)In Words.
Please help me.
this is my college project. email me at y_haarika@yahoo.co.in Hi Friend,
Use the following code
Applet
"Welcome in Passing
parameter in java applet example." message.
Here...;
Introduction
Applet is java program that can be embedded into HTML pages. Java applets...
Disadvantages of Java Applet:
Java plug-in is required to run applet
Java
javascript-email validation - Java Beginners
about email validation at: validation give the detail explanation for this code:
if (str.indexOf(at)==-1 || str.indexOf(at)==0 || str.indexOf(at)==lstr
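The fragment being asked about rejects a string whose '@' is missing, is the first character, or is the last character (the truncated `lstr` is presumably a length-based index). The same checks can be sketched in plain Java; the class and method names are illustrative, and the extra dot test on the domain is an assumption extending the truncated fragment:

```java
public class EmailCheck {
    // Mirrors the JavaScript indexOf test: '@' must exist, must not be the first
    // character, must not be the last, and a '.' should follow in the domain part.
    public static boolean looksLikeEmail(String str) {
        int at = str.indexOf('@');
        if (at <= 0 || at == str.length() - 1) return false; // missing, leading, or trailing '@'
        int dot = str.indexOf('.', at + 2);                  // look for a dot after the domain start
        return dot > at + 1 && dot < str.length() - 1;       // dot not adjacent to '@', not last
    }
}
```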
Applet - Passing Parameter in Java Applet
example."
Here is the code for the Java Program : ...;Welcome in Passing parameter in java applet
example.">
<... like Welcome in Passing parameter in java applet
example. Alternatively you
Regarding project - Applet
Regarding project hi friend ,
iam doing project in Visual cryptography in Java so i need the Help regarding how to make a share of a original imahe into shares
anu
project
project how to make blinking eyes using arc, applet in core java
Java Applet - Creating First Applet Example
Java Applet - Creating First Applet Example
... browser loads class file of applet
and run in its sandbox.
Here is the java code... the applet. An applet is a program written in java
programming language
java - Applet
java what is applet? Hi Friend,
Please visit the following link:
Thanks
Java - Applet Hello World
Java - Applet Hello World
This example introduces you with the Applet in Java...;
CODE tag is used to specify the name of Java applet class name. To test
scrollbar - applet - Applet
for more information.
Thanks... in complete reference book the same output
comes.
Can anyone change the code so... ScrollbarDemo extends Applet {
public void init() {
Scrollbar sb = new
JSP Project
JSP Project Register.html
<html>
<body >
<form... type="text" name="email" value="" size=25 maxlength=125>
<br><...;
Process.jsp
<%@ page language="java" %>
<%@ page import="java.util.*"%>
I use the archive tag to download my applet...? In the first step i only need code for the log in, after login the rest. Can I control the loading within the applet?
thanks
Java applet
Java applet How do I go from my applet to another JSP or HTML page
about project code - Java Beginners
about project code Respected Sir/Mam,
I need to develop an SMS... or programs.
User interface: Web based.
Technology: JSP ,servlets
User interface....
This can be developed using any kind of components using JAVA.
The following
Using Applet in JSP
://
JSP-Applet
To use applet in JSP page we can use... to
include Applet in JSP page
What is applet ?
Applet is java program that can....
....................
....................
<jsp:plugin type="applet" code="
project
project sir
i want a java major project in railway reservation
plz help me and give a project source code with entire validation
thank you
Project Guidance
a combination of one or more of the following technologies: JavaScript, Java, JSP, XHTML... a Java applet easily customizable HTML/CSS interface. To store applet data...Project Guidance Hello,
I have a project in SE at college and me
Applet - Applet
------------------------");
g.drawString("Demo of Java Applet Window Event Program");
g.drawString("Java...Applet Namaste, I want to create a Menu, the menu name is "Display... .
my code is :-
import java.awt.*;
import java.applet.*;
import
Problem in show card in applet.
the card demo with applet.... but Not show card in the applet, then I add a code in paintComponent method like... On Run as Java Applet then only show the Applet, not show any one card,hence any
jsp plugin implementation - Applet
jsp plugin implementation Hi,
I have implemented the code... folder under the WEB-INF folder and put my java code file and its corresponding classes inside the WEB-INF/classes folder.
When i execute my jsp page, APPLET
Applet in Eclipse - Running Applet In Eclipse
... from the menu bar to begin creating your Java applet project.
... 8: Create java applet code saving and
compiling the applet program... in
Eclipse 3.0. An applet is a little Java program that runs inside a Web
Email Validation code Can anybody tell me how to write email validation code using java Hi Friend,
Please visit the following link:
Thanks
project
project how to create core java code for trend analysis calculator
java applet
java applet I want code to implement
(1) user will enter how many nodes
(2)it should allow that no. of clicks and
circle should be displayed at that
position
email extractor how to extract only email address from a lines... at the below code and let me know if it helped you or if you have any further queries.
/**
* Program to scan email address from a file and write into an other
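The original file-scanning program is truncated above, but the extraction step it describes can be sketched with java.util.regex; the class name and the (deliberately naive) pattern are illustrative assumptions, not the original code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EmailExtractor {
    // Naive email pattern: word-like characters around a single '@', then a dot and a TLD.
    private static final Pattern EMAIL =
            Pattern.compile("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");

    // Return every email-like token found in the given text, in order of appearance.
    public static List<String> extract(String text) {
        List<String> found = new ArrayList<>();
        Matcher m = EMAIL.matcher(text);
        while (m.find()) {
            found.add(m.group());
        }
        return found;
    }
}
```

To process a file as the post describes, the file's contents would be read into a string first and passed to `extract`.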
unable to see the output of applet. - Applet
the following tutorial
but the problem....
u just copy that java source code and compile that using javac
then you
java applet problem - Applet
java applet problem i was doing a program using java applet. I want... and to exit from the applet respectively.Now i want to display a message when... Friend,
Try the following code:
import java.applet.*;
import java.awt.event.
jsp plugin implementation - Applet
jsp plugin implementation Hi,
I have implemented the jsp plugin code in my program as you given below.
When I execute the above code in my client PC,it worked fine.But when I opened the same page in another
project development
database and i want code for jsp or servlets through access database urgent sir.
Here is a code that creates a registration form and save the data...project development i have one html page called register.html page
project
should write Java code, so as to:
1. Compute Gross Monthly Salary for all employees... in the correct place
The given code calculates the employees gross monthly
java - Applet
java 1.An applet program to draw a line graph for y=2x+5.[for suitable values of x & y]
2. An applet program to draw following shapes
(1)cone, (2)cube, (3)square inside a circle Hi friend,
this is cubic code
Applet program for drawing picture and graph - Applet
the program(code) of drawing picture and graph in Applet. Hi Friend,
Please visit the following links:
Hope
online examination system project in jsp
online examination system project in jsp How many tables are required in SQL for online examination system project in jsp in java
my email id
Swing Applet Example in java
Java - Swing Applet Example in java
... swing in an applet. In this example,
you will see that how resources of swing... Applet Example. when the applet is loaded but again when you click
on the Add
java compilation error - Applet
,
Plz give full details with source code and visit to :
Thanks...java compilation error hi friends
the following is the awt
jsp code - Java Beginners
JSP code and Example JSP Code Example
Java Project - Java Beginners
Java Project Write a Java program to create simple Calculator for 4... the following link:... code for making this calculator operational. Students are expected to handle
Applet
Applet Draw the class hierarchy of an Applet class. Also explain how to set background and forground colors in java
Swings - Applet
to call that class in applet. is it possible. or otherwise is there any way to deploy java class in browser. Give an example...
Thanks in advance... Hi Friend,
Try the following code:
1)CountCharacters.java:
public class
integration of webcam - Applet
this project using aplets.could u please help me with the code. Hi friend...;
----------------------------------------------
Visit for more information:...*;
public class JavaCam extends Applet implements Runnable{
boolean boolean
java - Applet
java Hi,
I need very urgent code............
please help me.........
How to convert text to a wave (audio) file?
I need java code............
Regards,
Valarmathi
applet running but no display - Applet
strDefault = "Hello! Java Applet.";
public void paint(Graphics g) {
String...applet running but no display Hai,
Thanks for the post. I have... from a client, the page appears with a blank applet part (just whitescreen
applet
applet what is applet in java
An applet is a small program that can be sent along with a Web page to a user. Java applets can perform... the following link:
Applet Tutorials
Validate email id in jsp - JSP-Interview Questions
Validate email id in jsp Hi Please Suggest me how to validate email id in JSP(Java Server Faces
java - Applet
://
Thanks
Applet query - Applet
link:
Thanks...Applet query i want to knw d complete detail of why does applet... with their container.It runs under the control of a Java-capable browser.The java-capable browser
servlet code - Applet
from the servlet to applet.
Here is the code of 'ServletExample.java... with the html file.
Java Applet Demo
Thanks...servlet code how to communicate between applet and servlet
Could not able to run Java Applet in JSP using <jsp:plugin>
Could not able to run Java Applet in JSP using I could not able to run the above example applet in the JSP.
Getting class not found exception...://
Thanks
Java applet
Java applet I want a circle to be displayed wherever the user clicks, as many circles as there are clicks.
I tried this code but it erases the previous circles.
plz help.
public void mouseClicked(MouseEvent m)
{
x=m.getX();
y=m.getY
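The usual fix for the problem described in this question (each repaint erasing the previously drawn circles) is to remember every click point in a list and redraw them all from paint(). A minimal sketch of that idea, with the actual drawing call shown only as a comment so the logic runs headless (the class and method names here are mine, not from the question):

```java
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

// Keep every click location so paint() can redraw all circles,
// instead of only the most recent one.
public class CircleClicks {
    private final List<Point> clicks = new ArrayList<>();

    // In a real applet this would be the body of mouseClicked(MouseEvent m),
    // using m.getX() and m.getY(), followed by repaint().
    public void mouseClicked(int x, int y) {
        clicks.add(new Point(x, y));
    }

    // In a real applet, paint(Graphics g) would loop over the stored points:
    // for (Point p : clicks) g.drawOval(p.x - 10, p.y - 10, 20, 20);
    public int circleCount() {
        return clicks.size();
    }

    public static void main(String[] args) {
        CircleClicks c = new CircleClicks();
        c.mouseClicked(10, 20);
        c.mouseClicked(30, 40);
        System.out.println(c.circleCount());
    }
}
```

Because the points are stored rather than overwritten, every circle survives each repaint.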
Applet
Applet Write a Java applet that draws a line between 2 points. The co-ordinates of the 2 points should be passed as parameters from the html file. The color of the line should be red
Java 2D Graphics - Applet
://
Here you will get lot...Java 2D Graphics I am working on GIS project.I want represent a line... the standard forms like dashed, dotted line.My code
Java Applet
Java Applet Hi,
I have a query, on every mouse click an oval should be drawn. But in my program
I have used repaint() function, therefore... code snippet of my pgm:
public void mouseClicked(MouseEvent m)
{
x
Java Program - Applet
the following links: Program How to Draw various types of Charts Like pie,Line,Bar
java project
java project how many tables are required in backend for online examination system project in jsp
If backend is SQL
project query
project query I am doing project in java using eclipse..My project is a web related one.In this how to set sms alert using Jsp code. pls help me
jQuery UI Utility : Position
jQuery UI Utility : Position
jQuery UI Utility : Position
The position utility script is used for positioning any widget relative
java - Applet
java How can i develop an expert system using applets? can you help me with some sample source code
Applet code parameter - Applet
Applet code parameter Hi...
I've designed an applet where i placed... is in the folder named project
then the class file is in project/WEB-INF/classes folder
How can i get that class...
I used code="MyProgram.class" codebase="WEB-INF
java code to send email using gmail smtp server
java code to send email using gmail smtp server please send me the java code to send email using gmail smtp server.
and how to send verification code
Java Program - Applet
Java Program A Java program to move text in an applet from right to left. Hi Friend,
Please visit the following link:
Thanks
HTML email example
HTML email example Hi,
I am looking for an email to open email composer when user clicks on the email link. Give me code for html email example... for opening the email client when user clicks on the email link.
Here is the example
The Java Applet Viewer
The Java Applet Viewer
Applet viewer is a command line program to run
Java applets... be stored in a web page and run within a web browser. The applet's code
gets
Sending email without authentication
Sending email without authentication Hi sir, Am doing a project in JSP, in that i want to send mail without any authentication of password so send me code as soon as possible.
Please visit the following links
java project
java project i would like to get an idea of how to do a project by doing a project(hospital management) in java.so could you please provide me with tips,UML,source code,or links of already completed project with documentation
Java runtime example - JSP-Servlet
Java runtime example in eclipse after submiting the data throgh jsp...
com.microsoft.sqlserver.jdbc.SQLServerException: The value is not set for the parameter number 1.
the code is:
Bean... String ZipCode;
private String Phone;
private String Email;
boolean valid
How to write a simple java applet
How to write a simple java applet Hi, how can i write a simple java applet, displaying text in specific colors and font style.
For example... in green color
Help me pls :(
Hi Friend,
Try the following code
Source: http://www.roseindia.net/tutorialhelp/comment/84895
A tool for tracing use of the JNI in Android apps
Project description
jnitrace
A Frida based tool for tracing use of the JNI API in Android apps.
jnitrace requires a minimum of two parameters to run a trace:
-l libnative-lib.so - is used to specify the libraries to trace. This argument can be used multiple times, e.g. -l libnative-lib.so -l libanother-lib.so, or given as -l * to trace all libraries.
com.example.myapplication - is the Android package to trace. This package must already be installed on the device.
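Putting the two required parameters together, a minimal trace of the example package above might be launched as follows (a sketch that simply combines the arguments already described; it assumes jnitrace is installed and a device with the app is connected):

```shell
# trace all JNI calls made by libnative-lib.so in the example app
jnitrace -l libnative-lib.so com.example.myapplication
```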
Optional arguments are listed below:
-R <host>:<port> - is used to specify the network location of the remote Frida server. If a <host>:<port> is unspecified, localhost:27042 is used by default.
-m <spawn|attach> - is used to specify the Frida attach mechanism to use. It can either be spawn or attach. Spawn is the default and recommended option.
-b <fuzzy|accurate|none> - is used to control backtrace output. By default jnitrace will run the backtracer in accurate mode. This option can be changed to fuzzy mode, or the backtrace can be stopped entirely using the none option. See the Frida docs for an explanation of the differences.
-i <regex> - is used to specify the method names that should be traced. This can be helpful for reducing the noise in particularly large JNI apps. The option can be supplied multiple times. For example, -i Get -i RegisterNatives would include only JNI methods that contain Get or RegisterNatives in their name.
-e <regex> - is used to specify the method names that should be ignored in the trace. This can be helpful for reducing the noise in particularly large JNI apps. The option can be supplied multiple times. For example, -e ^Find -e GetEnv would exclude from the results all JNI method names that begin with Find or contain GetEnv.
-I <string> - is used to specify the exports from a library that should be traced. This is useful for libraries where you only want to trace a small number of methods. The functions jnitrace considers exported are any functions that are directly callable from the Java side; as such, that includes methods bound using RegisterNatives. The option can be supplied multiple times. For example, -I stringFromJNI -I nativeMethod([B)V could be used to include an export from the library called Java_com_nativetest_MainActivity_stringFromJNI and a method bound using RegisterNatives with the signature of nativeMethod([B)V.
-E <string> - is used to specify the exports from a library that should not be traced. This is useful for libraries where you have a group of busy native calls that you want to ignore. The functions jnitrace considers exported are any functions that are directly callable from the Java side; as such, that includes methods bound using RegisterNatives. The option can be supplied multiple times. For example, -E JNI_OnLoad -E nativeMethod would exclude from the trace the JNI_OnLoad function call and any methods with the name nativeMethod.
-o path/output.json - is used to specify an output path where jnitrace will store all traced data. The information is stored in JSON format to allow later post-processing of the trace data.
-p path/to/script.js - the path provided is used to load a Frida script into the target process before the jnitrace script has loaded. This can be used for defeating anti-frida or anti-debugging code before jnitrace starts.
-a path/to/script.js - the path provided is used to load a Frida script into the target process after jnitrace has been loaded.
--hide-data - used to reduce the quantity of output displayed in the console. This option will hide additional data that is displayed as hexdumps or as string de-references.
--ignore-env - using this option will hide all calls the app is making using the JNIEnv struct.
--ignore-vm - using this option will hide all calls the app is making using the JavaVM struct.
--aux <name=(string|bool|int)value> - used to pass custom parameters when spawning an application. For example, --aux='uid=(int)10' will spawn the application for user 10 instead of the default user 0.
Note
Remember frida-server must be running before running jnitrace. If the default instructions for installing frida have been followed, the following command will start the server ready for jnitrace:
adb shell /data/local/tmp/frida-server
API:
The engine that powers jnitrace is available as a separate project, jnitrace-engine. That project allows you to track individual JNI API calls, in a manner familiar from using the Frida Interceptor to attach to functions and addresses.
import { JNIInterceptor } from "jnitrace-engine";

JNIInterceptor.attach("FindClass", {
    onEnter(args) {
        console.log("FindClass method called");
        this.className = Memory.readCString(args[1]);
    },
    onLeave(retval) {
        console.log("\tLoading Class:", this.className);
        console.log("\tClass ID:", retval.get());
    }
});
More information:
Building:
Building
jnitrace from source requires that
node first be installed.
After installing
node, the following commands need to be run:
npm install
npm run watch
npm run watch will run frida-compile in the background, compiling the source to the output file, build/jnitrace.js.
... jnitrace having seen the original method, field, or class lookup. For any methods passing buffers, jnitrace will extract the buffers from the arguments and display them as a hexdump below the argument value. After the |- characters are the argument type followed by the argument value. For jmethods, jfields and jclasses the Java type will be displayed in curly braces.
Source: https://pypi.org/project/jnitrace/
A while back I wrote about sonnet primes, primes of the form ababcdcdefefgg where the letters a through g represent digits and a is not zero. The name comes from the rhyme scheme of an English (Shakespearean) sonnet.
In the original post I gave Mathematica code to find all sonnet primes. This post shows how to do it in Python.
from sympy.ntheory import isprime
from itertools import permutations

def number(t):
    # turn a tuple into a number
    return (10100000000000*t[0] + 1010000000000*t[1] + 1010000000*t[2]
            + 101000000*t[3] + 101000*t[4] + 10100*t[5] + 11*t[6])

sonnet_numbers = (number(t) for t in permutations(range(10), 7) if t[0] != 0)

sonnet_primes = filter(isprime, sonnet_numbers)
3 thoughts on “Sonnet primes in Python”
Rather than filter, I would have used
sonnet_primes = (number for number in sonnet_numbers if isprime(number))
It is a bit more verbose, but it looks more obvious to me, especially in the context of mathematics
Love the alignment of your code! Makes everything look beautiful, even poetic.
Source: http://www.johndcook.com/blog/2013/01/08/sonnet-primes-in-python/
|
LiteratePrograms:Public forum
From LiteratePrograms
This is the public forum, a place for open discussion among all members. Feel free to discuss anything. Please don't remove this notice.
New Template?
If we take a look at Selection sort (Haskell) we can see {{codedump}} in use. There is, however, a bigger problem on that page. The code that's there cannot possibly compile. It makes use of a symbol "usun" which is not defined anywhere. Neither GHC's online docs nor the Hoogle search engine knows of it, so it looks to me as if someone cut-and-pasted a block of code that referenced something that didn't make it. Perhaps we could also have a {{incomplete}} or {{broken}} template to mark pages like this? MTR/严加华 23:13, 15 June 2007 (PDT)
- Done. See {{incomplete}}. --Allan McInnes (talk) 14:53, 16 June 2007 (PDT)
Latex2mediawiki and noweb/funnelweb compatibility
If I use noweb to generate a latex document, is there a way to convert the latex document into the equivalent mediawiki syntax? May be to redefine the latex macro, to expand to its corresponding mediawiki syntax. (Assuming that the macro used in that latex document contains only a subset of macro that can be mapped to mediawiki syntax.) Anyhow, can the document written in mediawiki syntax be exported to pdf/html/etc., by the command line in batch mode? --Ans 02:21, 4 January 2008 (PST)
One more question. Does noweb and funnelweb have compatible syntax? --Ans 02:21, 4 January 2008 (PST)
- Hmm, well, LaTeX is in general much more expressive than Mediawiki wiki syntax, and I'm not aware of any translation tools between the two (I don't believe it would be straightforward; some parts would probably have to get embedded in math tags). I haven't used funnelweb so I can't speak to compatibility there. Deco 06:44, 8 March 2009 (MDT)
Page load problem
When trying to load Look and say sequence (sed), this results in
- Catchable fatal error: Object of class Title could not be converted to string in /nr/web/literateprograms/includes/Database.php on line 1182
This problem seems to have appeared recently (the page has been changed two days ago, so at least then it probably loaded OK), so this probably is caused or triggered by a recent change. --Ce 03:57, 13 April 2008 (EDT)
- As I just noted, this problem seems to be more general; basically all pages containing code seem to be affected. It's obviously not the brackets in the title because the (wrongly titled) page Ada Hello World doesn't load either. --Ce 04:05, 13 April 2008 (EDT)
- Actually, pages without code also seems to be affected. Maybe all pages in main namespace are affected? This might be because of a software update somewhere in the system (f.ex. new PHP version). Anyway, there isn't much we can do as long as we don't have access to the server. Ahy1 10:54, 13 April 2008 (EDT)
- Guess we'll just have to be patient. I'm sure Deco will take a look at it soon. -- Derek Ross | Talk 14:28, 13 April 2008 (EDT)
- This problem is no longer reproducing and I'm not sure what was up. Please let me know if you encounter it again. Deco 06:42, 8 March 2009 (MDT)
Sandbox
Consider moving the sandbox to another namespace. LiteratePrograms:Sandbox, for instance. Codeholic 12:59, 23 April 2008 (EDT)
- That wouldn't work currently, because the literate programming functionality is only turned on for pages in the main article space. Deco 06:41, 8 March 2009 (MDT)
Spam attack
Currently the vast majority of all edits seem to be insertion/removal of spam. The pattern of the spam is easy to recognize: an IP inserts lots of external links. Therefore I think automatic link creation should be prevented. The least intrusive (but probably the most complex to implement) would be to have captchas for every anonymous edit creating an external link (one could further make a whitelist of sites where a captcha is not needed, to further reduce the impact on anonymous users). In addition, account creation would have to be protected with captchas.
If implementing captchas is too complex, maybe another solution would be to automatically reject edits which insert more than five external links at once. Given that legitimate edits usually don't need many external links (most don't even need any external link at all), this shouldn't be a too serious restriction. However there would be the danger of the spam bots adapting by making lots of edits to the same page, inserting a few links each time, which would be even worse than the current situation.
Disallowing link creation for anonymous users would be another solution. After all, an anonymous user can insert the URL as text and then ask a non-anonymous user to convert it into a link (or, alternatively, he can just register and add the link himself as registered user). However, in that case account creation would also have to be protected somehow (captcha, email confirmation, whatever), to prevent spam bots from creating random user accounts.
Besides that, I think it would be a good idea to remove pairs of revisions which consist only of spam insertion and subsequent removal, with no other change being made, from the history. They only clutter the history and don't provide any value. This could probably be semi-automated (i.e. a bot searching for obvious spam insertion and checking that the following revision is an exact rollback). Completely automating it might be too dangerous (but then, if the criterion for identifying spam is very strict, even that may be possible). --Ce 03:36, 2 September 2008 (EDT)
- I think the ConfirmEdit extension would be helpful. It can be configured to require captcha only on account creation and link insertion, and it can also have a whitelist of urls for which captchas are not required.
- I am not sure, however, if this site will survive unless more than one person has access to do these things. My impression is that the site owner has lost interest in the wiki, and will not even answer questions for which he is the only person who can answer. This is understandable. We should not expect one person to go on forever doing lots of work for free, maintaining a site like this. If it is possible, we should try to distribute the responsibility of maintaining/running this site among more persons. I would be willing to participate in this, but there should also be more volunteers. This, of course, depends on Deco's willingness to share this responsibility. Ahy1 06:07, 20 September 2008 (EDT)
- Ce and Ahy1 both make very sensible suggestions here. We'll continue to remove the spam come what may, but prevention would be a whole lot better than cure. -- Derek Ross | Talk 11:40, 20 September 2008 (EDT)
- Hey all, apologies for my very long delayed response. I am presently disabling anonymous editing and enabling CAPTCHAs on all edits to control the vandalism/spam problem. Deco 22:59, 7 March 2009 (MST)
- Testing CAPTCHA again. DecoMortal 23:22, 7 March 2009 (MST)
Dump database
Does someone have a dump of the lpwiki? I put a comment on Deco's page; if you read it, maybe some of you could answer.
Lehalle 01:41, 28 November 2008 (EST)
- I'll look at producing a proper dump right away. Deco 23:26, 7 March 2009 (MST)
- Dumps are now working and you can view a valid one here. Deco 04:00, 8 March 2009 (MDT)
Apology
After seeing how hard you all worked to protect the wiki in my absence I feel really terrible for letting you down. I felt overwhelmed by other responsibilities, and felt like I had screwed up this project and was afraid to deal with it, but after dropping the ball on both database dumps and vandalism protection I didn't leave others a way forward. I promise you all I will remain responsive and helpful - I've added some additional contact info to my user page in case I don't check my talk page here. I'm also prepared to give shell access to any of you who wants to be an active developer and help deal with issues like this in the future. I'm very thankful to you all and I hope I can make up for my mismanagement now. Deco 04:35, 8 March 2009 (MDT)
- Don't feel bad. This is just a hobby for all of us. And if more important issues arise (for any of us), they have to be dealt with first. -- Derek Ross | Talk 12:39, 8 March 2009 (MDT)
- I have to agree with Derek here: While I'm indeed very glad that this site is online again, it's nothing our lives depend on. If it were, I would have contacted you much earlier about it :-) Basically this site is a gift you give to us, and while we enjoy this gift and would have missed it if it had been gone, there's clearly no obligation for you to continue giving it. The more I'm thankful that we got it back. --Ce 12:17, 9 March 2009 (MDT)
Math images broken
The images generated by the <math>...</math> tags seem to be Error 403 Forbidden. --Spoon! 03:04, 11 March 2009 (MDT)
- Woops - fixed, thanks! It also appears I'm missing latex and dvips, so new math tags won't work. I'll have to get them installed. Deco 13:55, 12 March 2009 (MDT)
- It was a bit tricky, but new math notation will now work - I had to do a local install of LaTeX and convince Mediawiki to use it when it's not in the path. :-) Deco 21:42, 12 March 2009 (MDT)
Captcha kills summary
I've noticed that when editing a single section and adding something to the summary, then when the page with the captcha request appears, the summary is reset to the original value. If there's an easy fix for that, it would be nice if it could be fixed (it's not important, though, because after all, you can just enter the summary after the captcha) --Ce 13:23, 11 March 2009 (MDT)
- Ah, I see what you mean. I've reproduced this now. I think I'm just going to worry about upgrading Mediawiki and the ConfirmEdit extension along with it, and I think it will probably go away. Deco 21:56, 12 March 2009 (MDT)
New policy
Because I think LiteratePrograms needs something to shape its direction and scope, I have created a new policy document called LiteratePrograms:Purpose and scope. I think it's very much in line with what we've been doing so far. I invite your comments on it. Deco 21:28, 30 March 2009 (MDT)
- Seems pretty reasonable to me. -- Derek Ross | Talk 12:56, 31 March 2009 (MDT)
- I agree with Derek. It nicely articulates what I've always thought this wiki was about. --Allan McInnes (talk) 00:44, 3 April 2009 (MDT)
- Apart from the following minor problem, I also agree:
- The last bullet says:
- "concerns such as efficiency and completeness should not take precedence over clarity of presentation."
- What exactly is meant with "completeness" in this context? I'd intuitively expect it to mean that the code is compilable as is (i.e. you don't need to add anything or fill out gaps in order to compile it), but in that case, I'd think the examples should be complete. Completeness in this sense doesn't remove anything from the clarity; after all, the literate programming technique makes sure that you can move things like boilerplate into separate sections, thus not interfering with the description of the interesting parts of the code.
- If something else is meant with "completeness", maybe it would be a good idea to clarify. --Ce 07:16, 4 April 2009 (MDT)
- My interpretation, based on the beginning of that bullet stating that LP is "not a database of raw code snippets", is that efficiency and completeness refer to code that would be both quick enough and robust enough to put into production, while LP is better used to clearly present code sketches. Certainly the code here should be compilable — and runnable — as is, but as you can tell from articles such as Dijkstra's algorithm (Inform 7), Hello World (IBM PC bootstrap), or Quicksort (Sed), my bias has been towards trying to present one or maybe two interesting points per article rather than in collecting quotidian code. Dave 15:40, 4 April 2009 (MDT)
Latest spam
I stupidly forgot to deny move permissions to anons (which also, it turns out, the CAPTCHAs don't protect). Fixed that, shouldn't happen again. Deco 15:43, 14 April 2009 (MDT)
Source: http://en.literateprograms.org/LiteratePrograms:Public_forum
|
Hi all,

Due to some changes in iOS 4 (aka iPhone OS 4), I've had to change how PocketSword uses ICU. I can go into the details off-list if required, but basically I can only use a cut-down version of ICU unless I want to (?) bundle a full copy of ICU into PocketSword myself. I've chosen to use Apple's included version, which is easier for the time being (especially given that I've had v1.3.0 waiting to be released for a couple of weeks now!). But that means I've had to introduce a #ifdef for the iPhone, which I've called _APPLE_IOS_ (Apple's iOS).

So to compile SWORD for use on the iPhone using the built-in version of ICU, you need to compile with -D_APPLE_IOS_ and it'll work. Alternatively, you could probably (?) bundle up your own ICU and compile with the usual -D_ICU_ -- but you cannot use _ICU_ if you wish to use the built-in Apple-supplied ICU (which I was recently made aware of, and looking back at how I got ICU stuff to work in the first place on the iPhone, alarm bells should have been ringing in my head!!!)... :)

Thanks heaps,
ybic nic... :)

ps: Basically, by using _APPLE_IOS_ you aren't using the transliteration features of ICU that you would get if you were using _ICU_. Perhaps we could better name the #defines, but I think this works fairly well atm, and I may find other instances where I want to use a #ifdef _APPLE_IOS_ so it's probably nice having this available.
:)
----
Nic Carter
PocketSword Developer - an iPhone Bible Study app
www:
iTunes:
Twitter:

diff -r 32448dd619a3 externals/sword/src/mgr/stringmgr.cpp
--- a/externals/sword/src/mgr/stringmgr.cpp	Thu Jun 24 14:26:28 2010 +1000
+++ b/externals/sword/src/mgr/stringmgr.cpp	Thu Jul 01 11:59:56 2010 +1000
@@ -36,6 +36,10 @@
 #include <unicode/locid.h>
+#elif defined (_APPLE_IOS_)
+
+#include <unicode/ustring.h>
+
 #endif
 SWORD_NAMESPACE_START
@@ -115,7 +119,7 @@
 }
-#ifdef _ICU_
+#if defined (_ICU_) || defined (_APPLE_IOS_)
 //here comes our ICUStringMgr reimplementation
 class ICUStringMgr : public StringMgr {
@@ -164,7 +168,7 @@
 */
 StringMgr* StringMgr::getSystemStringMgr() {
 	if (!systemStringMgr) {
-#ifdef _ICU_
+#if defined (_ICU_) || defined (_APPLE_IOS_)
 		systemStringMgr = new ICUStringMgr();
 		// SWLog::getSystemLog()->logInformation("created default ICUStringMgr");
 #else
@@ -237,7 +241,7 @@
 }
-#ifdef _ICU_
+#if defined (_ICU_) || (_APPLE_IOS_)
 char *ICUStringMgr::upperUTF8(char *buf, unsigned int maxlen) const {
 	char *ret = buf;
Source: http://www.crosswire.org/pipermail/sword-devel/2010-June/034448.html
Chapter 1
Introduction
C# is a language built specifically to program the Microsoft .NET Framework. The .NET Framework consists of a runtime environment called the Common Language Runtime (CLR), and a set of class libraries.
- Type safety, cross-language integration, and automatic memory management provide an excellent foundation for a rich set of class libraries.
Absolutely key to these benefits is the way .NET programs are compiled. Each language targeting .NET compiles source code into intermediate language (IL) rather than directly into native code. Languages that target .NET include Visual C++ .NET, and such third-party languages as COBOL, Eiffel, Haskell, Mercury, ML, and Oberon.
Framework Class Library
The .NET Framework provides the .NET Framework Class Library (FCL), which can be used by all languages. The FCL offers features ranging from core functionality of the runtime, such as threading and runtime manipulation of types (reflection), to types that provide high-level functionality, such as data access, rich client support, and web services (whereby code can even be embedded in a web page). C# has almost no built-in libraries; it uses the FCL instead.
A First C# Program
This simple program defines a class named Test that contains a method, named Main, that writes a welcome message to the Console window. The Console class encapsulates standard input/output functionality, providing methods such as WriteLine. To use types from another namespace, we use the using directive. Since the Console class resides in the System namespace, we write using System;. Similarly, types from other namespaces could use our Test class by including the following statement: using FirstProgram;.
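The listing itself does not survive in this excerpt; a minimal program consistent with the description (a class Test with a static Main in namespace FirstProgram; the exact greeting text is a guess) would look like:

```csharp
using System;

namespace FirstProgram
{
    class Test
    {
        // Main must be static; the CLR uses it as the entry point.
        static void Main()
        {
            Console.WriteLine("Welcome to C#!");
        }
    }
}
```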
To compile this program into an executable, paste it into a text file, save it as Test.cs, then type csc Test.cs at the command prompt. This compiles the program into an executable called Test.exe. Add the /debug option to the csc command line to include debugging symbols in the output. This will let you run your program under a debugger and get meaningful stack traces that include line numbers.
TIP: .NET executables contain a small CLR host created by the C# compiler. The host starts the CLR and loads your application, starting at the Main entry point. Note that Main must be specified as static.
In C#, there are no standalone functions; functions must be declared as methods of a class or struct. Of final note is that C# recognizes a method named Main as the default entry point of execution.
Back to: C# Essentials, 2nd Edition
© 2001, O'Reilly & Associates, Inc.
webmaster@oreilly.com
Source: http://oreilly.com/catalog/csharpess2/chapter/ch01.html
Is there a keyword in Matlab that is roughly equivalent to
None in python?
I am trying to use it to mark an optional argument to a function. I am translating the following Python code
def f(x, y=None):
    if y == None:
        return g(x)
    else:
        return h(x, y)
into Matlab
function rtrn = f(x, y)
    if isempty(y)
        rtrn = g(x);
    else
        rtrn = h(x, y);
    end
end
As you can see, currently I am using [] as None. Is there a better way to do this?
In your specific case, you may use nargin to determine how many input arguments were provided when calling the function.
from the MATLAB documentation:
The nargin and nargout functions enable you to determine how many input and output arguments a function is called with. You can then use conditional statements to perform different tasks depending on the number of arguments. For example,
function c = testarg1(a, b)
    if (nargin == 1)
        c = a .^ 2;
    elseif (nargin == 2)
        c = a + b;
    end
Given a single input argument, this function squares the input value. Given two inputs, it adds them together.
NaN, while not equivalent, often serves a similar purpose.
Source: https://pythonpedia.com/en/knowledge-base/1737523/the-matlab-equivalent-of-python-s--none-
The Firebase Blog

What's new with FCM? Customizing messages across platforms!
Arthur Thompson, Developer Programs Engineer

What is FCM?

Firebase Cloud Messaging (FCM) is a cross-platform messaging solution that reliably delivers messages at no cost. FCM sends over 400 billion messages per day. Today we are excited to announce the launch of a new RESTful API, the FCM HTTP v1 API, that makes it safer and easier to send messages to your cross-platform applications. All existing FCM clients can receive messages sent via the new FCM API -- it does not require any changes on the client side.

Why a new FCM API?

Security

The new FCM API uses the OAuth2 security model. In the event that an access token becomes public, it can only be used for an hour or so before it expires. Refresh tokens are not transmitted as often and are thus much less likely to be captured.

Cross Platform Support

Sending messages to multiple platforms is possible with the legacy API. However, as you add functionality to messages, sending to multiple platforms becomes difficult. With the new FCM API, sending messages to multiple platforms is very easy.

You can still send simple messages to multiple platforms using the common top level fields. For example, you can send this message informing users about a sale:

{
  "message": {
    "topic": "sale-watchers",
    "notification": {
      "title": "Check out this sale!",
      "body": "All items half off through Friday"
    }
  }
}

When you send a notification like this one to devices subscribed to a topic, you probably want them to be taken to the description of the item.
On Android you would compose a message including a <code>"click_action"</code> field indicating the activity to open. On iOS, APNs relies on a <code>"category"</code> indicating the action to take upon clicking, including which view to show. </p><p>Before, since these keys were unique to their respective platforms, developers would have to create two separate messages. Now, we can use platform-specific fields together with common ones in a single message: </p><pre class="prettyprint">{
  "message": {
    "topic": "sale-watchers",
    "notification": {
      "title": "Check out this sale!",
      "body": "All items half off through Friday"
    },
    "android": {
      "notification": {
        "click_action": "OPEN_SALE_ACTIVITY"
      }
    },
    "apns": {
      "payload": {
        "aps": {
          "category": "SALE_CATEGORY"
        }
      }
    }
  }
}
</pre><p><em>Note: In this case web apps subscribed to the 'sale-watchers' topic will receive a notification message with the defined title and body.</em></p><p><strong>Extendable</strong></p><p>The new FCM API fully supports messaging options available on iOS, Android and Web. Since each platform has its own defined block in the JSON payload, we can easily extend to other platforms as needed. If a future IoT messaging protocol required a <code>security_key</code> field, we could easily support an <code>iot</code> block within the FCM payload: </p><pre class="prettyprint">{
  "message": {
    "topic": "sale-watchers",
    "notification": {
      "title": "Check out this sale!",
      "body": "All items half off through Friday"
    },
    "iot": {
      "security_key": "SECURITY_KEY"
    }
  }
}
</pre><p>The new FCM API is the <strong>more secure</strong>, <strong>cross-platform</strong>, <strong>future-proof</strong> way of sending messages to FCM clients. If you are currently using the FCM legacy API, or if you are interested in using FCM to send messages to your apps, give the new FCM API a try. See the FCM guides and reference docs for more. 
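To make the cross-platform payloads described above concrete, here is a minimal sketch in plain Python (not an official SDK; the function name and the `OPEN_SALE_ACTIVITY`/`SALE_CATEGORY` values are illustrative placeholders) that assembles one v1 request body serving Android, iOS, and web at once:

```python
import json

# Sketch of assembling a single FCM HTTP v1 request body.
# Common fields live in "notification"; per-platform overrides are
# layered on via the "android" and "apns" blocks.
def build_message(topic, title, body, click_action=None, apns_category=None):
    message = {
        "topic": topic,
        "notification": {"title": title, "body": body},
    }
    if click_action is not None:
        # Android-only: which activity to open on tap.
        message["android"] = {"notification": {"click_action": click_action}}
    if apns_category is not None:
        # iOS-only: APNs category controlling the tap action.
        message["apns"] = {"payload": {"aps": {"category": apns_category}}}
    return {"message": message}

payload = build_message(
    "sale-watchers",
    "Check out this sale!",
    "All items half off through Friday",
    click_action="OPEN_SALE_ACTIVITY",
    apns_category="SALE_CATEGORY",
)
print(json.dumps(payload, indent=2))
```

Per the OAuth2 model described above, the assembled payload would then be POSTed to the v1 `messages:send` endpoint with a short-lived access token in the Authorization header.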
</p><p><a href="">About FCM</a></p><p><a href="">Authorize requests</a></p><p><a href="">Build message requests</a></p><p><a href="">Migrate from GCM to FCM on Android</a></p><p><a href="">Migrate from GCM to FCM on iOS</a></p><img src="" height="1" width="1" alt=""/>Firebase Updates to the Firebase console<figure class="profile"> <div class="profile-picture"> <img src="" data- </div> <figcaption>Posted by John Shriver-Blake, Product Manager</figcaption></figure><p>When the folks at Fabric joined Firebase earlier this year, we aligned around a common mission: provide developers like you with a platform that solves common problems across the app development lifecycle, so you can focus on building an awesome user experience. </p><p. </p><h2>Redesigned navigation</h2><p. </p><h2>New Project Home</h2><p. </p><div class="blogimg1"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><h2>Latest Release</h2><p. </p><div class="blogimg3"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><h2>Updated Analytics dashboard</h2><p! </p><div class="blogimg2"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><h2>Realtime information</h2><p. </p><p. </p><p>As always, you can contact us at <a href=""></a>with feedback and questions. Happy building! </p><img src="" height="1" width="1" alt=""/>Firebase Announcing Firebase Crashlytics Beta<figure class="profile"> <div class="profile-picture"> <img src="" data- </div> <figcaption>Posted by Jason St. Pierre, Product Manager</figcaption></figure><p>Since Fabric joined forces with the Firebase team, our collective mission has been to bring the best of our amazing platforms together. Today, we are pleased to announce a major milestone in that mission with the beta launch of Firebase Crashlytics. </p><p>Firebase Crashlytics is a powerful, realtime crash reporting tool that will help you track, prioritize, and fix stability issues that erode your app quality. 
With this launch, Firebase developers can now access the <a href="">best-in-class</a> crash reporter by visiting <a href="">g.co/firebase/optin</a>. </p><p>Here's what you'll get when you upgrade. These benefits and features make Crashlytics a must-have tool for all mobile app developers. </p><p><strong>Faster troubleshooting</strong></p><div class="blogimg1"><a href="" imageanchor="1"></a></div><p>On the overview page, your crash-free user rate is prominently displayed so you can gauge which builds are improving in stability. </p><div class="blogimg2"><a href="" imageanchor="1"></a></div><p>You'll also notice <strong>significance badges</strong>. When highlighted, these badges indicate that important information is available for that issue that makes it unusual or stand out from the rest. For example, significance badges will tell you if certain issues only occur on a specific device, OS, or only on jailbroken phones. </p><div class="blogimg3"><a href="" imageanchor="1"></a></div><p><strong>Custom Keys and Logs</strong></p><p>Crashlytics lets you instrument logs and keys, which provide additional information and context on why a crash occurred and what happened leading up to it. </p><p>Logs and keys are great ways to find clues in the session metadata and retrace your users' steps to reproduce the bug. </p><div class="blogimg4"><a href="" imageanchor="1"></a></div><div class="blogimg5"><a href="" imageanchor="1"></a></div><p><strong>Realtime alerting</strong></p><div class="blogimg6"><a href="" imageanchor="1"></a> <a href="" imageanchor="1"></a> <a href="" imageanchor="1"></a></div><p><strong>Firebase Crashlytics + Cloud Functions for Firebase</strong></p><p>You can now use Crashlytics events to trigger Cloud Functions and power custom workflow integrations, such as routing critical issues to the right developer or Slack room. 
</p><div class="blogimg9"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><h3><strong>More exciting updates ahead</strong></h3><p. </p><p><center><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></center></p><p>From now on, new and existing Firebase customers should use Crashlytics, since it is the primary crash reporter for Firebase. <em>New</em> Firebase customers can get started with Crashlytics by visiting <a href="">g.co/firebase/optin</a>, while <em>existing</em> Crash Reporting customers can click on the banner in their dashboard. We'll continue to build out Crashlytics and can't wait to hear your feedback! </p><p>If you're already using Crashlytics on Fabric, you're all set for now - no need to migrate yet. We'll share exciting news soon about how your Fabric apps can take advantage of Firebase Crashlytics. <> .blogimg5 img { max-width: 100%; display: block; margin: auto; padding: 0px 10px 0px 10px; border: 0; } .blogimg6 img { width: 30%; display: inline; margin: auto; padding: 0px 10px 0px 10px; border: 0; } .blogimg7 img { max-width: 30%; display: inline; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg8 img { max-width: 30%; display: inline; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg9 img { max-width: 100%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } </style><img src="" height="1" width="1" alt=""/>Firebase Better A/B Testing with Firebase<figure class="profile"> <div class="profile-picture"> <img alt="Todd Kerpleman" src="" style="margin: 0px 0px 10px 0%;" /> </div> <figcaption> <strong><div><a href="">Todd Kerpelman</a></div></strong> <em>Developer Advocate</em> </figcaption></figure><h3>Announcing Better A/B Testing with Firebase</h3><p. </p><p. </p><p! 
</p><h3>Getting to Know the New A/B Testing Feature </h3><p>With the new A/B testing feature, you can create an A/B test that will allow you to play with any combination of values that you can control through <a href="">Remote Config</a>. Setting up an A/B test allows you to define how the experiment will behave in a number of different ways, including determining how many of your users are involved with the experiment at first… </p><div class="blogimg1"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p>…how many variants you want to run, and how your app might behave differently for each variant… </p><div class="blogimg2"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p>...and what the goal of the experiment is. </p><div class="blogimg3"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div>. </p> <a href="">Google Optimize</a>, Google's free testing and personalization product for websites. </p><h3>Using A/B Tests for Better Onboarding: A Case Study</h3><p><a href="">Fabulous</a>, a motivational app for building better habits, recently made improvements to their app's onboarding flow. </p><div class="blogimg4"><a href="" imageanchor="1"><img border="0" src="" data-</a></div><p><em>Some of the screens a typical user encounters when first using Fabulous</em></p><p% improvement in the rate of users completing the onboarding flow. More importantly, they confirmed that this shorter onboarding experience didn't have any negative impact on their app's retention. </p><h3>Test Your Notifications, Too!</h3><p. </p><h3>Getting Started</h3><p>A <a href="">documentation</a>, or check out the <a href="">A/B Test Like a Pro</a> video series we've been building. </p><p>Then, head on over to the Firebase Console and start making your app better — one experiment at a time! 
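Under the hood, an experiment like the ones described above needs a stable way to split users into variants and to cap how much of your population is exposed. As an illustrative sketch only (this is not Firebase's actual algorithm; the experiment and user IDs are made up), deterministic hash-based bucketing looks like this:

```python
import hashlib

# Illustrative sketch of deterministic experiment bucketing -- not
# Firebase A/B testing's real implementation. A given user always
# lands in the same variant, and `exposure` controls what fraction
# of users enter the experiment at all.
def assign_variant(user_id, experiment_id, variants, exposure=0.1):
    """Return a variant name for this user, or None if the user is
    outside the experiment's exposed population."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    if bucket >= exposure:
        return None  # not part of the experiment
    # Scale the bucket into the exposed slice and pick a variant.
    slot = int(bucket / exposure * len(variants))
    return variants[min(slot, len(variants) - 1)]

v = assign_variant("user-42", "onboarding-test", ["control", "short_flow"])
```

Because the assignment is a pure function of the user and experiment IDs, a returning user sees the same variant on every launch, which is what makes the goal metrics comparable across groups.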
Firebase Predictions Beta<figure class="profile"><div class="profile-picture"><img alt="Jumana Al Hashal" src="" style="margin: 0 0 0 0%;" /></div><figcaption><strong><div>Jumana Al Hashal</div></strong> <em>Product Manager</em></figcaption></figure><p>Out of the box, there are four predicted groups: </p><ul><li><strong>Churn</strong> is a group of users who are predicted to soft churn -- i.e. stop using your app -- in the next 7 days.</li><li><strong>Not_Churn</strong> is a group of users who are predicted to stay engaged with your app in the next 7 days.</li><li><strong>Spend</strong> is a group of users who are predicted to make an in-app purchase in the next 7 days.</li><li><strong>Not_Spend</strong> is a group of users who are <em>not</em> predicted to make an in-app purchase in the next 7 days.</li></ul><p>You'll see these four groups right away when you select <strong>Predictions</strong> in the left nav bar of the Firebase Console. </p><div class="blogimg1"><a href="" imageanchor="1"></a></div><p>You'll notice that each of these cards has actions that you can take upon them. </p><p><strong>Tolerance</strong>: This lets you choose how aggressive the prediction should be, trading off between reaching more users and being more certain about each one. </p><p><strong>Target Users</strong>: This gives you a drop-down on which you can select <em>Remote Config</em> or <em>Notifications</em> for that user group. It also links to some handy guidance for offering in-app incentives. </p><div class="blogimg2"><a href="" imageanchor="1"></a></div><p>Selecting <strong>Remote Config</strong> will take you to Remote Config, where the predicted user group is available as a targeting condition. </p><div class="blogimg3"><a href="" imageanchor="1"></a></div><p>Selecting <strong>Notifications</strong> will take you to the familiar composer for messages to be sent using Firebase Cloud Messaging, but in addition to the usual options for picking target audience, you'll also get the predicted user group pre-populated as a user segment. 
</p><p></p><div class="blogimg4"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p>This allows you to target notifications at that user group. So, for example, for users at a risk of churning, you could send a notification with an enticement to continue using the app. </p><p><strong>Creating your own predictions. </strong>You aren't limited to the built-in predictions cards, of course, and can create your own based on <a href="">custom events</a> that you set up in your app. In this case, you'll see a card that allows you to create a prediction. </p><div class="blogimg5"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p>And when you select it, you can then create a prediction for when your event will, or will not happen. This helps you identify users who are likely to engage in that conversion event: </p><div class="blogimg6"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p>So, for example, in the above case, whenever a user levels up in the game, the level_up conversion event is logged. Thus, you could create a prediction for players who may level up, and incentivize them to continue playing. </p><p>Then, once you've saved your prediction, over time a card will populate on the Firebase Console in the same way as the built-in ones. </p><p>And this card can be used in the same way as the others -- including targeting users with notifications and Remote Config. </p><p>Firebase Predictions is a Beta product, and we're continuing to work on it and improve it. If you have any questions or feedback, please reach out -- and for bugs and product suggestions, you can reach us at <a href="firebase.google.com/support">firebase.google.com/support</a>. </p><p>Learn more about Firebase Predictions at firebase.google.com/products/predictions/ or dive straight into our docs right <a href="">here</a>. 
</p><style> .blogimg1 img { max-width: 100%; display: block; margin: auto; padding: 10px 5px 10px 5px; border: 0; } .blogimg2 img { max-width: 100%; display: block; margin: auto; padding: 10px 5px 10px 5px; border: 0; } .blogimg3 img { max-width: 100%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg4 img { max-width: 100%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg5 img { max-width: 70%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg6 img { max-width: 70%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } </style><img src="" height="1" width="1" alt=""/>Firebase’s new at Firebase Dev Summit 2017<figure class="profile"> <div class="profile-picture"><img alt="Francis Ma" src="" style="margin: 0px 0px 10px 0%;" /></div><figcaption> <strong><div>Francis Ma</div></strong><em>Group Product Manager</em></figcaption></figure> <p>Our mission for Firebase is to help you build better apps and grow your business, by providing tools that solve common problems throughout your app development lifecycle. We manage your backend infrastructure, provide you with the tools to improve the quality and stability of your app, and help you acquire and engage users, so you can focus on building a fantastic user experience. </p><p>To date, over <em>one million developers</em> have used Firebase to build their apps across iOS, Android and the web. It's both inspiring and humbling to hear the many stories that all of you share with us. Take Doodle, for instance, a company that helps you find the best date and time to meet with people. Doodle recently used Firebase to redesign their app and increase retention and engagement. </p> <center><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></center><p> We're excited to be hosting the second annual Firebase Dev Summit here in Amsterdam, where we get to meet many members of our developer community! 
We've been working hard to improve Firebase, so that our products work seamlessly together, and we have several exciting new updates to share today. We've integrated Crashlytics into Firebase, enabled first-class A/B support and taken our first step in bringing the power of Google's machine learning into Firebase with a new product called Predictions. We've also made a few other improvements, so let's dive in! </p><h2>Bringing Crashlytics into Firebase</h2><p>Since <a href="">Fabric joined Google</a>, we've been working to bring the best of our platforms together. Today we're announcing a big step in that journey: we're adding Crashlytics to the Firebase Console for new and existing Firebase users. Crashlytics is the <a href="">best-in-class crash reporter</a> that helps you track, prioritize, and fix stability issues that erode your app quality, in realtime. We'll be rolling out this update over the next several weeks, but if you're eager to try it out sooner, you can visit <a href="">g.co/firebase/opt-in</a> and get access today. </p> <center><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></center> <p>We're also integrating Crashlytics with other parts of Firebase. You can now use Crashlytics events to trigger Cloud Functions and power custom workflow integrations. For example, you can automate a workflow to route issues in a critical app flow - like your purchase path - to a particular developer or Slack room, ensuring the proper escalations, reducing the time to resolution, and increasing stability. </p><h2>Redesigning the console</h2><p>In addition to bringing Crashlytics to Firebase, collaborating with the Fabric team has allowed us to make some exciting updates to the Firebase console that will help you find key information about your app more easily and efficiently. </p><p>First of all, you're going to notice a new structure in the left-hand navigation bar. 
We've clustered Firebase products into four main areas, based on the app development lifecycle: Develop, Stability, Analytics, and Grow. All of the products that you're used to seeing in the Firebase console are still there; we've simply reorganized things to more accurately reflect the way your team works. </p><p>We've also redesigned the first screen you see when you open a Firebase project — what we call your Project Overview screen. We've heard from you that the majority of the time, when you come to the console, you're looking for four main statistics: daily active users, monthly active users, crash-free user rate, and total crashes. We've taken those four key metrics and made them front-and-center for any apps in the project. We've also added sparklines, so you can understand how your app is trending over time. </p><p>Finally, we've overhauled the Analytics section of the console. You'll find a new dashboard that is organized around the questions and tasks that you tackle on a day-to-day basis. We've also added a Latest Release section that gives you all the information you need about the stability and adoption of your latest app release, so you can make quick decisions after a launch. Lastly, we've added realtime cards to both of these sections, so you can have up-to-the-second insight into your app data. Like Crashlytics, these changes are rolling out over the next few weeks, but you can get access today by visiting <a href="">g.co/firebase/opt-in</a>. </p><div class="blogimg1"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <div class="blogimg2"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><center><em>Analytics dashboard, before and after</em></center><h2>Improving the Cloud Messaging API</h2><p>Firebase Cloud Messaging (FCM) gives you an easy way to send notifications to your users, either programmatically or through the Firebase Console. 
However, sending cross-platform notifications with more complex functionality has been difficult, sometimes requiring you to create multiple, separate messages. </p><p>Today, we're announcing a new RESTful, FCM HTTP v1 API that makes it safer and easier to send messages to your cross-platform applications. The new FCM API allows you to use platform-specific fields in a single notification. For example, you might send a simple text notification to iOS, but a request with a <code>click_action</code> to Android, all in one API call. To read more about the new FCM API, <a href="">visit our documentation</a>. </p><h2>Announcing a new A/B testing framework</h2><p>In addition to FCM, another powerful tool for driving user engagement and retention is Remote Config. Up until now, running variant tests with either Remote Config or FCM has been manual and quite some work. We've heard from many of you that you want an easier way to test how different app variants or push notification messages impact your key business metrics. </p><p>Today, we're launching the beta version of A/B testing, a new Firebase feature that's integrated with Analytics, FCM and Remote Config. It's built on the statistical engine and years of learning from <a href="">Google Optimize</a>, our free website testing and personalization product, and makes it easy to design experiments right from the Firebase console. </p><p>Setting up an A/B test is quick and simple. You can create an experiment with Remote Config or FCM, define different variant values and population sizes to test on, then set the experiment goal. From there, Firebase will take care of the rest, automatically running the experiment then letting you know when a winner towards your goal is determined with statistical significance. Learn more and <a href="">get started with A/B testing here</a>. 
</p><h2>Introducing Firebase Predictions</h2><p>Whether you're driving engagement, revenue, or a different business metric, determining the right targeting can be difficult. Being proactive, instead of reactive, is always better, but up until now, there's been no easy way to anticipate what actions your users are likely to take. To help with this, we're taking our first step in bringing the power of Google's machine learning to Firebase with a new product called Firebase Predictions. </p><p><center><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></center></p><p>We've already started using machine learning in other parts of Google, to enhance consumer products like Photos, Inbox, or the Assistant. Now, you can harness Google's machine learning, using Firebase, to help you build great products. Predictions automatically creates dynamic user groups based on predicted behavior from your Analytics data and, out of the box, it will generate four user groups: </p><ul><li>Users who are predicted to churn in the next 7 days <li>Users who are predicted to stay engaged with your app <li>Users who are predicted to spend money <li>Users who are predicted to not spend money in the next 7 days</li></ul><p>You can use these predictions for targeting with Remote Config and notifications composer, giving you the ability to only show ads to users who are predicted to not spend money in your app or send a notification to users who are predicted to churn in the next 7 days. </p><p>You can also create predictions for any <a href="">Analytics conversion event</a> in your app. For example, if completing level 3 is an important milestone in your app, you can create a prediction for users who are likely to not hit that milestone and then send them an in-app promotion using Remote Config. </p><p>We're already hearing from partners that Predictions helps them drive growth in their key business metrics. 
Halfbrick, a games developer known for popular titles such as <em>Fruit Ninja</em> and <em>Dan the Man</em>, used Predictions and Remote Config and boosted their 7-day retention rate by 20%! To learn more about Predictions, as well as read the full Halfbrick story, <a href="">visit our product page here</a>. </p><h2>Looking to the future</h2><p>While we're excited about the updates to Firebase that we've announced today, we also know that there's a lot more work to be done. We are working hard to prepare for the General Data Protection Regulation (GDPR) across Firebase and we're committed to helping you succeed under it. Offering a data processing agreement where appropriate is one important step we're taking to make sure that Firebase works for you, no matter how large your business or where your users are. We'll also be publishing tools and documentation to help developers ensure they are compliant. You can check out our privacy FAQs at <a href="">g.co/firebase/gdpr</a>. </p><p>As we continue to grow and improve the platform, we'd love to have your input. <a href="">Join our Alpha program</a> to help shape the future of the platform and stay on the cutting edge of Firebase. </p><p>If you weren't able to join us in person in Amsterdam, all of our sessions are recorded and posted to <a href="">our YouTube channel</a>. Thanks for being a part of our community and happy building! 
</p><style> .blogimg1 img { max-width: 50%; display: block; margin: auto; padding: 10px 5px 10px 5px; border: 0; float: left; } .blogimg2 img { max-width: 50%; display: block; margin: auto; padding: 10px 5px 10px 5px; border: 0; float: right; } .blogimg3 img { max-width: 85%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } </style><img src="" height="1" width="1" alt=""/>Firebase 5 talks you can’t miss at Firebase Dev Summit<figure class="profile"> <div class="profile-picture"><img alt="Frank van Puffelen" src="" /> </div><figcaption> <strong><div>Frank van Puffelen</div></strong> <em>Engineer</em> </figcaption></figure><p>We're just days away from the Firebase Dev Summit in Amsterdam! This means that all across the company, dozens of engineers and product managers are hopping on planes and saying to themselves, "Oh, whoops. I guess I better start working on that presentation." </p><p>Just kidding! As your conference organizer, I've had a chance to see a sneak preview of what's in store for you at the Firebase Dev Summit. And I feel confident telling you that this year's Dev Summit will be full of exciting product announcements, instructor-led codelabs, and way too many jokes about wooden shoes. </p><p>Now if you can't attend the Firebase Conference in person, never fear! We'll be livestreaming all of the talks from the main track in our YouTube channel, starting at 10:00 AM (Amsterdam time). And if this ends up being inconvenient for your time zone, that's okay. All of those talks, along with the in-depth sessions from our secondary track, will be recorded and posted within hours. So you can stay informed of the latest Firebase news while also getting a good night's sleep. </p><p>So while it's always hard to pick my favorite talks out of all of the great ones we've got lined up, here are a few that I'm pretty excited about: </p><p><strong>Firebase overview and announcements</strong>: If you just tune in to one presentation, make it this one. 
This is where we'll be covering all of the major announcements around what's new in Firebase. Thought we were done with the <a href="">announcement of Cloud Firestore</a>? Nope! We've got even more on the way and you can find out about it here! </p><p><strong>Actionable insights with Firebase</strong>: Speaking of new and exciting announcements, this suspiciously-vaguely-worded presentation might contain a few details about some exciting new developments with Firebase. Or it might not. You'll just have to tune in to find out! </p><p><strong>Automating your app's release process using fastlane: </strong>fastlane has been one of the most popular open-source tools among mobile developers, helping to automate many of the tedious parts of releasing an app, so you can focus on the fun bits. If you're not familiar yet with fastlane, this is a great way to find out everything that it can do for you. </p><p><strong>BigQuery for Analytics:</strong> There's a lot of amazing stuff you can perform with BigQuery and Google Analytics for Firebase, but it can also be incredibly overwhelming for mobile developers whose SQL skills might be a little rusty. Todd will be showing you some really useful tricks that you can do with BigQuery to make tackling those SQL queries a little more useful. </p><p><strong>Write production quality Cloud Functions code </strong>Cloud Functions for Firebase is a powerful tool that you can use to help realize your goal of creating a truly serverless app. But they can sometimes be tricky to get right. And while Jennifer's videos are a great way to get started with Cloud Functions, Thomas and Lauren's presentation can help you move your Cloud Functions from "neat little parlor trick" to "production-level toolkit". </p><p>Of course, these are just my thoughts; you're the ones who want to know more about Firebase, so feel free to check out the talk that's of most interest to you! My only strong opinion is about where you can get the best stroopwafels. 
(Honestly, you should just get the cheap packs at the supermarket, because the fancy ones just aren't worth it.) </p>Test Lab October 2017 Update<figure class="profile"><div class="profile-picture"><img alt="Doug Stevenson" src="" style="margin-left: 0;" /></div><figcaption><strong><div>Doug Stevenson</div></strong> <em>Developer Advocate</em></figcaption></figure><p>It's October and Halloween is around the corner! I'm sure many of you have <a href="">scary costumes</a> to put together. When you're not spending time planning your costume masterpiece, check out some of the new features available in <a href="">Firebase Test Lab for Android</a>: </p><h3>Robo test improvements</h3><p>Test Lab's automated <a href="">Robo test</a> can help you find <a href="">scary bugs</a> before they frighten your users away! If you haven't run a Robo test against your app, just upload your APK to Test Lab in the Firebase console at no cost. You also get a Robo test with your <a href="">pre-launch report</a> in the Play Console. For the crafty among you, you could possibly <a href="">make your own Robo</a>. </p><h3>Faster test results</h3><p>If you run a lot of tests with the <a href="">gcloud command line</a>, and primarily want to know if your tests simply pass or fail, you can speed up your tests by opting out of some of the extra information that Test Lab collects for you. Passing the <code>--no-record-video</code> flag will opt out of the collection of the video of your app, and <code>--no-performance-metrics</code> will opt out of performance data collected for <a href="">game loop tests</a>. So use these options to give your tests a good cardio workout for sustained high speed, which is imperative for escaping zombies. </p><h3>Support for Android Test Orchestrator</h3><p>The <a href="">Android Testing Support Library</a> recently published some enhancements to the tooling used to test Android apps. 
With these updates, you can now make use of <a href="">Android Test Orchestrator</a>, which helps you isolate your Android test cases and therefore promotes more consistent test results. Test Lab now supports this handy utility, so consider making use of it in your test suites today. Here's a gratuitous link to an <a href="">orchestra in costume</a>. </p><p>If you want to chat with the Test Lab team and others in the community who love testing their apps, why don't you join the <a href="">Firebase Slack</a> and find us in the #test-lab channel? There are no tricks there, only <a href="">treats</a>. </p>Firestore for Realtime Database Developers<figure class="profile"><div class="profile-picture"><img alt="Todd Kerpelman" src="" style="margin: 0px 0px 10px 0%;" /></div><figcaption><strong><div>Todd Kerpelman</div></strong> <em>Developer Advocate</em></figcaption></figure><p>Hey, did you hear the <a href="">big news</a>? We just announced the beta release of Cloud Firestore -- the new database that lets you easily store and sync app data to the cloud, even in realtime! </p><p>Now if you're experiencing some deja vu, you're not alone. We realize this sounds awfully similar to another product you might already be using -- the <a href="">Firebase Realtime Database</a>. </p><p>So why did we build another database? And when would you choose one over the other? Well, let's talk about what's new and different with Cloud Firestore, and why you might want to use it for your next app. </p><h2>What's different with Cloud Firestore?</h2><p>While <a href="">our documentation</a> covers all of the differences between the Realtime Database and Cloud Firestore in much more detail, let's look at the main differences between the two products. And we'll start with... 
</p><h3>Better querying and more structured data</h3><p>While the Firebase Realtime Database is basically a giant JSON tree where anything goes and lawlessness rules the land<sup id="fnref1"><a href="#fn1" rel="footnote">1</a></sup>, Cloud Firestore is more structured. Cloud Firestore is a document-model database, which means that all of your data is stored in objects called <em>documents</em> that consist of key-value pairs -- and these values can contain any number of things, from strings to floats to binary data to JSON-y looking objects the team likes to call <em>maps</em>. These documents, in turn, are grouped into <em>collections</em>. </p><div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p>Your Cloud Firestore database will probably consist of a few collections that contain documents that point to subcollections. These subcollections will contain documents that point to other subcollections, and so on. </p><div class="blogimg2"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p>This new structure gives you several important advantages in being able to query your data. </p><p>For starters, all queries are <em>shallow</em>, meaning that you can simply fetch a document without having to fetch all of the data contained in any of the linked subcollections. This means you can store your data hierarchically in a way that makes sense logically without worrying about downloading tons of unnecessary data. </p><div class="blogimg3"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p><div class="blogcptn">In this example, the document at the top can be fetched without grabbing any of the documents in the subcollections below</div></p><p>Second, Cloud Firestore has more powerful querying capabilities than the Realtime Database. In the Realtime Database, trying to create a query across multiple fields was a lot of work and usually involved denormalizing your data. 
</p><p>For example, imagine you had a list of cities, and you wanted to find a list of all cities in California with a population greater than 500k. </p><div class="blogimg4"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p><div class="blogcptn">Cities, stored in the Realtime Database</div></p><p>In the Realtime Database, you'd need to conduct this search by creating an explicit "states plus population" field and then running a query sorted on that field. </p><div class="blogimg5"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p><div class="blogcptn">Creating a combined state_and_population field, just for queries</div></p><p>With Cloud Firestore, this work is no longer necessary. In some cases, Cloud Firestore can automatically search across multiple fields. In other cases, like our cities example, Cloud Firestore will guide you towards automatically building an index required to make these kinds of queries possible… </p><div class="blogimg6"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p>...and then you can simply search across multiple fields. </p><div class="blogimg7"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p>Cloud Firestore will automatically maintain this index for you throughout the lifetime of your app. No combo fields required! </p><h3>Designed to Scale</h3><p>While the Realtime Database does scale to meet the needs of <em>most</em> apps, things can start to get difficult when your app becomes really popular, or your dataset gets truly massive. </p><p>Cloud Firestore, on the other hand, is built on top of the same Google Cloud infrastructure that powers some <a href="">pretty popular apps</a>. So it will be able to scale much more easily and to a much greater capacity than the Realtime Database can. </p><p Firestore." 
</p><p>At the start of our beta period, Cloud Firestore will already allow scaling to levels somewhat greater than that of the Realtime Database, although we are putting a few restrictions around things as we monitor how the database performs in more real-world situations. But look for Cloud Firestore to be able to expand automatically to ludicrous levels<sup id="fnref2"><a href="#fn2" rel="footnote">2</a></sup> as we get out of the beta period and move closer to a general availability release. </p><h3>Easier manual fetching of data</h3>. </p><p>Although the Realtime Database does support this with <code>.once</code>. </p><p>Of course, you can still add support for listeners, so that your clients have the ability to receive updates whenever your data changes in the database. But now you have the flexibility to retrieve your data however you'd like. </p><h3>Multi-Region support for better reliability</h3><p>Cloud Firestore is a multi-region database. This means that your data is automatically copied to multiple geographically separate regions at once. So if some unforeseen disaster were to render a data center -- or even an entire region -- offline, you can rest assured that your data is safe. </p><p>And for you database aficionados out there, we should point out that our multi-region database offers strong consistency (just like Cloud Spanner!), which means that you get the benefits of multi-region support, while also knowing that you'll be getting the latest version of your data whenever you perform a read. </p><h3>Different pricing model</h3><p>The two databases have fairly different pricing models: The Realtime Database primarily determines cost based on the <em>amount</em> of data that's downloaded, as well as the amount of data you have stored on the database. 
</p><p>While Cloud Firestore does charge for these things as well, they are <em>significantly </em>lower than what you would see in the Realtime Database<sup id="fnref3"><a href="#fn3" rel="footnote">3</a></sup>. Instead, Cloud Firestore's pricing is primarily driven by the <em>number</em> of reads or writes that you perform. </p><p. </p><p.<sup id="fnref4"><a href="#fn4" rel="footnote">4</a></sup> At least for that portion of your app -- you can always use both databases together, and that's fine, too. </p><p>Of course, these are just rough guidelines; make sure you check out the <a href="">Pricing section</a> of our documentation for all the details on Cloud Firestore pricing. </p><h2>Why you still might want to use the Realtime Database</h2><p>With this list of changes, you might come away with the impression that Cloud Firestore is simply better than the Realtime Database. And while Cloud Firestore does have a fair number of improvements over the Realtime Database, there are still situations where you might want to consider using the Realtime Database for some of your data. Specifically… </p><ul><li>The Realtime Database will probably have slightly better latency. Usually not by much -- maybe a couple hundred milliseconds from the database to the client -- but if you're looking for a database with reliably low-latency updates to power an app that feels instant, you might prefer the Realtime Database. <li>The Realtime Database has native support for <em>presence</em> -- that is, being able to tell when a user has come online or gone offline. While we do have <a href="">a solution</a> for Cloud Firestore, it's not quite as elegant. <li>As we noted above, Cloud Firestore's pricing model means that applications that perform very large numbers of small reads and writes per second per client could be significantly more expensive than a similarly performing app in the Realtime Database. <li>Cloud Firestore is still a beta product. 
The Realtime Database has been available for four years and has been battle-tested by hundreds of thousands of production-level apps. Cloud Firestore has seen limited production usage over the last few months by several dozen apps. And while some of these apps -- like <a href="">HomeAway</a> and <a href="">Hawkin Dynamics</a> -- are already out in the real world and performing quite nicely, there will likely be issues or edge cases with Cloud Firestore that we simply haven't discovered yet.</li></ul><h2>The tl;dr: Just tell me what to use!</h2><p>In general, we recommend that most <em>new</em> applications start with Cloud Firestore, unless you think that your app has unique needs, like those we outlined above, that make it more suitable for the Realtime Database. </p><p>On the other hand, if you have an <em>existing</em>. </p><p></p><p>And if you're looking for a magic, "Please convert my database from the Realtime Database to Cloud Firestore" button, there isn't one<sup id="fnref5"><a href="#fn5" rel="footnote">5</a></sup>! </p><h2>Interested in getting started?</h2><p>If you're interested in giving Cloud Firestore a try, there are a lot of places for you to get started. You can check out the <a href="">documentation</a>, play around with <a href="">our</a> <a href="">sample</a> <a href="">apps</a>, try our <a href="">interactive code labs</a>, and maybe watch a <a href="">getting</a> <a href="">started</a> <a href="">video</a> or two. </p><p>There's a lot we think you'll be able to do with Cloud Firestore and we're excited to see what kinds of apps you're able to build with it. As always, if you have questions, you can hit us up on any of our <a href="">support channels</a>, or post questions on Stack Overflow with the <a href=""><code>google-cloud-firestore</code> and <code>firebase</code> tags</a>. Good luck, and have fun! <!-- Footnotes themselves at the bottom. 
--><h2>Notes</h2><div class="footnotes"><hr><ol><li id="fn1"><p> Subject to <a href="">security rules</a>, of course <a href="#fnref1" rev="footnote">↩</a><li id="fn2"><p> Not an official term for database capacity… yet. <a href="#fnref2" rev="footnote">↩</a><li id="fn3"><p> Something on the order of 27 times cheaper, in the case of data storage <a href="#fnref3" rev="footnote">↩</a><li id="fn4"><p> As an aside, though, I've personally found that the new pricing structure makes it much easier for me to estimate my costs, which is nice. <a href="#fnref4" rev="footnote">↩</a><li id="fn5"><p> Although we do have a very handy <a href="">Migration Guide</a>. <a href="#fnref5" rev="footnote">↩</a></ol></div> <style> .blogimg img { max-width: 60%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg2 img { max-width: 85%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg3 img { max-width: 85%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg4 img { max-width: 100%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg5 img { max-width: 100%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg6 img { max-width: 100%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } .blogimg7 img { max-width: 100%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } .footnotes { font-size: 75%; } .blogcptn { font-size: 85%; font-style: italic; text-align: center; } </style><img src="" height="1" width="1" alt=""/>Firebase Introducing Cloud Firestore: Our New Document Database for Apps<figure class="profile"> <div class="profile-picture"><img alt="Alex Dufetel" src="" style="margin:0; max-height: 105%;" /> </div><figcaption> <strong><div>Alex Dufetel</div></strong> <em>Product Manager</em> </figcaption></figure> <p>Today we're excited to launch Cloud Firestore, a fully-managed NoSQL document database for mobile and web app development. 
It's designed to easily store and sync app data at global scale, and it's now available in beta. </p><p>Key features of Cloud Firestore include: </p><ul><li>Documents and collections with powerful querying <li>iOS, Android, and Web SDKs with offline data access <li>Real-time data synchronization <li>Automatic, multi-region data replication with strong consistency <li>Node, Python, Go, and Java server SDKs </li></ul><p>And of course, we've aimed for the simplicity and ease-of-use that is always top priority for Firebase, while still making sure that Cloud Firestore can scale to power even the largest apps. </p><p><em><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></em></p><h2>Optimized for app development</h2><p>Managing app data is still hard; you have to scale servers, handle intermittent connectivity, and deliver data with low latency. </p><p>We've optimized Cloud Firestore for app development, so you can focus on delivering value to your users and shipping better apps, faster. Cloud Firestore: </p><ul><li><strong>Synchronizes data between devices in real-time. </strong></li></ul><ul><li><strong>Uses collections and documents to structure and query data</strong>..</li></ul><ul><li><strong>Enables offline data access via a powerful, on-device database.</strong> This local database means your app will function smoothly, even when your users lose connectivity. This offline mode is available on Web, iOS and Android.</li></ul><ul><li><strong>Enables serverless development</strong>..</li></ul><ul><li><strong>Integrates with the rest of the Firebase platform</strong>. You can easily configure Cloud Functions to run custom code whenever data is written, and our SDKs automatically integrate with Firebase Authentication, to help you get started quickly.</li></ul><h2>Putting the 'Cloud' in Cloud Firestore </h2><p>As you may have guessed from the name, Cloud Firestore was built in close collaboration with the Google Cloud Platform team. 
</p><p. </p><p>It also means that delivering a great server-side experience for backend developers is a top priority. We're launching SDKs for Java, Go, Python, and Node.js today, with more languages coming in the future. </p><h2>Another database?</h2><p. </p><p. </p><p <a href="">in-depth comparison between the two databases here</a>. </p><p>We're continuing development on both databases and they'll both be available in our console and documentation. </p><h2>Get started!</h2><p>Cloud Firestore enters public beta starting today. If you're comfortable using a beta product you should give it a spin on your next project! Here are some of the companies and startups who are already building with Cloud Firestore: </p><div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div><p>Get started by visiting the <a href="">database tab in your Firebase console</a>. For more details, see the <a href="">documentation</a>, <a href="">pricing</a>, <a href="">code samples</a>, <a href="">performance limitations during beta</a>, and view our open source <a href="">iOS</a> and <a href="">JavaScript</a> SDKs on GitHub. </p><p>We can't wait to see what you build and hear what you think of Cloud Firestore! </p><img src="" height="1" width="1" alt=""/>Firebase Testing Functions Locally with the Cloud Functions Shell<figure class="profile"><div class="profile-picture"><img alt="Doug Stevenson" src="" style="margin-left: 0;" /> </div><figcaption> <strong><div>Doug Stevenson</div></strong> <em>Developer Advocate</em> </figcaption></figure> If you've been working with <a href="">Cloud Functions for Firebase</a>, you've probably wondered how you could speed up the development of your functions. This was possible for HTTPS-type functions using the firebase serve command in the Firebase CLI, but it wasn't an option for other types of functions. Now, local testing of all of your functions is easy with the Firebase CLI. 
If you want to try out your code before you deploy it to Cloud Functions, you can do that with the <a href="">Cloud Functions shell</a> in the Firebase CLI starting at version 3.11.0 or later. <p>Here's how it works, in a nutshell. We'll use a Realtime Database trigger as an example. </p><p>Imagine you have an existing project with a single function in it called makeUppercase. It doesn't have to be deployed yet, just defined in your index.js: </p> <pre class="prettyprint"><br />exports.makeUppercase = functions.database.ref('/messages/{pushId}/original').onCreate(event => {<br /> const original = event.data.val()<br /> console.log('Uppercasing', event.params.pushId, original)<br /> const uppercase = original.toUpperCase()<br /> return event.data.ref.parent.child('uppercase').set(uppercase)<br />})<br /></pre> <p>This onCreate database trigger runs when a new message is pushed under /messages with a child called original, and writes back to that message a new child called uppercase with the original value capitalized. </p><p>Now you can kick off the emulator shell from your command line using the Firebase CLI: </p> <pre class="prettyprint"><br />$ cd your_project_dir<br />$ firebase experimental:functions:shell<br /></pre> <p>Then, you'll see something like this: </p> <pre class="prettyprint"><br />i functions: Preparing to emulate functions.<br />✔ functions: makeUppercase<br />firebase> <br /></pre> <p>That firebase prompt is waiting there for you to issue some commands to invoke your makeUppercase function. The <a href="">documentation for testing database triggers</a> says that you can use the following syntax to invoke the function with incoming data to describe the event: </p> <pre class="prettyprint"><br />makeUppercase('foo')<br /></pre> <p>This emulates the trigger of an event that would be generated when a new message object is pushed under /messages that has a child named original with the string value "foo". 
When you run this command in the shell, it will generate some output at the console like this: </p> <pre class="prettyprint"><br />info: User function triggered, starting execution<br />info: Uppercasing pushId1 foo<br />info: Execution took 892 ms, user function completed successfully<br /></pre> <p>Notice that the console log in the function is printed, and it shows that the database path wildcard pushId was <em>automatically</em> assigned the value pushId1 for you. Very convenient! But you can still specify the wildcard values yourself, if you prefer: </p> <pre class="prettyprint"><br />makeUppercase('foo', {params: {pushId: 'custom_push_id'}})<br /></pre> <p>After emulating this function, if you look inside the database, you should also see the results of the function on display, with /messages/{pushId}/uppercase set to the uppercased string value "FOO". </p><p>You can simulate any database event this way (onCreate, onDelete, onUpdate, onWrite). Be sure to read the docs to learn how to invoke each of them correctly. </p><p>In addition to database triggers, you can also emulate <a href="">HTTPS functions</a>, <a href="">PubSub functions</a>, <a href="">Analytics functions</a>, <a href="">Storage functions</a>, and <a href="">Auth functions</a>, each with their own special syntax. </p><p>The Cloud Functions shell is currently an experimental offering, and as such, you may experience some rough edges. If you encounter a problem, please let us know by <a href="">filing a bug report</a>. You can also talk to other Cloud Functions users on the <a href="">Firebase Slack</a> in the #functions channel. </p> <h3>Some tips for using the shell</h3> <p>Typing the function invocation each time can be kind of a pain, so be sure to take advantage of the fact that you can navigate and repurpose your invocation history much like you would your shell's command line using the arrow keys. 
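Since the function body is ordinary JavaScript, another option is to exercise its logic directly in Node by stubbing the event shape it expects. A sketch — the fake event helper below is illustrative and not part of the Firebase SDK:

```javascript
// Sketch: exercising the makeUppercase logic without the emulator by stubbing
// the event object. The fake 'ref' is a tiny stand-in that records writes.

const writes = {}; // captures what the function would write back

function fakeEvent(original, pushId) {
  return {
    params: { pushId },
    data: {
      val: () => original,
      ref: {
        parent: {
          child: (name) => ({
            set: (value) => {
              writes[`/messages/${pushId}/${name}`] = value;
              return Promise.resolve();
            },
          }),
        },
      },
    },
  };
}

// Same body as the makeUppercase trigger from the post
function makeUppercase(event) {
  const original = event.data.val();
  const uppercase = original.toUpperCase();
  return event.data.ref.parent.child('uppercase').set(uppercase);
}

makeUppercase(fakeEvent('foo', 'pushId1')).then(() => {
  console.log(writes['/messages/pushId1/uppercase']); // 'FOO'
});
```

This kind of stub is handy for unit tests that run without any emulator at all, though the shell remains the closer match to real trigger behavior.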
</p><p>Also note that the shell is actually a full <a href="">node REPL</a> that you can use to execute arbitrary JavaScript code and use <a href="">special REPL commands and keys</a>. This can be useful for scripting some of your test code. </p><p>Since you can execute arbitrary code, you can also dynamically load and execute code from other files using the require() function that you're probably already familiar with. </p><p>And lastly, if you're like me, and you prefer to use a programmer's editor such as <a href="">VS Code</a> to write all your JavaScript, you can easily emulate functions by sending code you want to run to the Firebase CLI. This command will run test code from a file redirected through standard input: </p><pre class="prettyprint"><br />$ firebase experimental:functions:shell < tests.js<br /></pre><p>Happy testing! </p><img src="" height="1" width="1" alt=""/>Firebase Sprynt launches eco-friendly, free ridesharing with Firebase<figure class="profile"> <div class="profile-picture"> <img alt="Ken Yarmosh (from Savvy Apps)" src="" style="margin:0;" /> </div> <figcaption> <strong><div>Ken Yarmosh</div></strong><em style="font-size:85%;">Guest post from Savvy Apps CEO & Founder</em> </figcaption></figure> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> A newcomer to the ridesharing space, <a href="">Sprynt</a> is taking a <a href="">different approach to building its service</a>. They have a 100% electric fleet and rides are 100% free, paid for by local and corporate sponsorships. So when they first contacted our agency<a href=""> Savvy Apps</a>, we were excited about the opportunity to work with them. We knew on the technology side, though, that Sprynt would pose some unique challenges. After considering a few options, we decided to use Firebase to tackle these challenges and create the best experience for riders, drivers, and the Sprynt management team. 
<h2>Prioritizing real-time communication and queue management</h2><p>One of the most important components of a ridesharing app is keeping everything synced in real-time. Sprynt needed fast and reliable synchronized rider and driver apps, GPS tracking, and ride-request queue management. That's why one of the first features that attracted us to Firebase for this app was the<a href=""> Realtime Database</a>. </p><p>We leveraged Firebase's synchronization solution for its speed, as well as the Realtime Database listeners for keeping the system fast and lightweight. In our experience, Firebase excels when dealing with simple data schemas that need real-time communication between clients and server. </p><h2>Extending to a complete solution</h2><p>Besides the core product requirement of real-time communication, Sprynt needed a platform that could support a fully-featured app. For example: authentication for registering and logging in, notifications to help with rider and driver communication, and an easy-to-use dashboard to help the Sprynt team understand and manage their system. </p><p>Firebase has all of these components, which made it a leading candidate and our eventual choice. It provides the ability to quickly set up and scale a backend with authentication, push notifications, custom cloud functions, file storage, and analytics. The dashboards and admin tools also allow us to stay focused on building what matters most: a compelling user experience. Simply put, Firebase let Savvy begin a product like Sprynt quickly without compromise. </p><p>For authentication, we turned to<a href=""> Firebase Auth</a> because we wanted to take advantage of the new phone authentication added this year at Google I/O. We were able to quickly build an authentication mechanism that allowed for users to sign up via phone numbers. This feature was an important one for Sprynt, since it streamlined the onboarding process. 
That's especially important when someone might want to get started with Sprynt in a hurry. </p><p>When it came to building in notifications, we used<a href=""> Firebase Cloud Messaging</a>. FCM allowed us to send notifications programmatically, such as when a driver is on the way to a rider. Beyond that, FCM gives Sprynt admins the ability to send out quick one-off messages to their user base through the notifications dashboard. We feel that this functionality will prove invaluable for handling service outages, highlighting new specials from advertisers, or other comparable communication regarding the Sprynt service. </p><h2>Ensuring Sprynt's longevity</h2><p>Sprynt launched to great success. In the first month of service, they delivered around 5,000 passengers in their pilot service area. The app maintains a 5-star rating and their advertisers are very happy with their results so far. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <p>Sprynt is already pushing hard to keep up with demand from riders and advertisers, as well as the influx of new driver applications. They also have already begun building a steady, repeat ridership base.<a href=""> Google Analytics for Firebase</a> has proven helpful in tracking this kind of usage, as well as version update adoption rates, user device types, and custom events. </p><p>We built Sprynt using Firebase for long-term sustainability without constant developer involvement. By leveraging the Firebase console, we made it as easy as possible for Sprynt's team to manage their business, with as little development support as needed.<a href=""> Cloud Storage for Firebase</a> plus<a href=""> Cloud Functions for Firebase</a> allow Sprynt to upload and process updated or new service areas without directly editing the database. These features will become even more important as Sprynt continues to grow in popularity and open new service areas. 
</p><h2>A smooth ride</h2><p>While Firebase Realtime Database has some weaknesses in its query support — particularly around complex queries that include filtering and sorting collections — overall, we've been happy with the platform and its progress. </p><p>We've used Firebase since it launched years ago, but we continue to appreciate when the observeSingleEventOfType function on one device responds to an event triggered by another. Watching it happen for the first time between the Sprynt Rider app and Sprynt Driver app still provides that "aha" moment, even today. </p><p>Firebase continues to enhance our ability to build and scale new businesses as quickly as possible. </p><p>If you want to learn more about using Firebase yourself, check out the<a href=""> use cases section of the website</a> or subscribe to the Firebase channel on<a href=""> YouTube</a>. </p> <style> .blogimg img { max-width: 100%; display: block; margin: auto; padding: 10px 0 10px 0; border: 0; } </style><img src="" height="1" width="1" alt=""/>Firebase tips for getting the most out of Crashlytics<figure class="profile"> <div class="profile-picture"> <img src="" style="margin:0;"> </div> <figcaption>Originally posted on the <a href="">Fabric Blog</a> by Jason St. Pierre, Product Manager</figcaption></figure> <p!). </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <p><strong>In this post, we want to share 7 pro-tips that will help you get even more value out of Crashlytics</strong>, which is now part of the <a href="">new Fabric dashboard</a>, so you can track, prioritize, and solve issues faster. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <h2>1. Speed up your troubleshooting by checking out crash insights</h2><p>In July, we officially released crash insights out of beta. 
<a href="">Crash insights</a> helps you understand your crashes better by giving you more context and clarity on <em>why</em> those crashes occurred. When you see a green lightning bolt appear next to an issue in your issues list, click on it to see potential root causes and troubleshooting resources. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <h2>2. Mark resolved issues as "closed" to track regressions</h2> <a href="">regression detection</a>. Regression detection alerts you when a previously closed issue reoccurs in a new app version, which is a signal that something else may be awry and you should pay close attention to it. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <h2>3. Close and lock issues you want to ignore and declutter your issue list</h2><p>As a general rule of thumb, you should close issues so you can monitor regression. However, you can also close <em>and lock</em> issues that you <em>don't</em>>4. Use wildcard builds as a shortcut for adding build versions manually</h2><p <a href="">APK Splits on Android</a>, a wildcard build will quickly show you crashes for the combined set of builds. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <h2>5. Pin your most important builds to keep them front and center</h2><p>As a developer, you probably deploy a handful of builds each day. As a development <em>team</em>, that number can shoot up to tens or hundreds of builds. The speed and agility with which mobile teams ship is impressive and awesome. But you know what's <strong>not</strong>. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <h2>6. Pay attention to velocity alerts to stay informed about critical stability issues</h2><p <a href="">velocity alert</a>. 
Velocity alerts are proactive alerts that appear right in your crash reporting dashboard when an issue suddenly increases in severity or impact. We'll send you an email too, but you should also install the <a href="">Fabric mobile app</a>, which will send you a push notification so you can stay in the loop even on the go. Keep an eye out for velocity alerts and you'll never miss a critical crash, no matter where you are! <h2>7. Use logs, keys, and non-fatals in the right scenarios</h2><p. </p><p><strong><em>Logs:</em></strong> <a href="">iOS</a>, <a href="">Android</a>, and <a href="">Unity</a> apps. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <p><strong><em>Keys: </em></strong> <a href="">iOS</a>, <a href="">Android</a>, and <a href="">Unity</a> apps. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <p><strong><em>Non-fatals: </em></strong>While Crashlytics captures crashes automatically, you can also record non-fatal events. Non-fatal events mean that your app is experiencing an error, but not actually crashing. </p><p. </p><p>You should set up non-fatal events for something you want the stack trace for so you can triage and troubleshoot the issue. </p><p>If you simply want to count the number of times something happens (and don't need the stack trace), we'd recommend checking out <a href="">custom events</a>. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <p>These 7 tips will help you get the most out of Crashlytics. If you have other pro-tips that have helped you improve your app stability with Crashlytics, <a href="">tweet them at us</a>! We can't wait to learn more about how you use Crashlytics. 
</p><p><a href="">Get Crashlytics </a></p> <style> .blogimg img { max-width: 100%; display: block; border: 0; margin: auto; padding: 10px 0 10px 0; } </style><img src="" height="1" width="1" alt=""/>Firebase Cross-platform Firebase Sample App Featuring Best Practices<figure class="profile"> <div class="profile-picture"><img alt="Ibrahim Ulukaya" src="" style="margin:0;" /> </div><figcaption> <strong><div>Ibrahim Ulukaya</div></strong> <em>Developer Programs Engineer</em> </figcaption></figure> <p>We've provided a number of different ways for you to get started building your app with the Firebase platform -- everything from <a href="">quickstarts</a> for many of our individual products, to <a href="">codelabs</a>, to some Getting Started screencasts on our <a href="">YouTube channel</a>. </p><p? </p><p>For all you developers who want to see an app built for a real life scenario, we've created an open sourced narrative app called <a href="">FriendlyPix</a>. FriendlyPix uses some of the most popular Firebase SDKs, such as Analytics, Cloud Messaging, Cloud Functions, Authentication (with FirebaseUI), Realtime Database, Storage, Remote Config, Invites, and AdMob. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <h2>Best Practices</h2><p>FriendlyPix highlights some of the best practices when using Firebase, such as: </p><ul><li>Using FirebaseUI for Auth <li>Creating indexes in the Realtime Database for fast search <li>Fanning out simultaneous writes to avoid race conditions <li>Building a data hierarchy of flat, denormalized data for fast access <li>Running ordered, filtered queries for partial data access <li>Creating lazily updated feeds <li>Using the proper file and folder structure when uploading images to Firebase Storage in conjunctions with Cloud Functions</li></ul><p>We look forward to seeing you use these best practices in your app, or use FriendlyPix as a starting point for your app. 
</p><h2>Get Started</h2><p>To get started with FriendlyPix, you can read the <a href="">design document</a> or check out the apps (<a href="">Android</a>, <a href="">iOS</a>, and <a href="">Web</a>) and associated <a href="">Cloud Functions</a> on GitHub. </p><p>The web version is already hosted at <a href=""></a> for you to try out, and we are planning to release FriendlyPix on other platforms for you to try as well. </p><p>We'll be updating the app and adding further SDKs in the coming weeks, so keep an eye on this blog or watch our Github repos to stay updated. </p><h2>Questions / Issues / Contribute</h2><p>You can ask FriendlyPix related questions on StackOverflow with the firebase and friendlypix tags. Issue trackers are hosted on Github in their respective platform repos: <a href="">Web</a>, <a href="">iOS</a>, and <a href="">Android</a>. We'd love for you to contribute to the project, although before doing so please read our <a href="">Contributor guide</a>. </p> <style> .blogimg img { width: 100%; border: 0; margin: 0; padding: 10px 0 10px 0; } </style><img src="" height="1" width="1" alt=""/>Firebase What's new with Firebase Dynamic Links?<figure class="profile"> <div class="profile-picture"> <img alt="Todd Kerpelman" src="" style="margin: 0px 0px 10px 0%;" /> </div> <figcaption> <strong><div>Todd Kerpelman</div></strong> <em>Developer Advocate</em> </figcaption></figure> <p>Perhaps you're already familiar with <a href="">Firebase Dynamic Links</a> --! </p><h2>Better App Preview page</h2><p. </p><p>That said, our initial page was a little… spartan. Since introducing this page, we've made a few improvements to dress it up with graphics and assets taken either from your app's listing in the app store, or from preview assets that <a href="">you can specify directly</a>. We've found this has led to a significant bump in the number of users who continue to click through to the app store. And it looks better, too. 
</p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" /></a></div> <div class="blogcptn">App Preview pages: Before, the newer default version, and one with custom assets </div> <p>Of course, if you're still not excited about the idea of having an App Preview page, you're always welcome to remove it. You can do this by adding <code><a href="">efr=1</a></code> to the dynamic link URL you're generating, checking the "Skip the app preview page" checkbox in the Firebase Console, or using the <code>forcedRedirectEnabled</code> parameter in the <a href="">iOS</a> and <a href="">Android</a> builder APIs. </p><h2>Better error messages -- now with links!</h2><p>We've also improved the error messages you see when something in your Dynamic Links setup is misconfigured, and many of them now link directly to the relevant documentation so you can fix the problem quickly. </p><h2>Self-diagnostic tools on iOS</h2><p>While we're on the subject of making it easier for you to implement Dynamic Links, we've also included self-diagnostic tools with the Dynamic Links library on iOS. By calling <code>DynamicLinks.performDiagnostics(completion: nil)</code>, the library will check your app's configuration and print out any problems it finds along the way. </p><h2>More detailed analytics </h2><p>We've also expanded the analytics information you can see for your Dynamic Links in the Firebase console. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" /></a></div> <p>And, as always, if you want to add in <code>utm</code> parameters to your Dynamic Links, Google Analytics for Firebase can make sure it attributes any important conversion events to the Dynamic Link that brought the user to your app in the first place.
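The `efr=1` option mentioned above is just a query parameter appended to a long Dynamic Link URL. Here's a minimal sketch; the `example.page.link` domain and the helper name are made up for illustration.

```javascript
// Append efr=1 to a long Dynamic Link so the app preview page is skipped,
// as described above. The domain in the usage example is hypothetical.
function skipPreviewPage(dynamicLink) {
  const url = new URL(dynamicLink);
  url.searchParams.set('efr', '1');
  return url.toString();
}
```

In native code you would more likely set the `forcedRedirectEnabled` parameter on the iOS or Android builder API instead of editing the URL by hand.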
</p><h2>Give 'em a try!</h2><p>All of these changes are on top of a bunch of other improvements we've made to Firebase Dynamic Links over the past few months, including: </p><ul><li>Adding a <a href="">REST API</a> for retrieving analytics information on your short Dynamic Links, in case you want analytics information but just don't feel like visiting the Firebase Console <li>A link debugging page that shows you, through a pretty fantastic <a href="">flow chart</a>, exactly what will happen in every situation when a user clicks on a dynamic link <li>Better tools on iOS and Android to <a href="">build dynamic links on the fly</a></li></ul><p>So if you haven't tried Firebase Dynamic Links lately, this would be a great time to give 'em a try! You can check out <a href="">all of our documentation</a> to get started, and you can always reach us through our <a href="">support channels</a>. </p> <style> .blogimg img { width: 100%; margin: 0; border: 0; padding: 10px 0 10px 0; } .blogcptn { font-size: 85%; font-style: italic; text-align: center !important; } </style><img src="" height="1" width="1" alt=""/>Firebase How we migrated to Firebase and GCP: Smash.gg <em>Originally posted by Nathan Welch, Engineering Director/Co-founder, Smash.gg on the Google Cloud Platform Blog</em><p><em>[Editor's note: <a href="">Smash.gg</a>.] </em></p><p>Real-time features are central to the experience we deliver at Smash.gg, so when we were looking for a hosted real-time solution we evaluated <a href="">PubNub</a> and <a href="">Firebase</a>. Ultimately, we decided to launch with Firebase because it's widely used, is backed by Google, and is incredibly well-priced. </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" /></a></div> <p>Two players checking into, setting up, and reporting an online match using the Firebase Realtime Database for real-time interactions. </p><p>We got our start with Firebase in May 2016.
Our first release used the <a href="">Firebase Realtime Database</a> as a kind of real-time cache to keep match data in sync between both entrants. When matches were updated or reported on our backend, we also wrote the updated match data to Firebase. We use <a href="">React</a> and <a href="">Flux</a>, so we made a wrapper component to listen to Firebase and dispatch updated match data to our Flux stores. Implementing a chat service with Firebase was similarly easy. Using <a href="">Firechat</a> as inspiration, it took us about a day to build the initial implementation and another day to make it production-ready. </p><p>Compared with rolling our own solution, Firebase was an obvious choice given the ease of development and time/financial cost savings. Ultimately, it reduced the load on our servers, simplified our reporting flow, and made the match experience truly real-time. Later that year, we started using <a href="">Firebase Cloud Messaging</a> (FCM) to send browser push notifications using <a href="">Cloud Functions</a> triggers as Firebase data changed (e.g., <a href="">to notify admins of moderator requests</a>). Like the Realtime Database, Cloud Functions was incredibly easy to use and felt magical the first time we used it. Cloud Functions also gave us a window into how well Firebase interacts with Google Cloud Platform (GCP) services like <a href="">Cloud Pub/Sub</a> and <a href="">Google BigQuery</a>. </p><h2>Migrating to GCP</h2><p>In March of 2017 we attended <a href="">Google Cloud Next '17</a> for the Cloud Functions launch. There, we saw that other GCP products had a similar focus on improving the developer experience and lowering development costs.
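The listen-and-dispatch pattern described above can be sketched in a few lines. All names here are hypothetical illustrations, not Smash.gg's actual code: the snapshot-to-action step is a pure function, and wiring it to the Realtime Database is a single listener.

```javascript
// Turn a Realtime Database snapshot into a Flux-style action.
function matchUpdatedAction(matchId, matchData) {
  return { type: 'MATCH_UPDATED', matchId, matchData };
}

// Attach a listener that dispatches on every change to the match node.
// `ref` would be something like firebase.database().ref(`matches/${matchId}`);
// the 'value' event fires once with current data and again on each update.
function listenToMatch(ref, matchId, dispatch) {
  return ref.on('value', snapshot =>
    dispatch(matchUpdatedAction(matchId, snapshot.val())));
}
```

A wrapper React component would call `listenToMatch` on mount and detach the listener on unmount, so the Flux stores always reflect the latest match data.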
Current products like Pub/Sub, <a href="">Stackdriver Trace</a> and <a href="">Logging</a>, and <a href="">Google Cloud Datastore</a> caught our attention, and seeing how well GCP services work together (e.g., <a href="">Google Container Engine</a> and <a href="">App Engine</a> with Stackdriver Trace/Logging, Stackdriver with Pub/Sub and BigQuery), we decided to evaluate a full migration. </p><p>We started migrating our application in mid-May, using the following services: Container Engine, Pub/Sub, <a href="">Google Cloud SQL</a>, <a href="">log-based metrics</a> and <a href="">logs export</a> from Stackdriver to BigQuery. You could also do this using other services, but our GCP-only approach was a quick and mostly free way for us to get to parity while experimenting with GCP services. </p><p>One example: to get request traces flowing into Stackdriver, we chained a few services together: </p><ol><li>We built a trace reporter to log out traces as JSON. <li>We then sent the traces to a Pub/Sub topic using Stackdriver log exports. <li>Finally, we made a Pub/Sub subscriber in Cloud Functions to report the traces using the REST API.</li></ol><p>The Google Cloud SDK is certainly a more appropriate solution for tracing in production, but the fact that this combination of services worked well speaks to how easy it is to develop in GCP. </p><h2>Post-migration results</h2><p>After running our production environment on GCP for a month, we've saved both time and money. Overall costs are ~10% lower without any <a href="">Committed Use Discounts</a>. We <a href="">recently closed</a> our Series A from <a href="">Spark Capital</a>, <a href="">Accel</a>, and <a href="">Horizon Ventures</a>, and <a href="">we're hiring</a>!
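Step 3 of the trace pipeline above can be sketched as follows. The function and field names are illustrative, not Smash.gg's code; the one concrete detail is that Pub/Sub delivers its payload as a base64-encoded string, which the subscriber must decode before forwarding.

```javascript
// Decode an exported Stackdriver log entry delivered via Pub/Sub.
// Pub/Sub messages carry their payload base64-encoded in `data`.
function decodeLogEntry(pubsubMessage) {
  const json = Buffer.from(pubsubMessage.data, 'base64').toString('utf8');
  return JSON.parse(json);
}

// With the Cloud Functions SDK this would be wired up roughly as:
// exports.reportTrace = functions.pubsub.topic('traces').onPublish(msg => {
//   const entry = decodeLogEntry(msg);
//   return sendToTraceApi(entry.jsonPayload); // hypothetical POST to the Trace REST API
// });
```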
</p> <style> .blogimg img { width: 100%; padding: 10px 0 10px 0; margin: 0; border: 0; } </style><img src="" height="1" width="1" alt=""/>Firebase Firebase Admin SDK for Go<figure class="profile"><div class="profile-picture"><img alt="Hiranya Jayathilaka" src="" style="margin: 0px 0px 0px -7%;" /></div><figcaption> <strong><div>Hiranya Jayathilaka</div></strong> <em>Software Engineer</em></figcaption></figure> <p>The Firebase Admin SDK for Go is now generally available. This is the fourth programming language to join our growing family of Admin SDKs, which already includes support for Java, Python and Node.js. Firebase Admin SDKs enable application developers to programmatically access Firebase services from trusted environments. They complement the Firebase client SDKs, which enable end users to access Firebase from their web browsers and mobile devices. The initial release of the Firebase Admin SDK for Go comes with some Firebase Authentication features: custom token minting and ID token verification. </p><h2>Initializing the Admin SDK for Go</h2><p>Similar to the other Firebase Admin SDKs, the Go Admin SDK can be initialized with a variety of authentication credentials and client options. The following code snippet shows how to initialize the SDK using a service account credential obtained from the Firebase console or the Google Cloud console: </p> <pre class="prettyprint">import (<br /> "golang.org/x/net/context"<br /><br /> firebase "firebase.google.com/go"<br /> "google.golang.org/api/option"<br />)<br /><br />opt := option.WithCredentialsFile("path/to/key.json")<br />app, err := firebase.NewApp(context.Background(), nil, opt)</pre><p>If you are running your code on Google infrastructure, such as Google App Engine or Google Compute Engine, the SDK can auto-discover application default credentials from the environment. 
In this case you do not have to explicitly specify any credentials when initializing the Go Admin SDK: </p> <pre class="prettyprint">import (<br /> "golang.org/x/net/context"<br /><br /> firebase "firebase.google.com/go"<br />)<br /><br />app, err := firebase.NewApp(context.Background(), nil)</pre><h2>Minting Custom Tokens and Verifying ID Tokens</h2><p>The initial release of the Firebase Admin SDK for Go comes with support for <a href="">minting custom tokens</a> and <a href="">verifying Firebase ID tokens</a>. The custom token minting allows you to authenticate users using your own user store or authentication mechanism: </p> <pre class="prettyprint">client, err := app.Auth()<br />if err != nil {<br /> return err<br />}<br /><br />claims := map[string]interface{}{<br /> "premium": true,<br /> "package": "gold",<br />}<br />token, err := client.CustomToken("some-uid", claims)</pre><p>The resulting custom token can be sent to a client device, where it can be used to <a href="">initiate an authentication flow</a> using a Firebase client SDK. On the other hand, the ID token verification facilitates securely identifying the currently signed in user on your server: </p> <pre class="prettyprint">client, err := app.Auth()<br />if err != nil {<br /> return err<br />}<br /><br />decoded, err := client.VerifyIDToken(idToken)<br />uid := decoded.UID</pre><p>To learn more about using the Firebase Admin SDK for Go, see our <a href="">Admin SDK setup guide</a>. </p><h2>What's Next?</h2><p>We plan to further expand the capabilities of the Go Admin SDK by implementing other useful APIs such as user management and Firebase Cloud Messaging. This SDK is also open source. Therefore we welcome you to browse our <a href="">Github repo</a> and get involved in the development process by reporting issues and sending pull requests. To all Golang gophers out there, happy coding with Firebase! 
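As an aside on the custom tokens minted above: a custom token is just a signed JWT, so you can inspect the claims it carries. The sketch below decodes a JWT payload without verifying the signature; it is for illustration only and must never be used for actual auth decisions (real verification is what `VerifyIDToken` and the client SDKs are for).

```javascript
// Decode the payload section of a JWT (e.g. a Firebase custom token)
// WITHOUT verifying its signature -- for inspection/illustration only.
function decodeJwtPayload(token) {
  const payloadB64 = token.split('.')[1];          // header.payload.signature
  const json = Buffer.from(payloadB64, 'base64').toString('utf8');
  return JSON.parse(json);
}
```

Decoding a minted token like the one above would show the `uid` and the custom `claims` map (`premium`, `package`) that your security rules and client code can then act on.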
</p><img src="" height="1" width="1" alt=""/>Firebase Faster automated screenshots: fastlane's snapshot now supports multiple concurrent simulators <em>Originally posted by <a href="">David Ohayon</a>, Software Engineer on the <a href="">Fabric Blog</a></em> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" /></a></div> <p>Every mobile developer needs to take app screenshots in order to have their app listed on the app stores. Like a book cover, screenshots are crucial in depicting the best parts of your app and convincing potential users to download it. </p><p>Unfortunately, generating app screenshots is a huge pain because they take a ton of time, especially if your app supports different locales and languages. For example, if you need to take 5 screenshots for your app store listing - but your app supports 20 languages for 6 devices - you'll have to manually take 600 screenshots (5 x 20 x 6)! It makes us shudder to think how many precious hours that would eat up. </p><p>fastlane's snapshot tool automates the process of taking screenshots (in the background) so you can focus on building features users love. <strong>Today, we're excited to share that snapshot now supports multiple, concurrent simulators for iOS apps in Xcode 9</strong>. Taking screenshots just got even faster because you can now generate screenshots for all of your devices at the same time! </p><h2>Speeding up screenshots (even more!)</h2><p>Before Xcode 9, only one simulator could be running at a time, which meant that you had to run snapshot once for each device you wished to support. While snapshot automated the process of taking screenshots, we wanted to make things even easier. </p><p>The launch of Xcode 9 gave us another opportunity to improve snapshot. In Xcode 9, multiple UI tests can run simultaneously, so we added multiple simulator support to snapshot as well.
Now, you can <strong>take screenshots for all specified devices with a single command, at the same time</strong>. This drastically shortens the time it takes to generate your screenshots. </p><p>Here's an example: </p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" /></a></div> <h2>More exciting updates on the way</h2><p>fastlane's mission is to save you time by automating the cumbersome tasks of app deployment, even as mobile evolves. That's why we're fully committed to updating the fastlane toolset to take advantage of new releases and features - such as Xcode 9. </p><p>And since fastlane is open source, we're so thankful that our community also helps us make fastlane better by <a href="">building and using plugins</a>. In fact, we now have more user-generated plugins available for you to try than native fastlane actions. We recently <a href="">reorganized these plugins</a> to make it easier to find the right plugins for your unique needs. </p><p>We always strive to anticipate your needs and build our tools to be ready for the future. To start using the new version of snapshot, simply update fastlane and run snapshot as you normally would. If you're taking screenshots manually, check out our <a href="">guide to start using snapshot</a> (and enjoy the extra free time!). As always, we can't wait to hear what you think! </p> <style> .blogimg img { width: 100%; margin: 0; border: 0; padding: 10px 0 10px 0; } </style><img src="" height="1" width="1" alt=""/>Firebase Guard Your Web Content from Abuse with reCAPTCHA and Firebase<figure class="profile"><div class="profile-picture"><img alt="Doug Stevenson" src="" style="margin-left: 0;" /> </div><figcaption> <strong><div>Doug Stevenson</div></strong> <em>Developer Advocate</em> </figcaption></figure> If you've browsed the web at all, you've probably seen some sites that ask you to prove you're a human by presenting a <a href="">reCAPTCHA</a> challenge.
For example, if you try to use the goo.gl URL shortener, it won't let you shorten a link until you satisfy the reCAPTCHA, which looks like this: <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" /></a></div><p>Web site engineers do this to protect their site from spam and abuse from bots, while allowing legitimate human use. Why is protection needed? Maybe you have some backend code that's expensive in time and storage and you only want actual users of your web site to access it. </p><p>If you have a web site, you can also use reCAPTCHA to protect its services. And, if you're building your site with <a href="">Firebase Hosting</a>, it's pretty easy to get it integrated with the help of <a href="">Cloud Functions for Firebase</a> to provide a secure, scalable backend to verify the completion of the reCAPTCHA. </p><p>In this blog post, I'll walk you through a few steps that will get you to a very basic integration that you can extend later for your own site. For this walkthrough, I'm assuming you already have some experience with web development, the <a href="">Firebase console</a>, and the <a href="">Firebase CLI</a>. </p><h2>1. Create a Firebase project in the console</h2><p>Navigate to the Firebase console and create a new project. There's no need to add billing to this project - you can experiment fully without providing a credit card. Once you create the project, there's nothing else you need to do in the console. </p><h2>2. Set up a directory for your project code</h2><p>Using the Firebase CLI, make sure you're logged in with the same Google account that you used to create the project: </p> <pre class="prettyprint">$ firebase login<br /></pre><p>Now, create a root project directory and initialize it: </p> <pre class="prettyprint">$ mkdir my_project<br />$ cd my_project<br />$ firebase init<br /></pre><p>When running <code>firebase init</code>, be sure to select <strong>both hosting and functions</strong>.
When you're asked to choose a project, select the one you just created earlier. Take the defaults for every other prompt. You'll end up with a directory structure that contains a <code>public</code> folder for web content, and a <code>functions</code> folder for your backend code. </p><p>For the Cloud Functions backend, we'll need a couple modules from npm to help verify the reCAPTCHA. The reCAPTCHA API requires you to make an HTTP request for verification from your backend, and you can do that with the request and request-promise modules. Pull them into your project like this: </p> <pre class="prettyprint">$ cd functions<br />$ npm install request request-promise<br /></pre><p>Your <code>package.json</code> file should now show those two new modules in addition to firebase-functions and firebase-admin. </p><h2>3. Test web deployment</h2><p>Make sure you can deploy web content by running this deploy command: </p> <pre class="prettyprint">$ firebase deploy --only hosting<br /></pre><p>When this finishes, you'll be given the public URL to your new web site, which will look something like this: </p> <pre class="prettyprint">✔ Deploy complete!<br /><br />Project Console:<br />Hosting URL:<br /></pre><p>where <code>your-project</code> is the unique id that was given to your project at the time it was created in the console. If you paste the Hosting URL into your browser, you should see a page that says "Firebase Hosting Setup Complete". </p><h2>4. Get a reCAPTCHA API Key</h2><p>reCAPTCHA requires a couple API keys for operation, one for the web client and one for the server API. You can get those from the <a href="">reCAPTCHA admin panel</a>, so navigate there. Create a new site and give it a name. Select "reCAPTCHA V2". For domains, put the full hostname of your Firebase Hosting site name (e.g. "your-project.firebaseapp.com"). 
</p> <div class="blogimg"><a href="" imageanchor="1" ><img border="0" src="" /></a></div><p>After you register, you'll be given a <strong>Site key</strong> and a <strong>Secret key</strong>. The Site key will be used in your frontend HTML, and the Secret key will be used in your backend hosted by Cloud Functions. </p><h2>5. Add a page with a reCAPTCHA</h2><p>Now we'll add a new HTML page to display the reCAPTCHA. In the <code>public</code> directory in your project, add a new HTML file called <code>recaptcha.html</code> to display the reCAPTCHA. Simply copy and paste the following content directly into that new file: </p> <pre class="prettyprint"><html><br /> <head><br /> <title>Firebase + reCAPTCHA</title><br /> <script src="" async defer></script><br /> <script type="text/javascript"><br /> function dataCallback(response) {<br /> console.log("dataCallback", response)<br /> window.location.href = "/checkRecaptcha?response=" + encodeURIComponent(response)<br /> }<br /> function dataExpiredCallback() {<br /> console.log("dataExpiredCallback")<br /> }<br /> </script><br /> </head><br /> <body><br /> <div class="g-recaptcha"<br /> data-sitekey="PASTE_YOUR_SITE_KEY_HERE"<br /> data-callback="dataCallback"<br /> data-expired-callback="dataExpiredCallback"><br /> </div><br /> </body><br /></html><br /></pre><p>Notice in the body there is a div with the class "g-recaptcha". <strong>The first thing you should do here</strong> is copy your reCAPTCHA site key into the div's data-sitekey attribute value. This div will get automatically transformed into a reCAPTCHA UI after the first script at the top is loaded. You can read more about that <a href="">here in the docs</a>. </p><p>You can see it right away if you <code>firebase deploy</code> again, then navigate to /recaptcha.html under your Hosting URL. Don't bother dealing with the reCAPTCHA yet, because we still need some backend code to complete the verification! </p><p>The JavaScript code in this page defines two functions, <code>dataCallback</code> and <code>dataExpiredCallback</code>. These are referenced in the div, and provide callbacks for the reCAPTCHA to tell you when the reCAPTCHA has been satisfied, or if the user took too long to proceed.
</p><p>The important thing to note in <code>dataCallback</code> is that it redirects the browser to another URL in the site with the path <code>/checkRecaptcha</code>, and passes it a parameter named <code>response</code>. This response string is generated by reCAPTCHA and looks like a random collection of characters. </p><p>The path <code>/checkRecaptcha</code> in your web site obviously doesn't exist yet, so we need to create a Cloud Function to validate the response string it's going to receive. </p><h2>6. Create a Cloud Function to verify the reCAPTCHA response</h2><p>In the functions directory in your project, edit the existing index.js file. This has some sample code, but you can delete it. In its place, paste the following JavaScript code: </p> <pre class="prettyprint">const functions = require('firebase-functions')<br />const rp = require('request-promise')<br /><br />exports.checkRecaptcha = functions.https.onRequest((req, res) => {<br /> const response = req.query.response<br /> console.log("recaptcha response", response)<br /> rp({<br /> uri: '',<br /> method: 'POST',<br /> formData: {<br /> secret: 'PASTE_YOUR_SECRET_CODE_HERE',<br /> response: response<br /> },<br /> json: true<br /> }).then(result => {<br /> console.log("recaptcha result", result)<br /> if (result.success) {<br /> res.send("You're good to go, human.")<br /> }<br /> else {<br /> res.send("Recaptcha verification failed. Are you a robot?")<br /> }<br /> }).catch(reason => {<br /> console.log("Recaptcha request failure", reason)<br /> res.send("Recaptcha request failed.")<br /> })<br />})<br /></pre><p><strong>The first thing you should do here</strong> is paste your reCAPTCHA secret key from the registration site in place of "PASTE_YOUR_SECRET_CODE_HERE". </p><p><em>(Astute readers may note that the reCAPTCHA API endpoint host is "recaptcha.google.com", while the docs say "". This is OK!
You have to use recaptcha.google.com as shown in order to make the call on the Spark plan, because that host has been whitelisted for outgoing traffic from Cloud Functions.)</em></p><p>This code defines an HTTPS function that, when triggered, will make another HTTPS request (using the request-promise module) to the reCAPTCHA API in order to <a href="">verify the response</a> that was received in the query string. Notice that there are three cases with three different responses to the client. Either: </p><ol><li>The reCAPTCHA verifies successfully (the user is human) <li>The reCAPTCHA fails (could be a robot) <li>The API call fails altogether</li></ol><p>It's important to send a response to the client in <em>all cases</em>, otherwise the function will time out with an error message in the Firebase console log. </p><p>To deploy this new function (and the web content at the same time) run the following command: </p> <pre class="prettyprint">$ firebase deploy<br /></pre><p>You'll notice in the output that the function is assigned its own URL, which looks something like this: </p> <pre class="prettyprint"><br /></pre><p>This is clearly a different host than the one with your web content. However, what we really want instead is for the function to be referenced through your web host at a URL that looks like this: </p> <pre class="prettyprint"><br /></pre><p>This makes the function look like it's part of your web site. With Firebase Hosting and Cloud Functions, this can be done! </p><h2>7. 
Add rewrites to map a hosting URL to a Cloud Function</h2><p>Edit the file <code>firebase.json</code> in the project root directory and paste the following JSON configuration as its contents: </p> <pre class="prettyprint">{<br /> "hosting": {<br /> "public": "public",<br /> "rewrites": [<br /> {<br /> "source": "/checkRecaptcha",<br /> "function": "checkRecaptcha"<br /> }<br /> ]<br /> }<br />}<br /></pre><p>What you've done here is add a new section for rewrites, and you can read more about those in the docs. Specifically, this allows requests to the URL path <code>/checkRecaptcha</code> to invoke the function called <code>checkRecaptcha</code> that you pasted into your <code>functions/index.js</code> file. </p><p>Remember that the JavaScript code in <code>recaptcha.html</code> redirects to this path when the reCAPTCHA is satisfied by the user, so this effectively sends the user to the function after they complete the reCAPTCHA. </p><p>Now do one final deploy to send everything to Firebase: </p> <pre class="prettyprint">$ firebase deploy<br /></pre><h2>8. Test the reCAPTCHA!</h2><p>Navigate to <code>/recaptcha.html</code> under your hosting URL, then solve the reCAPTCHA. It may ask you to identify some cars or roads in a set of pictures. Once you've satisfied the reCAPTCHA with your humanity, the JavaScript in your HTML should redirect you to your function, which verifies with the server that you're indeed human, and you should see the message "You're good to go, human." </p><p>This example of how to use reCAPTCHA with Cloud Functions for Firebase is much simpler than what you'd probably do in your own web site. You have several options for how to send the reCAPTCHA response to your function, and you'd obviously want to provide something more useful than a message to the user. But this should get you started protecting your web content from abuse by bots.
</p> <style> .blogimg img { width: 100%; border: 0; margin: 0; padding: 10px 0 10px 0; } </style><img src="" height="1" width="1" alt=""/>Firebase Introducing our Experimental Linter Tool for Firebase on iOS<figure class="profile"> <div class="profile-picture"><img alt="Ibrahim Ulukaya" src="" style="margin:0;" /> </div><figcaption> <strong><div>Ibrahim Ulukaya</div></strong> <em>Developer Programs Engineer</em> </figcaption></figure> <p>Ever spend an hour wondering why Remote Config wasn't working, only to realize that you forgot to call <code>activateFetched</code>? Or didn't read a Dynamic Link because you forgot to implement the <code>application:continue:restorationHandler</code> method? Well, now there's a tool to help stop those mistakes before they happen! </p><p><a href="">SwiftLint</a> is a great open source tool that makes it easier for you to follow Swift style and conventions. It also helps with identifying possible errors early by highlighting problematic usage. You can run SwiftLint on your Xcode project to see all the style guide exceptions on the lines where they occur, and fix them quickly. I found it was a great help when <a href="">I migrated my code from Objective-C to Swift</a>. </p><p>In the spirit of making SwiftLint even more useful for Firebase developers, we've added some experimental new Firebase rules into SwiftLint. These rules will display warnings on common mistakes that might lead to errors when using the Firebase SDK. </p><h2>How to use</h2><p>Currently we are hosting the rules on <a href="">our fork in a firebase_rules branch</a>. Our <a href="">pre-release binary</a> holds the Firebase rules. You simply download the .pkg file and double click to install. You can also build the binary from the source.
</p><p>Since the rules are opt-in, you'll need to add a <b>.swiftlint.yml</b> file in the same folder as your Swift source files, containing the following text: </p><pre class="prettyprint">opt_in_rules:<br /> - firebase_config_activate<br /> - firebase_config_defaults<br /> - firebase_config_fetch<br /> - firebase_core<br /> - firebase_dynamiclinks_customschemeURL<br /> - firebase_dynamiclinks_schemeURL<br /> - firebase_dynamiclinks_universallink<br /> - firebase_invites<br /></pre><p>Then, just run SwiftLint on your project like normal. </p><p>If you're interested in how we put the rules together, you can read our post with all the <a href="">development details</a>. </p><p>We'd love for you to give it a try and send us feedback on Twitter with #FirebaseLinter. You can also ask questions on <a href="">StackOverflow</a> using the <code>firebase</code> and <code>swiftlint</code> tags together. </p> Happy coding! <style>.profile-picture img { left: 20px; } </style><img src="" height="1" width="1" alt=""/>Firebase Hamilton App Takes the Stage<figure class="profile"> <div class="profile-picture"><img alt="David DeRemer" src="" style="margin: 0; width: 130px;" /> </div><figcaption> <strong><div>David DeRemer</div></strong> </figcaption></figure> <em>Originally posted on the <a href="">Google Developers Blog</a> by David DeRemer (from Posse)</em> <p>Whether it's opening night for a Broadway musical or launch day for your app, both are thrilling times for everyone involved. Our agency, <a href="">Posse</a>, collaborated with Hamilton to design, build, and launch the official Hamilton app... in only three short months. </p><p>We decided to use <a href="">Firebase</a>, Google's mobile development platform, for the backend and infrastructure, while we used <a href="">Flutter</a>, a new UI toolkit for iOS and Android, for the front-end. In this post, we share how we did it.
</p> <div class="blogimg1"><a href="" imageanchor="1" ><img border="0" src="" /></a></div> <h2>The Cloud Where It Happens</h2><p>We love to spend time designing beautiful UIs, testing new interactions, and iterating with clients, and we don't want to be distracted by setting up and maintaining servers. To stay focused on the app and our users, we implemented a fully serverless architecture and made heavy use of Firebase. </p><div class="blogimg2"><a href="" imageanchor="1" ><img border="0" src="" /></a></div><p>A key feature of the app is the <em>ticket lottery</em>, which offers fans a chance to get tickets to the constantly sold-out Hamilton show. We used Cloud Functions for Firebase, and a data flow architecture we <a href="">learned about at Google I/O</a>, to coordinate the lottery workflow between the mobile app, custom business logic, and partner services. </p><p>For example, when someone enters the lottery, the app first writes data to specific nodes in Realtime Database and the database's security rules help to ensure that the data is valid. The <em>write</em> triggers a Cloud Function, which runs business logic and stores its result to a new node in the Realtime Database. The newly written result data is then pushed automatically to the app. </p><h2>What'd I miss?</h2><p>Because of Hamilton's intense fan following, we wanted to make sure that app users could get news the instant it was published. So we built a custom, web-based Content Management System (CMS) for the Hamilton team that used Firebase Realtime Database to store and retrieve data. The Realtime Database eliminated the need for a "pull to refresh" feature of the app. When new content is published via the CMS, the update is stored in Firebase Realtime Database and every app user <em>automatically</em> sees the update. No refresh, reload, or pull required!
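The write-triggers-function-writes-result flow described above can be sketched in a few lines. Everything here is hypothetical illustration, not the Hamilton app's code: the business-logic step is a pure function, and a database trigger would invoke it whenever an entry node is written.

```javascript
// Pure business-logic step: given a lottery entry and the set of winning
// entry ids, produce the result node that will be written back for the app.
function lotteryResult(entry, winningIds) {
  return {
    entryId: entry.id,
    won: winningIds.includes(entry.id),
    processedAt: Date.now(),
  };
}

// Wired up with Cloud Functions for Firebase, roughly (paths hypothetical):
// exports.onEntry = functions.database.ref('/lottery/entries/{id}')
//   .onWrite(event => event.data.ref.root
//     .child(`/lottery/results/${event.params.id}`)
//     .set(lotteryResult(event.data.val(), winners)));
```

Because the client is listening on the results node, the newly written result is pushed to the app automatically, completing the loop with no polling.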
</p><h2>Cloud Functions Left Us Satisfied</h2><p>Besides powering our lottery integration, Cloud Functions was also extremely valuable in the creation of user profiles, sending push notifications, and our #HamCam — a custom Hamilton selfie and photo-taking experience. Cloud Functions resized the images, saved them in Cloud Storage, and then updated the database. By taking care of the infrastructure work of storing and managing the photos, Firebase freed us up to focus on making the camera fun and full of Hamilton style. </p><h2>Developing UI? Don't Wait For It.</h2><p>With only three months to design and deliver the app, we knew we needed to iterate quickly on the UX and UI. Flutter's <em>hot reload</em> development cycle meant we could make a change in our UI code and, in about a second, see the change reflected on our simulators and phones. No rebuilding, recompiling, or multi-second pauses required! Even the state of the app was preserved between hot reloads, making it very fast for us to iterate on the UI with our designers. </p><p>We used Flutter's reactive UI framework to implement Hamilton's iconic brand with custom UI elements. Flutter's "everything is a widget" approach made it easy for us to compose custom UIs from a rich set of building blocks provided by the framework. And, because Flutter runs on both iOS and Android, we were able to spend our time creating beautiful designs instead of porting the UI. </p><p>The <a href="">FlutterFire</a> project helped us access Firebase Analytics, Firebase Authentication, and Realtime Database from the app code. And because Flutter is open source, and easy to extend, we even built a custom <a href="">router</a> library that helped us organize the app's UI code. </p><h2>What comes next?</h2><p>We enjoyed building the Hamilton app (find it on the <a href="">Play Store</a> or the <a href="">App Store</a>) in a way that allowed us to focus on our users and experiment with new app ideas and experiences.
And based on our experience, we'd happily recommend serverless architectures with Firebase and customized UI designs with Flutter as powerful ways for you to save time building your app. </p><p>For us, we already have plans for how to continue developing the Hamilton app in new ways, and we can't wait to release those soon! </p><p>If you want to learn more about Firebase or Flutter, we recommend the <a href="">Firebase docs</a>, the <a href="">Firebase channel on YouTube</a>, and the <a href="">Flutter website</a>. </p> <style> .blogimg1 img { width: 50%; display: block; margin: auto; border: 0; padding: 20px 0 20px 0; } .blogimg2 img { width: 100%; margin: 0; border: 0; padding: 20px 0 20px 0; } </style><img src="" height="1" width="1" alt=""/>Firebase Making great mobile games with Firebase<img itemprop="image" style="display:none" src=""> <figure class="profile"> <div class="profile-picture"><img alt="Darin Hilton" src="" style="margin: 0;" /> </div><figcaption> <strong><div> Darin Hilton</div></strong> <em>Art Director</em> </figcaption></figure> So much goes into building and maintaining a mobile game. Let's say you want to ship it with a level builder for sharing content with other players and, looking forward, you want to roll out new content and unlockables linked with player behavior. Of course, you also need players to be able to easily sign into your soon-to-be hit game. <p>With a DIY approach, you'd be faced with having to build user management, data storage, server-side logic, and more. This will take a lot of your time, and importantly, it would take critical resources away from what you really want to do: build that amazing new mobile game! </p><p>Our Firebase SDKs for Unity and C++ provide you with the tools you need to add these features and more to your game with ease. Plus, to help you better understand how Firebase can help you build your next chart-topper, we've built a sample game in Unity: MechaHamster.
Check it out on <a href="">Google Play</a> or download the sample project at <a href="">Github</a> to see how easy it is to integrate Firebase into your game. </p><div class="parent"> <a href="" imageanchor="1" ><img border="0" src="" data-</a></div> <p>Before you dive into the sample code for MechaHamster, here's a rundown of the Firebase products that can help your game be successful. </p><h2>Analytics</h2><p>One of the best tools you have to maintain a high-performing game is your analytics. With <a href="">Google Analytics for Firebase</a>, you can see where your players might be struggling and make adjustments as needed. Analytics also integrates with Adwords and other major ad networks to maximize your campaign performance. If you monetize your game using AdMob, <a href="">you can link your two accounts</a> and see the lifetime value (LTV) of your players, from in-game purchases and AdMob, right from your Analytics console. And with Streamview, you can see how players are interacting with your game in realtime. </p><h2>Test Lab for Android - Game Loop Test</h2><p>Before releasing updates to your game, you'll want to make sure it works correctly. However, manual testing can be time consuming when faced with a large variety of target devices. To help solve this, we recently launched <a href="">Firebase Test Lab for Android Game Loop Test</a> at Google I/O. If you add a demo mode to your game, Test Lab will automatically verify your game is working on a wide range of devices. You can read more in <a href="">our deep dive blog post here</a>. </p><h2>Authentication </h2><p>Another thing you'll want to be sure to take care of before launch is easy sign-in, so your users can start playing as quickly as possible. <a href="">Firebase Authentication</a>can help by handling all sign-in and authentication, from simple email + password logins to support for common identity providers like Google, Facebook, Twitter, and Github. 
Just announced recently at I/O, Firebase also now <a href="">supports phone number authentication</a>. And Firebase Authentication shares state cross-device, so your users can pick up where they left off, no matter what platforms they're using. </p><h2>Remote Config</h2><p>As more players start using your game, you realize that there are few spots that are frustrating for your audience. You may even see churn rates start to rise, so you decide that you need to push some adjustments. With <a href="">Firebase Remote Config</a>, you can change values in the console and push them out to players. Some players having trouble navigating levels? You can adjust the difficulty and update remotely. Remote Config can even benefit your development cycle; team members can tweak and test parameters without having to make new builds. </p><h2>Realtime Database</h2><p>Now that you have a robust player community, you're probably starting to see a bunch of great player-built levels. With <a href="">Firebase Realtime Database</a>, you can store player data and sync it in real-time, meaning that the level builder you've built can store and share data easily with other players. You don't need your own server and it's optimized for offline use. Plus, Realtime Database integrates with Firebase Auth for secure access to user specific data. </p><h2>Cloud Messaging & Dynamic Links</h2><p>A few months go by and your game is thriving, with high engagement and an active community. You're ready to release your next wave of new content, but how can you efficiently get the word out to your users? <a href="">Firebase Cloud Messaging</a> lets you target messages to player segments, without any coding required. And <a href="">Firebase Dynamic Links</a> allow your users to share this new content — or an invitation to your game — with other players. 
Dynamic Links survive the app install process, so a new player can install your app and then dive right into the piece of content that was shared with him or her. </p><p>At Firebase, our mission is to help mobile developers build better apps and grow successful businesses. When it comes to games, that means taking care of the boring stuff, so you can focus on what matters — making a great game. Our mobile SDKs for C++ and Unity are available now at <a href="">firebase.google.com/games</a> and don't forget to <a href="">check out our sample game project, MechaHamster, on GitHub</a>. </p> <style> .parent { display: flex; width: 100%; justify-content: space-around; align-items: center; flex-wrap: wrap; } .child { max-width: 33%; display: flex; justify-content: center; align-items: center; padding: 5px; } </style><img src="" height="1" width="1" alt=""/>Firebase Updates to Apps Using Google Play services<figure class="profile"><div class="profile-picture"><img alt="Doug Stevenson" src="" style="margin-left: 0;" /> </div><figcaption> <strong><div>Doug Stevenson</div></strong> <em>Developer Advocate</em> </figcaption></figure> There are a couple recent changes to the way you build your Android apps with Google Play services (and Firebase SDKs, which are distributed as part of Play services). Here's what you need to know to stay up to date. <h2>1. Play services (and Firebase) dependencies are now available via maven.google.com</h2><p>Until recently, developers were required to update their Android tools to make use of new versions of the local maven repository that contains Play services compile dependencies. Only after updating were the Android build tools able to locate them. Now, the dependencies are available directly from maven.google.com. 
You can update your app's Gradle build scripts to use this repository by simply configuring the build like this: </p> <pre class="prettyprint">allprojects {<br /> repositories {<br /> jcenter()<br /> maven { url '' }<br /> }<br />}<br /></pre><p>Note the new Google maven repository. This is where dependencies are now hosted. Using Gradle 4.0 and later, you can simply specify <code>google()</code> as a shortcut instead. Once configured like this, Gradle will be able to locate, download, and cache the correct Play services dependencies without requiring an update to the Android build tools. Play services SDKs going back to version 3.1.36 will be available in this repo. </p> <p>You can read more about Google's maven repo <a href="">here</a>. </p><h2>2. Starting with Play services dependencies version 11.2.0, your app's <code>compileSdkVersion</code> must be at least 26</h2><p>When you upgrade your app's Play services dependencies to 11.2.0 or later, your app's <code>build.gradle</code> must also be updated to specify a <code>compileSdkVersion</code> of at least 26 (Android O). This will not change the way your app runs. You will not be required to update <code>targetSdkVersion</code>. If you do update <code>compileSdkVersion</code> to 26, you may receive an error in your build with the following message referring to the Android support library: </p><p><em>This support library should not use a different version (25) than the compileSdkVersion (26).</em></p><p>This error can be resolved by upgrading your support library dependencies to at least version 26.0.0. Generally speaking, the <code>compileSdkVersion</code> of your app should always match the <em>major</em> version number of your Android support library dependencies. In this case, you'll need to make them both 26.
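</p><p>The major-version rule above is mechanical enough to restate in code. A toy sketch (plain JavaScript, purely illustrative; this is not part of the Android tooling):</p>

```javascript
// Returns true when the support-library dependency's major version
// matches compileSdkVersion (e.g. 26 vs "26.0.0").
function supportLibMatchesCompileSdk(compileSdkVersion, supportLibVersion) {
  const major = parseInt(supportLibVersion.split('.')[0], 10);
  return major === compileSdkVersion;
}

const ok = supportLibMatchesCompileSdk(26, '26.0.0');  // versions agree
const bad = supportLibMatchesCompileSdk(26, '25.3.1'); // mismatch: upgrade the library
```

<p>The build tooling performs this check for you; the sketch just restates the rule.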
</p><img src="" height="1" width="1" alt=""/>Firebase Dev Summit 2017 in Amsterdam!<figure class="profile"> <div class="profile-picture"><img alt="Frank van Puffelen" src="" /> </div><figcaption> <strong><div>Frank van Puffelen</div></strong> <em>Engineer</em> </figcaption></figure> We're excited to announce that registration for the Firebase Dev Summit is opening today! <p>Please join us in Amsterdam on October 31st for a day of talks, codelabs, and office hours, as well as (of course) an after-party. </p><div class=blogimg><a href="" imageanchor="1"><img border="0" src="" /></a></div><div class=blogcptn>I had a blast at last year's Dev Summit in Berlin </div><p>Three months ago, thousands of developers joined us at Google I/O to hear about improvements to the Firebase platform, like <a href="">Performance Monitoring</a>, <a href="">Phone Authentication</a>, and our newly <a href="">open sourced SDKs</a>. </p><p>Registration is now open, but keep in mind that space will be filled on a first-come, first-served basis, so make sure to <a href="">request an invitation today</a>. </p><h2>What is the Firebase Dev Summit?</h2><p>The Firebase Dev Summit is a full-day event for app developers that will focus on solving core infrastructure and growth challenges in app development. We'll have deep dive sessions, as well as introductory overviews, so all levels of Firebase familiarity are welcome! </p><p>We also want you to get your hands dirty with Firebase. You'll get a chance to put your new knowledge into practice with instructor-led codelabs, as well as ask our team any questions you have at our #AskFirebase lounge. </p><p>We're looking forward to meeting you in person. Dank je en tot gauw!
</p> <style> .blogimg img { width: 100%; padding: 10px 0 5px 0; margin: 0; border: 0; } .blogcptn { font-size: 85%; font-style: italic; text-align: center; padding: 0 0 10px 0; margin: 0; border: 0; } </style><img src="" height="1" width="1" alt=""/>Firebase Performance Monitoring for Android Tip #1: Automatic Traces for All Activities<figure class="profile"><div class="profile-picture"><img alt="Doug Stevenson" src="" style="margin-left: 0;" /> </div><figcaption> <strong><div>Doug Stevenson</div></strong> <em>Developer Advocate</em> </figcaption></figure> <p>If you haven't tried <a href="">Firebase Performance Monitoring</a> yet, give it a look: many Firebase developers have found it to be a helpful way to get a sense of some of the performance characteristics of their iOS or Android app, without writing many extra lines of code. To get more detailed information beyond what's collected <a href="">automatically</a>, you'll eventually have to write some <a href="">custom traces and counters</a>. Traces are a report of performance data within a distinct period of time in your app, and counters let you measure performance-related events during a trace. In today's perf tip, I'll propose a way to add potentially <em>many</em> more traces to your Android app without writing very much code at all. </p><p>Every Activity in your app marks a distinct period of time on screen, so why not cover <em>all</em> of them with their own trace? </p><p>Android gives you a way to listen in on the lifecycle of every single Activity in your app. The listeners are implementations of the interface <code><a href="">ActivityLifecycleCallbacks</a></code>, and you can register one with the <code><a href="">Application.registerActivityLifecycleCallbacks()</a></code> method.
For measuring performance, I suggest creating a trace that measures the time between each Activity's <code>onStart()</code> and <code>onStop()</code> callbacks. To manage those traces, I'll start with a singleton class that implements the lifecycle callbacks interface: <pre class="prettyprint">public class PerfLifecycleCallbacks<br /> implements Application.ActivityLifecycleCallbacks {<br /><br /> private static final PerfLifecycleCallbacks instance =<br /> new PerfLifecycleCallbacks();<br /><br /> private PerfLifecycleCallbacks() {}<br /> public static PerfLifecycleCallbacks getInstance() {<br /> return instance;<br /> }<br />}<br /></pre><p>Then, inside that class, I'll add some members that manage custom traces for each Activity: </p> <pre class="prettyprint"> private final HashMap<Activity, Trace> traces = new HashMap<>();<br /><br /> @Override<br /> public void onActivityStarted(Activity activity) {<br /> String name = activity.getClass().getSimpleName();<br /> Trace trace = FirebasePerformance.startTrace(name);<br /> traces.put(activity, trace);<br /> }<br /><br /> @Override<br /> public void onActivityStopped(Activity activity) {<br /> Trace trace = traces.remove(activity);<br /> if (trace != null) {<br /> trace.stop();<br /> }<br /> }<br /><br /> // ...empty implementations of other lifecycle methods...<br /></pre><p>I'll add one more method to it that will return the trace of a given Activity object. That can be used in any activity to get a hold of the current trace so that counters can be added to it: </p> <pre class="prettyprint"> @Nullable<br /> public Trace getTrace(Activity activity) {<br /> return traces.get(activity);<br /> }<br /></pre><p>This class should be registered before any Activity starts. A ContentProvider is a good place to do that. If you're not familiar with how that works, you can read about how <a href="">Firebase uses a ContentProvider to initialize</a>.
</p> <pre class="prettyprint">public class PerfInitContentProvider extends ContentProvider {<br /> @Override<br /> public boolean onCreate() {<br /> Context context = getContext();<br /> if (context != null) {<br /> Application app = (Application) context.getApplicationContext();<br /> app.registerActivityLifecycleCallbacks(<br /> PerfLifecycleCallbacks.getInstance());<br /> }<br /> return true;<br /> }<br />}<br /></pre><p>Don't forget to add the ContentProvider to your app's manifest! This will ensure that it gets created before any Activity in your app. </p><p>Once this ContentProvider is in place, your app will automatically create traces for all your activities. If you want to add counters to one of them, simply use the <code>getTrace()</code> method from the PerfLifecycleCallbacks singleton using the current Activity object. For example: </p> <pre class="prettyprint">private Trace trace;<br /><br />@Override<br />protected void onCreate(Bundle savedInstanceState) {<br /> super.onCreate(savedInstanceState);<br /> trace = PerfLifecycleCallbacks.getInstance().getTrace(this);<br /> // use the trace to tally counters...<br />}<br /></pre><p>Follow <a href="">Firebase on Twitter</a> to get more Firebase Performance Monitoring tips. </p><img src="" height="1" width="1" alt=""/>Firebase
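The manifest registration mentioned above might look roughly like this (a sketch: the class name matches the example, but the authorities value is a placeholder you would choose yourself):

```xml
<!-- Registers the ContentProvider so it is created before any Activity.
     The android:authorities value is a made-up placeholder. -->
<provider
    android:name=".PerfInitContentProvider"
    android:authorities="${applicationId}.perfinitprovider"
    android:exported="false" />
```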
|
http://feeds.feedburner.com/FirebaseBlog
|
CC-MAIN-2017-47
|
refinedweb
| 19,747
| 50.57
|
package org.jboss.test.aop.basic;

/**
 * @author <a href="mailto:bill@jboss.org">Bill Burke</a>
 * @version $Revision: 37406 $
 */
public class Address
{
   public Address()
   {
   }

   public Address(String street, String city, String state)
   {
      this.street = street;
      this.city = city;
      this.state = state;
   }

   private String street;
   private String city;
   private String state;

   public String getStreet()
   {
      return street;
   }

   public void setStreet(String street)
   {
      this.street = street;
   }

   public String getCity()
   {
      return city;
   }

   public void setCity(String city)
   {
      this.city = city;
   }

   public String getState()
   {
      return state;
   }

   public void setState(String state)
   {
      this.state = state;
   }
}
|
http://kickjava.com/src/org/jboss/test/aop/basic/Address.java.htm
|
CC-MAIN-2017-04
|
refinedweb
| 157
| 75.2
|
Bug Description
I am REALLY impressed with the winner of the 0.47 About screen contest: http://
I ran this image through the scour script (http://
$ ./scour.py -i fulltests/
scour 0.14
File: fulltests/
Time taken: 10.1166666667s
Number of elements removed: 108
Number of attributes removed: 328
Number of unreferenced id attributes removed: 363
Number of style properties fixed: 450
Number of raster images embedded inline: 0
Number of path segments reduced/removed: 186
Number of bytes saved in path data: 9574
Number of bytes saved in colors: 3319
Number of points removed from polygons: 0
Original file size: 214145 bytes; new file size: 109515 bytes (51.14%)
NOTE that scour automatically removes inkscape elements and attributes in the inkscape/sodipodi namespaces. I don't know if this information is important in the About image for the editor. I plan to add an option to keep these elements/attributes in a future version of scour. I'm open to suggestions on this.
I've just noticed a problem with scour removing that line so I've manually added it back.
I tried to load the file into /usr/local/
It does not show up as about picture.
It does load fine inkscape.
Could you please investigate any further? Maybe some whatever box attributes are missing?
Adib.
Try removing viewBox="0 0 750 625" and adding back width="750" height="625" in the root <svg>
I inserted width and height as described. Now the size comes down from 165,000 to 106,000 bytes.
It displays fine in the about dialogue.
However: on my system E7200 2GB RAM I do not see _much_ speed improvement. This might not the case on older machines. pls try yourself!
Adib.
I do not see any noticeable speed improvement either. Scour is more about decreasing loading times, removing unnecessary elements and reducing file size. Any rendering improvements would only be a bonus.
Sorry, closed this as won't fix. Part of the beauty of distributing the About screen with all of the inkscape specific info in there is that for curious users, they can learn more about how certain effects were achieved.
Hi ScislaC,
I have no problem with you closing this bug. FYI, after originally raising the bug I did add an option to keep the editor data (Inkscape, Illustrator, etc) in the SVG file. I have no idea if that would offer any marked improvement in loading time/file size (I actually doubt it), but I thought I should mention it.
Jeff
pls could you provide the scoured inkscape about screen. PLs attach the document here in the Launchpad tracker.
please note it must contain the copyright info as well as:
<!-- Created with Inkscape (http:// … org/) -->
Thx, Adib.
|
https://bugs.launchpad.net/inkscape/+bug/387967
|
CC-MAIN-2016-26
|
refinedweb
| 457
| 74.49
|
using System; public class Test { public static void Main() { string line = Console.ReadLine(); string[] splitLines = line.Split(' '); double withdrawal = double.Parse(splitLines[0]); double balance = double.Parse(splitLines[1]); if(withdrawal % 5 == 0 && balance >= withdrawal + 0.5) { double updatedBalance = balance - withdrawal - 0.5; Console.WriteLine(updatedBalance.ToString("F")); } else { Console.WriteLine(balance.ToString("F")); } } }
Here’s my solution to the ATM problem on CodeChef. The user has to input a withdrawal amount and a balance amount on the same line (Why? IDK). There is an ATM usage fee of $0.50 that must be added to the withdrawal. The withdrawal of course cannot exceed the balance and must be a multiple of 5. Simple as this problem may be, I actually felt like I learned a few things from doing it.
- Split() is a really nifty method for when you need to split strings.
- Format specifiers, in this case “F”, are really nifty for when you need to specify precision.
- Try not to get big-headed even when doing simple tasks. Next thing you know, you’ll have a logical brain fart, get frustrated, and slip into an existential crisis.
Here’s the link to the problem.
|
https://ccgivens.wordpress.com/2017/02/23/codechef-atm-problem/
|
CC-MAIN-2019-35
|
refinedweb
| 196
| 61.83
|
catch_logs class¶
(Shortest import:
from brian2.utils.logger import catch_logs)
- class brian2.utils.logger.catch_logs(log_level=30)[source]¶
A context manager for catching log messages. Use this for testing the messages that are logged. Defaults to catching warning/error messages and this is probably the only real use case for testing. Note that while this context manager is active, all log messages are suppressed. Using this context manager returns a list of (log level, name, message) tuples.
- Parameters
log_level : int or str, optional
The log level above which messages are caught.
Examples
>>> logger = get_logger('brian2.logtest')
>>> logger.warn('An uncaught warning')
WARNING brian2.logtest: An uncaught warning
>>> with catch_logs() as l:
...     logger.warn('a caught warning')
...     print('l contains: %s' % l)
...
l contains: [('WARNING', 'brian2.logtest', 'a caught warning')]
|
https://brian2.readthedocs.io/en/latest/reference/brian2.utils.logger.catch_logs.html
|
CC-MAIN-2022-40
|
refinedweb
| 129
| 55.5
|
Suspecting Errors in Simply Rails 2
Hi All
I am new to Rails but not new to programming.
I have been trying to learn Rails using the Simply Rails 2 Book that I purchased and downloaded.
I have spent hours repeating the steps in the first 6 chapters; at the point of almost giving up, I managed to finally get through it. I don't want anybody else to go through the same pain that I did to get this working.
PLEASE NOTE: If the following errors are in fact not errors and I have failed in following the instructions, please let me know.
My suspected errors are as follows
On page 159 the line
<% form_for @story do |f| %>
did not work for me
I got it working using
<% form_for :story do |f| %>
NOTE: I have replaced the @ symbol with a :
On page 164 I was asked to add
map.resources :stories
to my routes.rb file
I found that it made no difference if I added that or not (test done after I got everything working)
On page 166 I found that the following did not work for me
def create
@story = Story.new(params[:story])
@story.save
end
I had to add the code
@story = Story.new(params[:story])
@story.save
to the already existing "new" method (not create another method called "create")
If I am wrong I would like to know where I went wrong. If I am correct I hope I have helped other get over this hurdle.
Cheers
mistermac
|
http://www.sitepoint.com/forums/showthread.php?599974-Suspecting-Errors-in-Simply-Rails-2&p=4150587&viewfull=1
|
CC-MAIN-2017-26
|
refinedweb
| 277
| 72.5
|
Add and manage custom metafields to your products, collections
Easy to use
Advanced Metafields is extremely simple to use and learn.
Bulk Import/Export
Sync data by importing your Metafields in bulk.
Enhance your store
Create a custom look for your store and listings by using new fields as Metafields in your store themes.
About CRO ‑ Advanced Metafields
Metafields are the most crucial feature of any Shopify store. They allow you to add your custom flair to your store and make it stand out. Use Advanced Metafields by CRO to create, add, import, export, and manage new and existing Metafields to your store. Now optimize your store by adding valuable functionality and custom data to your products, collections, blogs, orders, customers, and even to your shop. Advanced Metafields works excellently to expand your Shopify storefront and provide your customers with the best shopping experience.
Features
By choosing to use CRO Advanced Metafields, you get the following advantages:
- Add global settings for your Metafields such as a namespace, key, and nature, so you just have to enter the value for every new resource.
- Sync your existing Metafields or edit the Metafields used by other applications.
- Make changes to the Metafields after the creation.
- Export or import Metafields from other applications.
- Import and export the configuration of Metafields.
- Sort out the Metafields more easily.
- Add extra information about your resources.
- Delete Metafields you do not need.
- Create multiple and custom Metafields for every resource.
- Add images to products, blog posts, and collections, etc.
- Add videos to your resources.
- Include related products or articles.
- Create a custom homepage.
Fields You Can Add:
- Images (Single & Multiple)
- Videos
- HTML
- Link/URL
- Files
- Number
- Color
- Phone Number
- Strings
Exemplary Customer Service:
We know you take your business very seriously, and so do we. It is the reason we offer 24/7 customer support so that you can reach out at any time of the day. We’d love to answer all of your queries.
Media gallery
Support
- Developer website
- ahmedmobindev@gmail.com
Pricing 14-day free trial
Basic Plan
$18/month
* All charges are billed in USD.
** Recurring charges, including monthly or usage-based charges, are billed every 30 days.
|
https://apps.shopify.com/cro-advanced-metafields?surface_detail=store-management&surface_inter_position=18&surface_intra_position=20&surface_type=category
|
CC-MAIN-2021-43
|
refinedweb
| 386
| 66.74
|
single-market-robot-simulator
A stand alone nodejs app and software module for creating numerical experiments with robots trading in a single market.
The induced supply and demand is configurable, as are the types and speeds of trading robots populating the market.
This code can run either in a browser or on NodeJS and would normally be a "middle" portion of a code stack. Visualizations and friendly user-interfaces are the responsibility of other code, or you, the user.
Programmer's Documentation on ESDoc
The ESDoc site for single-market-robot-simulator contains documentation prepared from source code of this module.
Use of Modern JavaScript -- Babel compiler
The source code uses ES6 JavaScript syntax and may need to be compiled with the Facebook-sponsored open source Babel compiler to be compatible with JavaScript implementations in browsers or nodejs. The source code is in ./src and the Babel-compiled version in ./build. The babel tools are linked as package.json devDependencies. This is primarily a concern for other programmers and does not affect stand-alone usage.
Installation
installation not necessary -- use Docker
No installation is necessary if you have Docker (highly recommended for Windows and Mac usage). Skip to the "Usage" section.
as stand alone JavaScript software
To run as a nodejs command-line program, clone this repository and run
npm install -D or
npm i -D from the cloned directory to install all of the dependencies, including the testing and development dependencies
-D:
git clone
cd ./single-market-robot-simulator
npm install -D
as a library in another open source npm JavaScript program
If, instead, you want to use it as a library in another module to be released on npm, simply use
npm i -S as usual:
npm i single-market-robot-simulator -S
as a library in a JavaScript web app
To use this as part of a web site, you will probably want to look at something like browserify, jspm, or webpack to server as a wrapper and help with bundling and integration.
To use this as a library on the browser with
jspm, you should set an override option on install forcing dependency
fs to
@empty.
This was done in the robot-trading-webapp example prototype web app that uses a very early version of this code (1.0.0) from May, 2017. The "robot-trading-webapp" prototype is no longer under active development and does not receive updates or bug fixes. You may still try it but I do not recommend it for producing new research data.
It can also be used with
webpack. I do not recall if any
special settings are required.
Paid App Under Development
An affordable paid web app is in development that is much nicer, includes visualization and an editor, has time-saving features, and integrates with Google Cloud and Google Drive.
Configuration
Configuration is a matter of preparing a
sim.json file BEFORE usage.
Configuration in the stand alone app occurs in a .json file called
config.json or
sim.json.
config.json is currently read by
main() by default in stand-alone app mode but this may change to
sim.json in v6.0.0 to better agree with other contexts ([2], and the Docker stand-alones) where the file
sim.json is used,
When used as a software module, the configuration object
config read from the configuration file or other location is passed to the constructor
new Simulation(config).
A partial (but still valid) machine and human readable format for
config.json is given in
configSchema.json as a JSON Schema.
A more human-readable version for most of the allowed fields can be found in the programmer's documentation for the public constructor config params for
Simulation.
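For orientation, a minimal sim.json might look like the sketch below. The field names follow this README's discussion and configSchema.json, but the agent type names and the periods field are illustrative assumptions; check the schema before relying on them:

```json
{
  "numberOfBuyers": 3,
  "numberOfSellers": 3,
  "buyerValues": [100, 95, 90, 85, 80],
  "sellerCosts": [10, 20, 30, 40, 50],
  "buyerAgentType": ["ZIAgent"],
  "sellerAgentType": ["ZIAgent"],
  "periods": 10
}
```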
Configurable supply and demand
The values and costs to be distributed among the trading robots are configured in the properties
buyerValues and
sellerCosts, each an array that is distributed round-robin style to the buyer robots and seller robots respectively. Each of these values and costs will be distributed exactly once at the beginning of each period of the market.
To be clear, if the
numberOfBuyers exceeds the length of
buyerValues, then some buyers will not receive a unit value. Those buyers will exist but do nothing. If the length of
buyerValues exceeds the
numberOfBuyers then some buyers will receive more than one unit value, which is OK and even expected. By "round-robin" I mean that an element
j of
buyerValues will be assigned to buyer
j mod numberOfBuyers . This form of specification is not convenient for every imaginable use, but it is convenient for setting a particular aggregate supply and demand and keeping it constant while tinkering with the number of buyers, sellers or other parameters.
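The round-robin rule just described (element j of buyerValues goes to buyer j mod numberOfBuyers) can be sketched in a few lines of JavaScript. This is an illustration, not the simulator's internal code:

```javascript
// Distribute unit values round-robin: element j goes to agent j % n.
function assignRoundRobin(values, numberOfAgents) {
  const perAgent = Array.from({ length: numberOfAgents }, () => []);
  values.forEach((v, j) => perAgent[j % numberOfAgents].push(v));
  return perAgent;
}

// 5 values among 3 buyers: buyers 0 and 1 each get two units, buyer 2 gets one.
const units = assignRoundRobin([100, 90, 80, 70, 60], 3);
```

Note that with fewer values than agents, some sub-arrays come back empty, matching the README's point that such buyers exist but do nothing.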
The descending sorted
buyerValues can be used to form a step function that is the aggregate demand function for the market.
Similarly the ascending sorted
sellerCosts can be used to form a step function that is the aggregate supply function for the market.
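One way to read those step functions off the arrays (again a sketch, not simulator code): quantity demanded at price p is the number of buyer values at or above p, and quantity supplied is the number of seller costs at or below p.

```javascript
// Step functions implied by the value/cost arrays.
function quantityDemanded(buyerValues, price) {
  return buyerValues.filter((v) => v >= price).length;
}

function quantitySupplied(sellerCosts, price) {
  return sellerCosts.filter((c) => c <= price).length;
}

const qd = quantityDemanded([100, 90, 80], 85); // values 100 and 90 qualify
const qs = quantitySupplied([10, 20, 30], 25);  // costs 10 and 20 qualify
```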
Robot Trading agents
The types of buyers and sellers are set in configration properties
buyerAgentType and
sellerAgentType and the buyers and sellers configured round-robin from these types.
For example, if there is only one type of buyer, then all buyers are that type. If there are two types of buyers configured then the buyers will alternate between these types, with half the buyers will be the first type, and half the buyers will be the second type if the number of buyers is even. If the number of buyers is odd then there will be an extra buyer of the first type. Perhaps a good practice is to have the buyerAgentType and sellerAgentType arrays have an entry for each buyer and seller, but for convenience in simple cases the round robin is used.
The module market-agents is imported to provide the robot trading agents.
The algorithms provided are intentionally fairly simple when compared to Neural Networks and some other approaches to machine learning. Several of the algorithms chosen have been the topics of papers in the economics literature.
Among the choices are:
- The Zero Intelligence trader of Gode and Sunder[1] that bids/asks randomly for non-zero profit.
- a Sniper similar to Kaplan's Sniper algorithm but explicitly liquidity-reducing. For now, I still call it "KaplanSniperAgent" because of its historical roots. See [2].
- a "truthful" or identity-function algorithm that always bids the unit value or asks the unit cost.
- a bisection algorithm that bids or asks halfway between the current bid/current ask if profitable to do so, and initially bids/asks an extreme value when no bid/ask is present
- a "oneupmanship" algorithm that increases the bid or decreases the ask by 1 unit if profitable to do so
- others, and a base class for writing your own algorithm
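To give a flavor of the simplest of these, a Zero Intelligence bid or ask can be sketched as a uniform draw over the profitable price range. This is a toy illustration, not the market-agents implementation, and the price bounds are made-up parameters:

```python
import random

def zi_bid(unit_value, min_price=1):
    # buyer bids uniformly at random at or below its unit value,
    # so any trade at the bid is not a loss
    return random.randint(min_price, unit_value)

def zi_ask(unit_cost, max_price=1000):
    # seller asks uniformly at random at or above its unit cost
    return random.randint(unit_cost, max_price)

random.seed(1)
print(zi_bid(100), zi_ask(40))
```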
Usage
Stand Alone App
when run from Docker
It is possible to run the software on Docker without having a Linux system (otherwise recommended), and without installing nodejs and npm (otherwise required). Docker downloads a Linux container containing everything needed and runs it on any computer.
To run on Docker, you must first install Docker Desktop (Windows 10 Pro, Windows 10 for Education, Mac) or Docker community edition (Linux).
Create a work directory containing a
sim.json file with the simulation configuration. This follows the format given above
for
config.json only the filename is changed.
The most recent Docker container is for version 5.6.0. To run that, use this docker command:
docker run -it \ -v /path/to/your/work/directory:/work \ drpaulbrewer/single-market-robot-simulator:5.6.0
To run the simulator code as it existed for the research project [2] (version 4.3.0), use this Docker command:
docker run -it \ -v /path/to/your/work/directory:/work \ drpaulbrewer/single-market-robot-simulator:4.3.0
when installed from GitHub
If installed from github onto a suitable system (preferably Linux, though it may run on Windows 10 or Mac -- and with nodejs and npm previously installed) it can be used as a stand alone nodejs app.
node build/index.js from the installation directory will run the simulation, reading the
config.json file and outputting various log files.
You can name a file like
/my-files/research/project123/sim.json, but the simulator will read that file while still writing its market data output files into the current directory, not necessarily the directory where that
sim.json file is located. Instead, consider copying the
sim.json file to a new directory,
cd to that new directory, and run
node /path/to/single-market-robot-simulator/build/index.js sim.json
where you should replace
/path/to/ with the actual directory path where the simulator is installed.
Outputs
A number of .csv comma-separated-value files are produced containing the market data.
Output files include:
buyorders.csv,
sellorders.csv,
ohlc.csv,
trades.csv,
profits.csv, and
effalloc.csv.
These logs have header rows and are compatible with Excel and other spreadsheets and most analysis software.
There are no output progress messages unless
quiet: false is in the
sim.json properties. There is a file called
period that can be used as a progress indicator. It contains only a single number -- the current period number.
Usage as a software module
Depending on whether you are using ES6 or CJS modules, importing looks like this:
import * as SMRS from 'single-market-robot-simulator'; // ES6 const SMRS = require("single-market-robot-simulator"); // CJS
and returns an object
SMRS containing a constructor for the JavaScript class
Simulation and a few other miscellaneous items. Ideally, this code
will run either in the browser or on the server via nodejs without being modified for the specific environment ("isomorphic javascript").
On the browser, standard browser security policies require different procedures for writing out files. Therefore, the data logs cannot be immediately written out to .csv files (as with the stand alone app) but are maintained in memory for use with other systems, such as browser-based plotting software. It is the responsibility of other software (e.g.
single-market-robot-simulator-savezip) to write the logs to browser-side
.csv files or elsewhere and/or to provide for visualizations.
Simulations can be run in either synchronous or asynchronous mode. Asynchronous mode is useful for running on the browser so that the event loop and user interface do not freeze while waiting for simulation results.
Example source code for a web-based simulator based on
single-market-robot-simulator may be found at
and the resulting simulator web app is at
However, those are very early prototypes (v1, May 2017), are not actively updated, and should not be relied upon for new research. I have a paid version of this market simulator in development. You should also prefer the docker and stand-alone versions to the early web prototype.
Tests
npm test
from the local git-cloned and npm-installed copy of this repository will run the tests.
You may also be interested in the tests for
market-agents,
market-example-contingent or other dependencies, which are available from those modules' directories.
You can also click on the build or coverage badges to view public test reports.
License:
The software is available under an industry standard open source license.
[1] Allocative Efficiency of Markets with Zero-Intelligence Traders: Market as a Partial Substitute for Individual Rationality. Dhananjay K. Gode and Shyam Sunder, Journal of Political Economy, Vol. 101, No. 1 (Feb., 1993), pp. 119-137.
[2] This sniper robot was used for an academic research project and its history detailed in Appendix 1 of the resulting publication:
Paul Brewer and Anmol Ratan (2019), "Profitability, Efficiency, and Inequality in Double Auction Markets with Snipers." Accepted at Journal of Economic Behavior and Organization, forthcoming.
Before asking the author for help
I hope you enjoy the free software
and the thrill of researching and solving problems
I will appreciate a social "hello" from researchers, students, and others attempting to use the free version of this software.
But I also reserve the right to ignore email. Don't take it personally, or as a snub. 24-hr on-call unlimited free support is not included with this free software, or any free software for that matter.
I have written this section to help with that issue.
First, if you are a student, I wouldn't dream of taking your homework problem or class project problem away from you -- even if, in a moment of weakness or desperation the day before the deadline you were having trouble completing it at the last minute. You can do it! I believe in you! And, it is a learning experience.
Technology can be frustrating, and having a conversation about frustration that also involves lacking useful notes and being ill-prepared is often mutually frustrating and tends to be a waste of time. If that seems arrogant, imagine I am talking about myself.
I lack useful notes on what happens if the software is run on unsuitable machines. Or what happens when problems of unclear documentation or insufficient prior experience combine with other issues between the keyboard and the chair.
And I am ill-prepared to continue working for free on things I actually care about, and much less enthusiastic about becoming someone's private arbitrage gain. If this simulation software helps with your group's goals and is saving money by providing a head-start on research or teaching projects -- please consider becoming a financial sponsor.
I wrote above that I might lack notes or be ill-prepared.
Keep in mind that you might also lack useful notes or be ill-prepared (i.e. it doesn't work but you don't know why and you didn't write anything down about the error messages or exactly what you did; or your question is about how to construct a simulation without reading the documentation or studying any examples).
Questions
Before asking me a question, please try these things first:
- consider that your problem might be solved faster by
- asking a local computer-savvy colleague to sit down with you and review what is happening.
- explaining the question out loud to an unfamiliar (or even a fictional) person can help you solve your own problem. Also known as Rubber duck debugging.
- upgrading your computer or using a better or different computer. More cores, 8 GB or more ram, and an SSD are all a plus. The simulator software is single-threaded. But Docker on Windows or Mac installs its own Linux -- so on Docker you'll benefit from at least 2 cores. Typically a full-sized desktop has more heat dissipation and can be higher performance than a laptop or mini cube.
- optionally spending less than $50 on the paid version of this software when available at -- which will be used over the web (no installation), be compatible with the free Docker usage method above, has a web-based editor, can run in the cloud, and stores the results in your Google Drive.
- be sure you really have a short, solvable question
- open-ended discussions are not short, solvable
- not short if it takes several pages to ask or answer
- constructive criticism is ok but I'll be the judge of its constructive-ness. Keep it civil and remember that you haven't paid anything for this software, it was not a custom project for you, and my goals may have nothing to do with your specific needs.
- be prepared to answer: "What have you tried?"
- if suspecting a bug, prepare and test a short, complete, verifiable list of steps to reproduce it and include that with your question
- don't be a help vampire. While it seems natural to ask preliminary questions instead of "wasting time" reading, learning, or trying things yourself -- the strategy of pushing your preparatory work (reading, learning, trying things yourself) off on others is generally seen as counterproductive.
- others can often answer your general computer or programming question faster and better than I can. Post a public question to a popular, relevant forum. The sites below are popular and include peer-review of questions and answers. The same rules apply -- do your homework before asking:
- for Docker questions or general software usage questions, try
- for JavaScript programming questions, try
- for Economics questions, try
|
https://doc.esdoc.org/github.com/drpaulbrewer/single-market-robot-simulator/
|
CC-MAIN-2021-17
|
refinedweb
| 2,725
| 52.8
|
On 4 August 2017 at 10:31, Peter Billam <pjb@pjb.com.au> wrote:
> Daurnimator wrote:
>>
>> Looks like luaL_register should be replaced here with luaL_newlib
>> Examples should then be rewritten to do: local ao = require "ao"
>
> Thanks for that; this compiles just fine:
>
>     #if LUA_VERSION_NUM >= 502
>         luaL_newlib(L, ao);          /* 5.2 */
>     #else
>         luaL_register(L, "ao", ao);  /* 5.1 */
>     #endif
>
> Unfortunately, I still can't get the example script to work
> well :-(  It produces half a second of loud noise,
> then half a second of sine-wave with a glitch in the middle.
> And attempts to output more than one second segfault.
> I'll wrestle with it a bit...

I had a quick look into it and fixed it:
|
http://lua-users.org/lists/lua-l/2017-08/msg00027.html
|
CC-MAIN-2022-33
|
refinedweb
| 118
| 72.87
|
Here are a couple of keyrings patches I'd like to propose to aid the support
for Kerberos to cache tickets in keys:
(1) A large-capacity key type.
This is a general purpose key type that can hold up to 1MB of data. The
data is stored in a SHM file and so can be stored out to swap if need be.
I think it might be worth storing the data encrypted in the SHM file with
a randomly generated transient symmetric key that the kernel creates and
retains in memory for each key. This would then be lost when the power
went off, rendering the tickets irretrievable.
This is necessary because some Kerberos tickets, particularly those
returned by Windows Active Directory, for instance, can be huge, easily
64KB or more, due to additional information stored in the ticket (the
MS-PAC).
(2) Per-user_namespace registers of per-UID kerberos caches.
This allows the kerberos cache to be retained beyond the life of all of a
user's processes so that the user's cron jobs can still access it.
The cache returned is a keyring named "_krb.<uid>" that the possessor can
read, search, clear, invalidate, unlink from and add links to. SELinux
and co. get a say as to whether this call will succeed as the caller must
have LINK permission on the cache keyring.
Each uid's cache keyring is created when it is first accessed and is given a
timeout that is extended each time this function is called so that the
keyring goes away after a while. The timeout is configurable by sysctl
but defaults to 3 days.
Each user_namespace gets a lazily-created keyring that serves as the
register. The cache keyrings are added to it. This means that standard
key search and garbage collection facilities are available.
The user_namespace's register goes away when it does.
Note that these patches were constructed on top of my keyring capacity
expansion patches - but there shouldn't be much to change to apply them on top
of the vanilla kernel.
David
---
David Howells (2):
KEYS: Implement a big key type that can save to tmpfs
KEYS: Add per-user_namespace registers for persistent per-UID kerberos caches
include/keys/big_key-type.h | 27 ++++++
include/linux/key.h | 1
include/linux/user_namespace.h | 6 +
include/uapi/linux/keyctl.h | 1
kernel/user.c | 4 +
kernel/user_namespace.c | 2
security/keys/Kconfig | 23 +++++
security/keys/Makefile | 2
security/keys/big_key.c | 181 ++++++++++++++++++++++++++++++++++++++++
security/keys/compat.c | 3 +
security/keys/internal.h | 9 ++
security/keys/keyctl.c | 3 +
security/keys/krbcache.c | 132 +++++++++++++++++++++++++++++
security/keys/sysctl.c | 11 ++
14 files changed, 405 insertions(+)
create mode 100644 include/keys/big_key-type.h
create mode 100644 security/keys/big_key.c
create mode 100644 security/keys/krbcache.c
|
http://lwn.net/Articles/561820/
|
CC-MAIN-2013-48
|
refinedweb
| 497
| 59.3
|
Consolidation, integration, refactoring, and migration are some of today's popular data center catchwords. All of these words reflect some kind of renewal or replacement process--the old is either substantially modified or thrown in the garbage and replaced with the new. However, in many cases, we are often stuck with old equipment and software. We must continue to extract more services from aging infrastructure and still make reasonable claim to them being manageable.
Java Dynamic Management Kit (JDMK) code can help here. The following listing is an excerpt from a generated Java file, called RFC1213_MIBOidTable.java (available in the sample code, in the Resources section below). This file is generated with reference to a specified standard MIB and contains entries of type
SnmpOidRecord
Later, I'll look at ways in which JDMK can provide something of a management makeover for legacy devices. As we'll see, it's reasonably easy and inexpensive to produce entry-level management tools. Such tools may even help IT managers to gain a deeper understanding of the dynamics of their networks and the services that sit on top of them.
One other take-away for readers is the use of the adapter pattern as a means of accessing the JDMK API. This increases the level of abstraction in the way we use the standard APIs. Basically, you've got to concentrate on everything!
Let's assume Figure 1 is the hypothetical network for which you've become responsible.
Figure 1. An enterprise network:
Remember that a network is only ever as strong as its weakest link--this means that our network is vulnerable. It's the job of the network designer to try to balance service continuity against the cost of providing redundancy. In Figure 1, there are some weak points that might profit from a review! I'll focus on these by writing some JDMK code to help us see when problems have occurred and when problems might be just about to occur.
Related Reading
Head First Design Patterns
By Eric Freeman, Elisabeth Robson, Kathy Sierra, Bert Bates
An important requirement in any IT manager's job is identifying the weak points in the network. This involves a careful combination of talking to your users and your predecessor (if possible) and instigating data collection. Every network has its very own folklore! Certain network links may periodically become overloaded; one or two routers or switches may be a little flaky; a server may be past its sell-by date, etc.
A considerate predecessor might well pass on such vital information to you as you embark on your new job. Let's assume your predecessor is a kindly soul who wants to help you make an orderly transition into your new role. Further, let's assume that she tells you to "Watch out for Link 1--it tends to become congested, and the folks on Floor 1 get a little angsty." This is important insider know-how, and we'll put it to use in the Java code later on.
In many cases, networks are held together by a fragile combination of scripts and insider know-how. What I hope to show in this article is that it is pretty straightforward to produce some JDMK tools that will assist you in holding your own against the network you manage. There is, of course, no real substitute for a well-designed and well-maintained network, but even in this rare case, our Java tools might provide some assistance.
HP OpenView Network Node Manager (NNM) provides a widely used application that is found in both enterprise and service provider networks. It provides some useful features, including automatic discovery and mapping of network devices, receipt of notification messages, and the ability to add your own proprietary software. In short, NNM provides a GUI that allows you to see your network. If NNM is available to you, then it may prove invaluable in discovering and monitoring your network. If not, then don't despair!
The key to effective IT management is selective use of automated tools. If you have access to high-end application software, then use it. As we move into an era of autonomic computing, it will increasingly be the case that systems and software will execute some or all of their own management. Get aboard this bandwagon early by maximizing your use of software solutions in your IT management tasks!
Using JDMK, we can create software that both listens for events and pro-actively reads device status. In this article, I'll be focusing on the latter, just to illustrate the principles.
To begin with, I'll write a simple program that looks at a specific network link and tries to determine if it's prone to congestion. We do this by sampling and averaging some SNMP counters on an interface at one end of this link. These are standard objects that are maintained by the SNMP entity running on the device. Sometimes, the SNMP entity is not running by default--in this case, I'll assume the network manager (i.e., your predecessor!) has chosen to run SNMP on all of those devices where it is available. Let's now describe the simple requirements for the code.
We want to create some software that fulfils the following simple requirements:
An interface usually has an administrative state and an operational state. The administrative state is the one desired by the network manager; i.e., "I want this interface to be up." The operational state is the actual state of the interface. Try to think about the operational state as the network's response to the requested state. If the administrative state is up and the operational state is down, then we know there's a problem.
The interface type I'll be using is Ethernet, specifically 10Mbps (or 10,000,000bps). I'll be retrieving a snapshot of the count of incoming bits received at an interface at one end of Link 1 in Figure 1. This will give us an instantaneous picture of the inward traffic level at that interface. Then, we'll wait a bit and retrieve the same counter value. The difference between these two values gives us the required utilization value. Let's have a look at some source code now.
The Java class I use is called RequestData. It contains a main() method and makes use of the following JDMK resources (among others):
RequestData
main()
import com.sun.management.snmp.SnmpDefinitions;
import com.sun.management.snmp.SnmpOid;
import com.sun.management.snmp.SnmpVarBindList;
import com.sun.management.snmp.manager.SnmpPeer;
To begin with, I initialize the SNMP Manager API. This allows us to access the generated table mentioned in the introduction.
final SnmpOidTableSupport oidTable =
new RFC1213_MIBOidTable();
SnmpOid.setSnmpOidTable(oidTable);
Next, I create an SnmpPeer object. This represents the entity with which we will communicate. Note that this uses the port passed in as a command-line parameter.
SnmpPeer
final SnmpPeer agent =
new SnmpPeer(host, Integer.parseInt(port));
We must now create a communications session with the remote entity. This requires us to specify SNMP community strings. These data elements are then associated with the agent.
final SnmpParameters params =
new SnmpParameters("public", "private");
agent.setParams(params);
We're nearly there! We now have to build the session to manage the data request and then we're ready to create the data request list (or variable binding list).
final SnmpSession session =
new SnmpSession("SyncManager session");
session.setDefaultPeer(agent);
final SnmpVarBindList list =
new SnmpVarBindList(
"SyncManager varbind list");
The program is a single JDMK class that builds an SNMP request message. This message specifies four objects of interest, using the following code:
// A description of the host device
list.addVarBind("sysDescr.0");
// The operational state of interface 1
list.addVarBind("ifOperStatus.1");
// The number of incoming octets on interface 1
list.addVarBind("ifInOctets.1");
// The speed of interface 1
list.addVarBind("ifSpeed.1");
Our four required objects are packed into an SNMP getRequest message and sent to the receiving entity as follows:
getRequest
SnmpRequest request =
session.snmpGetRequest(null, list);
We now retrieve the same set of objects twice; the difference in time between the samples is found using this Java code:
// Calculate the time between messages
long oldTime = date1.getTime();
long newTime = new Date().getTime();
long elapsed = (newTime - oldTime) / MS_DIVIDEND;
println("Elapsed time in seconds " + elapsed);
In this section, we get the most recent time and subtract a time value recorded just before the first retrieval. This gives us a rough estimate of the elapsed time between the data samples.
When the returned data is displayed, we see the following major elements:
Value : 25625, Object ID : 1.3.6.1.2.1.2.2.1.5.1 (Syntax : Gauge32)
Value : 10000000
>> Press Enter to resend the request.
Elapsed time in seconds 16
Value : 26005, Object ID : 1.3.6.1.2.1.2.2.1.5.1 (Syntax : Gauge32)
Value : 10000000
The three bold data items above represent the two values of the ifInOctets object taken at an interval of 16 seconds. The selected interface (which supports a speed of 10,000,000bps) has received 25625 octets (or bytes) at the time T1 and 26005 octets at the time T2. To determine the incoming link utilization, we apply the following formula:
ifInOctets
T1
T2
Incoming Link % Utilization =
((T2 octets - T1 octets) * 8 * 100) /
(ifSpeed * Sample interval in seconds)
Plugging in the values above gives us a utilization of
(26005 - 25625) * 8 * 100/(10,000,000 * 16),
or
0.0019 percent.
Clearly, the interface is very lightly loaded on the incoming side! A similar measurement can be made for the outgoing direction (using the ifOutOctets object instead). Then, both values can be summed to determine the overall loading. Obviously, care is required in drawing any conclusions from the figures (they are instantaneous snapshots of data that can change rapidly), but they do provide some minimal level of understanding of the loading on the interface.
ifOutOctets
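The utilization formula itself is language-neutral, so it can be sanity-checked outside Java. A small Python check (the function name is mine) reproduces the figure from the sample run:

```python
def incoming_utilization(octets_t1, octets_t2, if_speed_bps, interval_s):
    """Percent utilization of the incoming side of an interface:
    ((T2 octets - T1 octets) * 8 * 100) / (ifSpeed * interval)."""
    return (octets_t2 - octets_t1) * 8 * 100.0 / (if_speed_bps * interval_s)

# values from the sample output above: 25625 and 26005 octets,
# 10 Mbps interface, 16 seconds between samples
u = incoming_utilization(25625, 26005, 10_000_000, 16)
print(round(u, 4))  # 0.0019
```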
Plying this program with diligence and observing loading trends over a period of a day might lead us to understand why the outgoing network manager made the comment concerning Link 1. In any case, it means that you are beginning to learn about the secrets that the network holds in store! Extending this approach to other regions of the network should help in acquiring a broader understanding again.
To run the example program, you'll need to install JDMK. Free evaluation copies can be downloaded from Sun Microsystems, though these copies expire after 90 days. So don't be too leisurely about running this code! Alternatively, if you win a couple of lotteries, you might be tempted to purchase JDMK.
In either case, just follow the instructions in the examples\current\Snmp\Manager\ReadMe file and the example should compile and run successfully. I used JDMK version 5.1. Also, there's detail and further Java examples in my book, Network Management, MIBs & MPLS: Principles,
Design & Implementation--no lottery win required!
I strongly encourage using the adapter pattern to hide the complexity of the JDMK API. Really, JDMK isn't complex per se, but it is proprietary. For this reason, it's important to not litter your application code with calls into such an API. The adapter provides a useful model for achieving this noble design aim.
The adapter serves to insulate the application code from the details of the JDMK (or other) technology. Your code then calls into the adapter, rather than directly using the JDMK interface. So, if you later decide to change from JDMK and use another technology, any required changes to your code will have been minimized ahead of time.
Further details on the adapter pattern and its applications can be
found in design pattern books, such as O'Reilly's Head First Design Patterns.
Supporting legacy systems and equipment is difficult and unforgiving, particularly as IT budgets and staffing levels are squeezed. However, nothing is too much of a challenge for a game Java developer! Using some simple concepts from network management and SNMP, it's possible to quickly create some quite powerful JDMK-based software tools. These tools can be used to keep an eye on troublesome corners of your network, while you get on with more interesting tasks. They might also help you troubleshoot in times of difficulty.
I've barely scratched the surface of what's possible with JDMK: you can employ notifications, create your own agents as well as managers, use browsers to access the management infrastructure, etc. However, perhaps more importantly, what we have seen on the one hand is the conceptual simplicity of network management, and on the other the potentially boundless complexity of running a network. Both endeavors must meet at a common boundary, and JDMK provides a fertile ground for this.
Stephen B. Morris
is an independent writer/consultant based in Ireland.
|
http://archive.oreilly.com/pub/a/onjava/2005/02/16/jdmk.html
|
CC-MAIN-2015-18
|
refinedweb
| 2,135
| 55.13
|
Memoizing expensive functions in python and saving results
Posted June 20, 2013 at 01:29 PM | categories: programming | tags: | View Comments
Sometimes a function is expensive (time-consuming) to run, and you would like to save all the results of the function having been run to avoid having to rerun them. This is called memoization. A wrinkle on this problem is to save the results in a file so that later you can come back to a function and not have to run simulations over again.
In python, a good way to do this is to "decorate" your function. This way, you write the function to do what you want, and then "decorate" it. The decoration wraps your function and in this case checks if the arguments you passed to the function are already stored in the cache. If so, it returns the result, if not it runs the function. The memoize decorator below was adapted from here.
from functools import wraps

def memoize(func):
    cache = {}
    @wraps(func)
    def wrap(*args):
        if args not in cache:
            print 'Running func'
            cache[args] = func(*args)
        else:
            print 'result in cache'
        return cache[args]
    return wrap

@memoize
def myfunc(a):
    return a**2

print myfunc(2)
print myfunc(2)
print myfunc(3)
print myfunc(2)
Running func
4
result in cache
4
Running func
9
result in cache
4
The example above shows the principle, but each time you run that script you start from scratch. If those were expensive calculations that would not be desirable. Let us now write out the cache to a file. We use a simple pickle file to store the results.
import os, pickle
from functools import wraps

def memoize(func):
    if os.path.exists('memoize.pkl'):
        print 'reading cache file'
        with open('memoize.pkl') as f:
            cache = pickle.load(f)
    else:
        cache = {}
    @wraps(func)
    def wrap(*args):
        if args not in cache:
            print 'Running func'
            cache[args] = func(*args)
            # update the cache file
            with open('memoize.pkl', 'wb') as f:
                pickle.dump(cache, f)
        else:
            print 'result in cache'
        return cache[args]
    return wrap

@memoize
def myfunc(a):
    return a**2

print myfunc(2)
print myfunc(2)
print myfunc(3)
print myfunc(2)
reading cache file
result in cache
4
result in cache
4
result in cache
9
result in cache
4
Now you can see if we run this script a few times, the results are read from the cache file.
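For readers on modern Python, here is a rough Python 3 adaptation of the same idea (note that pickle files must be opened in binary mode in both directions; the cache filename is arbitrary, and the progress prints are dropped):

```python
import os
import pickle
from functools import wraps

CACHE_FILE = 'memoize3.pkl'  # arbitrary filename for the persisted cache

def memoize(func):
    # load any previously saved results
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE, 'rb') as f:
            cache = pickle.load(f)
    else:
        cache = {}

    @wraps(func)
    def wrap(*args):
        if args not in cache:
            cache[args] = func(*args)
            # persist the updated cache after each new result
            with open(CACHE_FILE, 'wb') as f:
                pickle.dump(cache, f)
        return cache[args]
    return wrap

@memoize
def myfunc(a):
    return a ** 2

print(myfunc(2), myfunc(3))  # 4 9
```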
Copyright (C) 2013 by John Kitchin. See the License for information about copying.
|
http://kitchingroup.cheme.cmu.edu/blog/2013/06/20/Memoizing-expensive-functions-in-python-and-saving-results/
|
CC-MAIN-2020-05
|
refinedweb
| 418
| 69.11
|
Methods of Managing Controls
Control Creation
In the previous lesson, we saw that the easiest way to
create a control is by selecting it from the Toolbox and adding it to the form.
If for any reason you cannot visually add a control, you can programmatically
create it.
The classes used to manage controls of the .Net ensemble are
created in the System namespace. Inside of the System namespace,
there is a Windows namespace. Inside of the Windows namespace, there is
the Forms namespace. All Windows control available in the .Net
programming studio are created in the System.Windows.Forms namespace.
Every .Net control is based on a class. Every one of these
control classes has at least a default constructor. You can use this constructor
to dynamically create a control. The general syntax used is:
ClassName VariableName;
Here is an example:
System.Windows.Forms.Button btnSubmit;
You must use the new operator and call its default constructor to
initialize the control. After calling its constructor, you can initialize any of
its properties as necessary. Here is an example:
public class Form1 : System.Windows.Forms.Form
{
System.Windows.Forms.Button btnSubmit;
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.Container components = null;
public Form1()
{
btnSubmit = new System.Windows.Forms.Button();
btnSubmit.Location = new System.Drawing.Point(88, 32);
btnSubmit.TabIndex = 0;
btnSubmit.Text = "Submit";
Controls.Add(btnSubmit);
...
}
Focus
The focus is a visual aspect that indicates that a control is ready to receive input from the user. Various controls have different ways of expressing that they have received focus.
Button controls indicate that they have focus by drawing a dotted rectangle around their caption.
A text-based control indicates that it has focus by displaying a blinking
cursor. A list-based control indicates that it has focus when one of its items has a surrounding dotted rectangle.
To give focus to a control, the user can click it or press a key. To programmatically give focus to a control, call the
Focus() method. Its syntax is:
bool Focus();
|
http://functionx.com/vjsharp/lesson04.htm
|
CC-MAIN-2017-22
|
refinedweb
| 338
| 52.46
|
Abstract Class & Interface -- 807600, Jul 3, 2007 5:40 AM
Hi,
1. Re: Abstract Class & Interface -- 807600, Jul 3, 2007 5:44 AM (in response to 807600)
In this section we will redesign our OneRowNim game to fit within a hierarchy of classes of two-player games. There are many games that characteristically involve two players: checkers, chess, tic-tac-toe, guessing games, and so forth. However, there are also many games that involve just one player: blackjack, solitaire, and others. There are also games that involve two or more players, such as many card games. Thus, our redesign of OneRowNim as part of a two-player game hierarchy will not be our last effort to design a hierarchy of game-playing classes. We will certainly redesign things as we learn new Java language constructs and as we try to extend our game library to other kinds of games.
This case study will illustrate how we can apply inheritance and polymorphism, as well as other object-oriented design principles. The justification for revising OneRowNim at this point is to make it easier to design and develop other two-player games. As we have seen, one characteristic of class hierarchies is that more general attributes and methods are defined in top-level classes. As one proceeds down the hierarchy, the methods and attributes become more specialized. Creating a subclass is a matter of specializing a given class.
8.6.1. Design Goals
One of our design goals is to revise the OneRowNim game so that it fits into a hierarchy of two-player games. One way to do this is to generalize the OneRowNim game by creating a superclass that contains those attributes and methods that are common to all two-player games. The superclass will define the most general and generic elements of two-player games. All two-player games, including OneRowNim, will be defined as subclasses of this top-level superclass and will inherit and possibly override its public and protected variables and methods. Also, our top-level class will contain certain abstract methods, whose implementations will be given in OneRowNim and other subclasses.
Generic superclass
A second goal is to design a class hierarchy that makes it possible for computers to play the game, as well as human users. Thus, for a given two-player game, it should be possible for two humans to play each other, or for two computers to play each other, or for a human to play against a computer. This design goal will require that our design exhibit a certain amount of flexibility. As we shall see, this is a situation in which Java interfaces will come in handy.
Another important goal is to design a two-player game hierarchy that can easily be used with a variety of different user interfaces, including command-line interfaces and GUIs. To handle this feature, we will develop Java interfaces to serve as interfaces between our two-player games and various user interfaces.
8.6.2. Designing the TwoPlayerGame Class
To begin revising the design of the OneRowNim game, we first need to design a top-level class, which we will call the TwoPlayerGame class. What variables and methods belong in this class? One way to answer this question is to generalize our current version of OneRowNim by moving any variables and methods that apply to all two-player games up to the TwoPlayerGame class. All subclasses of TwoPlayerGame (which includes the OneRowNim class) would inherit these elements. Figure 8.18 shows the current design of OneRowNim.
Figure 8.18. The current OneRowNim class.
What variables and methods should we move up to the TwoPlayerGame class? Clearly, the class constants, PLAYER_ONE and PLAYER_TWO, apply to all two-player games. These should be moved up. On the other hand, the MAX_PICKUP and MAX_STICKS constants apply just to the OneRowNim game. They should remain in the OneRowNim class.
The nSticks instance variable is a variable that only applies to the OneRowNim game but not to other two-player games. It should stay in the OneRowNim class. On the other hand, the onePlaysNext variable applies to all two-player games, so we will move it up to the TwoPlayerGame class.
Because constructors are not inherited, all of the constructor methods will remain in the OneRowNim class. The instance methods, takeSticks() and getSticks(), are specific to OneRowNim, so they should remain there. However, the other methods, getPlayer(), gameOver(), getWinner(), and reportGameState(), are methods that would be useful to all two-player games. Therefore these methods should be moved up to the superclass. Of course, while these methods can be defined in the superclass, some of them can only be implemented in subclasses. For example, the reportGameState() method reports the current state of the game, so it has to be implemented in OneRowNim. Similarly, the getWinner() method defines how the winner of the game is determined, a definition that can only occur in the subclass. Every two-player game needs methods such as these. Therefore, we will define these methods as abstract methods in the superclass. The intention is that TwoPlayerGame subclasses will provide game-specific implementations for these methods.
Constructors are not inherited
Given these considerations, we come up with the design shown in Figure 8.19. The design shown in this figure is much more complex than the designs used in earlier chapters. However, the complexity comes from combining ideas already discussed in previous sections of this chapter, so don't be put off by it.
Figure 8.19. TwoPlayerGame is the superclass for OneRowNim and other two-player games.
To begin with, note that we have introduced two Java interfaces into our design in addition to the TwoPlayerGame superclass. As we will show, these interfaces lead to a more flexible design and one that can easily be extended to incorporate new two-player games. Let's take each element of this design separately.
8.6.3. The TwoPlayerGame Superclass
As we have stated, the purpose of the TwoPlayerGame class is to serve as the superclass for all two-player games. Therefore, it should define the variables and methods shared by two-player games.
The PLAYER_ONE, PLAYER_TWO, and onePlaysNext variables and the getPlayer(), setPlayer(), and changePlayer() methods have been moved up from the OneRowNim class. Clearly, these variables and methods apply to all two-player games. Note that we have also added three new variables, nComputers, computer1, computer2, and their corresponding methods, getNComputers() and addComputerPlayer(). We will use these elements to give our games the capability to be played by computer programs. Because we want all of our two-player games to have this capability, we define these variables and methods in the superclass rather than in OneRowNim and subclasses of TwoPlayerGame.
Note that the computer1 and computer2 variables are declared to be of type IPlayer. IPlayer is an interface containing a single method declaration, the makeAMove() method:
public interface IPlayer {
public String makeAMove(String prompt);
}
Why do we use an interface here rather than some type of game-playing object? This is a good design question. Using an interface here makes our design more flexible and extensible because it frees us from having to know the names of the classes that implement the makeAMove() method. The variables computer1 and computer2 will be assigned objects that implement IPlayer via the addComputerPlayer() method.
Game-dependent algorithms
The algorithms used in the various implementations of makeAMove() are game-dependent: they depend on the particular game being played. It would be impossible to define a game-playing object that would suffice for all two-player games. Instead, if we want an object that plays OneRowNim, we would define a OneRowNimPlayer and have it implement the IPlayer interface. Similarly, if we want an object that plays checkers, we would define a CheckersPlayer and have it implement the IPlayer interface. By using an interface here, our TwoPlayerGame hierarchy can deal with a wide range of differently named objects that play games, as long as they implement the IPlayer interface. Using the IPlayer interface adds flexibility to our game hierarchy and makes it easier to extend it to new, yet undefined, classes. We will discuss the details of how to design a game player in Section 8.6.7.
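To make the strategy discussion concrete, here is a minimal sketch of a Nim-playing IPlayer. The NimState holder class and the SimpleNimPlayer name are illustrative inventions for this sketch, not the textbook's NimPlayer from Section 8.6.7; the strategy shown is the classic one of leaving the opponent one stick more than a multiple of four.

```java
// Repeating the IPlayer interface so this sketch is self-contained.
interface IPlayer {
    String makeAMove(String prompt);
}

// Hypothetical minimal stand-in for the game state (just the stick count).
class NimState {
    private int nSticks;
    NimState(int sticks) { nSticks = sticks; }
    int getSticks() { return nSticks; }
}

// A game-specific player: its makeAMove() logic only makes sense for Nim.
class SimpleNimPlayer implements IPlayer {
    private final NimState game;
    SimpleNimPlayer(NimState game) { this.game = game; }

    public String makeAMove(String prompt) {
        // Try to leave the opponent 4k + 1 sticks; otherwise take 1.
        int best = (game.getSticks() - 1) % 4;
        return String.valueOf(best == 0 ? 1 : best);
    }
}
```

Because the game loop sees the player only through IPlayer, a checkers player with a completely different makeAMove() body could be dropped in without changing the loop.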
The IPlayer interface
Turning now to the methods defined in TwoPlayerGame, we have already seen implementations of getPlayer(), setPlayer(), and changePlayer() in the OneRowNim class. We will just move those implementations up to the superclass. The getNComputers() method is the accessor method for the nComputers variable, and its implementation is routine. The addComputerPlayer() method adds a computer player to the game. Its implementation is as follows:
public void addComputerPlayer(IPlayer player) {
if (nComputers == 0)
computer2 = player;
else if (nComputers == 1)
computer1 = player;
else
return; // No more than 2 players
++nComputers;
}
As we noted earlier, the classes that play the various TwoPlayerGames must implement the IPlayer interface. The parameter for this method is of type IPlayer. The algorithm we use checks the current value of nComputers. If it is 0, which means that this is the first IPlayer added to the game, the player is assigned to computer2. This allows the human user to be associated with PLAYER_ONE if this is a game between a computer and a human user.
If nComputers equals 1, which means that we are adding a second IPlayer to the game, we assign that player to computer1. In either of these cases, we increment nComputers. Note what happens if nComputers is neither 0 nor 1. In that case, we simply return without adding the IPlayer to the game and without incrementing nComputers. This, in effect, limits the number of IPlayers to two. (A more sophisticated design would throw an exception to report an error, but we will leave that for a subsequent chapter.)
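The two-player cap can be seen in isolation with a small sketch (PlayerSlots is an illustrative stand-in, not a class from the text) that mirrors the addComputerPlayer() logic above:

```java
// Mirrors the logic of addComputerPlayer(): the first player added
// becomes computer2, the second becomes computer1, and any further
// additions are silently ignored.
class PlayerSlots {
    int nComputers = 0;
    Object computer1, computer2;

    void addComputerPlayer(Object player) {
        if (nComputers == 0)
            computer2 = player;
        else if (nComputers == 1)
            computer1 = player;
        else
            return;            // No more than 2 players
        ++nComputers;
    }
}
```

Adding a third player leaves nComputers at 2 and both slots unchanged, which is exactly the silent-failure behaviour the text proposes replacing with an exception later.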
The addComputerPlayer() method is used to initialize a game after it is first created. If this method is not called, the default assumption is that nComputers equals zero and that computer1 and computer2 are both null. Here's an example of how it could be used:
OneRowNim nim = new OneRowNim(11);            // 11 sticks
nim.addComputerPlayer(new NimPlayer(nim));    // 2 computer players
nim.addComputerPlayer(new NimPlayerBad(nim));
Note that the NimPlayer() constructor takes a reference to the game as its argument. Clearly, our design should not assume that the names of the IPlayer objects would be known to the TwoPlayerGame superclass. This method allows the objects to be passed in at runtime. We will discuss the details of NimPlayerBad in Section 8.6.7.
The getRules() method is a new method whose purpose is to return a string that describes the rules of the particular game. This method is implemented in the TwoPlayerGame class with the intention that it will be overridden in the various subclasses. For example, its implementation in TwoPlayerGame is:
public String getRules() {
return "The rules of this game are: ";
}
Overriding a method
and its redefinition in OneRowNim is:
public String getRules() {
    return "\n*** The Rules of One-Row Nim ***\n" +
        "(1) A pile of sticks is placed on the table.\n" +
        "(2) On each turn a player picks up 1, 2, or 3 sticks.\n" +
        "(3) The player who picks up the last stick loses.\n";
} // getRules()
The idea is that each TwoPlayerGame subclass will take responsibility for specifying its own set of rules in a form that can be displayed to the user.
You might recognize that defining getRules() in the superclass and allowing it to be overridden in the subclasses is a form of polymorphism. It follows the design of the toString() method, which we discussed earlier. This design will allow us to use code that takes the following form:
TwoPlayerGame game = new OneRowNim();
System.out.println(game.getRules());
Polymorphism
In this example the call to getRules() is polymorphic. The dynamic-binding mechanism is used to invoke the getRules() method defined in the OneRowNim class.
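The dynamic binding described here can be demonstrated with a stripped-down sketch (Game and StubNim are illustrative names, not the textbook's classes):

```java
// The superclass supplies a default getRules(); the subclass overrides it.
abstract class Game {
    public String getRules() {
        return "The rules of this game are: ";
    }
}

class StubNim extends Game {
    @Override
    public String getRules() {
        return "Pick up between 1 and 3 sticks; do not take the last one.";
    }
}
```

Even though the variable's static type is Game, calling getRules() on a StubNim object dispatches at run time to the subclass version, not the default in Game.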
The remaining methods in TwoPlayerGame are defined abstractly. The gameOver() and getWinner() methods are both game-dependent methods. That is, the details of their implementations depend on the particular TwoPlayerGame subclass in which they are implemented.
This is a good example of how abstract methods should be used in designing a class hierarchy. We give abstract definitions in the superclass and leave the detailed implementations up to the individual subclasses. This allows the different subclasses to tailor the implementations to their particular needs, while allowing all subclasses to share a common signature for these tasks. This enables us to use polymorphism to create flexible, extensible class hierarchies.
Figure 8.20 shows the complete implementation of the abstract TwoPlayerGame class. We have already discussed the most important details of its implementation.
Figure 8.20. The TwoPlayerGame class
public abstract class TwoPlayerGame {
public static final int PLAYER_ONE = 1;
public static final int PLAYER_TWO = 2;
protected boolean onePlaysNext = true;
protected int nComputers = 0; // How many computers
// Computers are IPlayers
protected IPlayer computer1, computer2;
public void setPlayer(int starter) {
if (starter == PLAYER_TWO)
onePlaysNext = false;
else onePlaysNext = true;
} // setPlayer()
public int getPlayer() {
if (onePlaysNext)
return PLAYER_ONE;
else return PLAYER_TWO;
} // getPlayer()
public void changePlayer() {
onePlaysNext = !onePlaysNext;
} // changePlayer()
public int getNComputers() {
return nComputers;
} // getNComputers()
public String getRules() {
return "The rules of this game are: ";
} // getRules()
public void addComputerPlayer(IPlayer player) {
if (nComputers == 0)
computer2 = player;
else if (nComputers == 1)
computer1 = player;
else
return; // No more than 2 players
++nComputers;
} // addComputerPlayer()
public abstract boolean gameOver(); // Abstract Methods
public abstract String getWinner();
} // TwoPlayerGame class
Effective Design: Abstract Methods
Abstract methods allow you to give general definitions in the superclass and leave the implementation details to the different subclasses.
8.6.4. The CLUIPlayableGame Interface
We turn now to the two interfaces shown in Figure 8.19. Taken together, the purpose of these interfaces is to create a connection between any two-player game and a command-line user interface (CLUI). The interfaces provide method signatures for the methods that will implement the details of the interaction between a TwoPlayerGame and a UserInterface. Because the details of this interaction vary from game to game, it is best to leave the implementation of these methods to the games themselves.
Note that CLUIPlayableGame extends the IGame interface. The IGame interface contains two methods that are used to define a standard form of communication between the CLUI and the game. The getGamePrompt() method defines the prompt used to signal the user for a move of some kind; for example, "How many sticks do you take (1, 2, or 3)?" And the reportGameState() method defines how the game will report its current state; for example, "There are 11 sticks remaining." CLUIPlayableGame adds the play() method to these two methods. As we will see shortly, the play() method contains the code that will control the playing of the game.
Extending an interface
The source code for these interfaces is very simple:
public interface CLUIPlayableGame extends IGame {
public abstract void play(UserInterface ui);
}
public interface IGame {
public String getGamePrompt();
public String reportGameState();
} // IGame
Note that the CLUIPlayableGame interface extends the IGame interface. A CLUIPlayableGame is a game that can be played through a CLUI. The purpose of its play() method is to contain the game-dependent control loop that determines how the game is played via a user interface (UI). In pseudocode, a typical control loop for a game would look something like the following:
Initialize the game.
While the game is not over
Report the current state of the game via the UI.
Prompt the user (or the computer) to make a move via the UI.
Get the user's move via the UI.
Make the move.
Change to the other player.
The play loop sets up an interaction between the game and the UI. The UserInterface parameter allows the game to connect directly to a particular UI. To allow us to play our games through a variety of UIs, we define UserInterface as the following Java interface:
public interface UserInterface {
public String getUserInput();
public void report(String s);
public void prompt(String s);
}
Any object that implements these three methods can serve as a UI for one of our TwoPlayerGames. This is another example of the flexibility of using interfaces in object-oriented design.
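As a concrete illustration of that flexibility, here is a sketch of a non-keyboard implementation: a scripted UI that replays canned input and records all output, the kind of object you might use to test a game without a console. ScriptedUI is an invented name for this sketch, and the UserInterface interface is repeated so the sketch is self-contained.

```java
import java.util.ArrayDeque;
import java.util.Queue;

interface UserInterface {
    String getUserInput();
    void report(String s);
    void prompt(String s);
}

// A UserInterface that feeds pre-scripted answers and logs all output.
class ScriptedUI implements UserInterface {
    private final Queue<String> inputs = new ArrayDeque<String>();
    final StringBuilder transcript = new StringBuilder();

    ScriptedUI(String... canned) {
        for (String s : canned) inputs.add(s);
    }
    public String getUserInput() { return inputs.poll(); }
    public void report(String s) { transcript.append(s); }
    public void prompt(String s) { transcript.append(s); }
}
```

A play() method written against UserInterface runs unchanged whether it is handed a KeyboardReader or a ScriptedUI.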
To illustrate how we use UserInterface, let's attach it to our KeyboardReader class, thereby letting a KeyboardReader serve as a CLUI for TwoPlayerGames. We do this simply by implementing this interface in the KeyboardReader class, as follows:
public class KeyboardReader implements UserInterface
As it turns out, the three methods listed in UserInterface match three of the methods in the current version of KeyboardReader. This is no accident. The design of UserInterface was arrived at by identifying the minimal number of methods in KeyboardReader that were needed to interact with a TwoPlayerGame.
Effective Design: Flexibility of Java Interfaces
A Java interface provides a means of associating useful methods with a variety of different types of objects, leading to a more flexible object-oriented design.
The benefit of defining the parameter more generally as a UserInterface instead of as a KeyboardReader is that we will eventually want to allow our games to be played via other kinds of command-line interfaces. For example, we might later define an Internet-based CLUI that could be used to play OneRowNim among users on the Internet. This kind of extensibilitythe ability to create new kinds of UIs and use them with TwoPlayerGamesis another important design feature of Java interfaces.
Generality principle
Effective Design: Extensibility and Java Interfaces
Using interfaces to define useful method signatures increases the extensibility of a class hierarchy.
As Figure 8.19 shows, OneRowNim implements the CLUIPlayableGame interface, which means it must supply implementations of all three abstract methods: play(), getGamePrompt(), and reportGameState().
8.6.5. Object-Oriented Design: Interfaces or Abstract Classes?
Why are these methods defined in interfaces? Couldn't we just as easily define them in the TwoPlayerGame class and use inheritance to extend them to the various game subclasses? After all, isn't the net result the same, namely, that OneRowNim must implement all three methods?
These are very good design questions, exactly the kinds of questions one should ask when designing a class hierarchy of any sort. As we pointed out in the Animal example earlier in the chapter, you can get the same functionality from an abstract interface and an abstract superclass method. When should we put the abstract method in the superclass, and when does it belong in an interface? A very good discussion of these and related object-oriented design issues is available in Java Design, 2nd Edition, by Peter Coad and Mark Mayfield (Yourdon Press, 1999). Our discussion of these issues follows many of the guidelines suggested by Coad and Mayfield.
Interfaces vs. abstract methods
We have already seen that using Java interfaces increases the flexibility and extensibility of a design. Methods defined in an interface exist independently of a particular class hierarchy. By their very nature, interfaces can be attached to any class, and this makes them very flexible to use.
Flexibility of interfaces
Another useful guideline for answering this question is that the superclass should contain the basic common attributes and methods that define a certain type of object. It should not necessarily contain methods that define certain roles that the object plays. For example, the gameOver() and getWinner() methods are fundamental parts of the definition of a TwoPlayerGame. One cannot define a game without defining these methods. By contrast, methods such as play(), getGamePrompt(), and reportGameState() are important for playing the game but they do not contribute in the same way to the game's definition. Thus these methods are best put into an interface. Therefore, one important design guideline is:
Effective Design: Abstract Methods
Methods defined abstractly in a superclass should contribute in a fundamental way to the basic definition of that type of object, not merely to one of its roles or its functionality.
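The guideline can be sketched in miniature. In the fragment below (all names invented for illustration), gameOver() lives in the abstract superclass because no game is a game without it, while SelfDescribing is a role that could be attached to any class at all:

```java
// A role: any class, game or not, can choose to implement this.
interface SelfDescribing {
    String describe();
}

// Part of the definition of a game: every subclass must say when it ends.
abstract class BoardGame {
    abstract boolean gameOver();
}

class CoinFlipGame extends BoardGame implements SelfDescribing {
    private boolean done = false;
    boolean gameOver() { return done; }
    void finish() { done = true; }
    public String describe() { return "Flip a coin; heads wins."; }
}
```

The abstract method forces every BoardGame subclass to define its ending condition, while the interface leaves the describing role optional and attachable elsewhere.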
8.6.6. The Revised OneRowNim Class
Figure 8.21 provides a listing of the revised OneRowNim class, one that fits into the TwoPlayerGame class hierarchy. Our discussion in this section will focus on the features of the game that are new or revised.
Figure 8.21. The revised OneRowNim class, Part I.
public class OneRowNim extends TwoPlayerGame implements CLUIPlayableGame {
public static final int MAX_PICKUP = 3;
public static final int MAX_STICKS = 11;
private int nSticks = MAX_STICKS;
public OneRowNim() { } // Constructors
public OneRowNim(int sticks) {
nSticks = sticks;
} // OneRowNim()
public OneRowNim(int sticks, int starter) {
nSticks = sticks;
setPlayer(starter);
} // OneRowNim()
public boolean takeSticks(int num) {
if (num < 1 || num > MAX_PICKUP || num > nSticks)
return false; // Error
else // Valid move
{ nSticks = nSticks - num;
return true;
} // else
} // takeSticks()
public int getSticks() {
return nSticks;
} // getSticks()
public String getRules() {
    return "\n*** The Rules of One-Row Nim ***\n" +
        "(1) A pile of sticks is placed on the table.\n" +
        "(2) On each turn a player picks up 1, 2, or 3 sticks.\n" +
        "(3) The player who picks up the last stick loses.\n";
} // getRules()
public boolean gameOver() { /*** From TwoPlayerGame */
return (nSticks <= 0);
} // gameOver()
public String getWinner() { /*** From TwoPlayerGame */
if (gameOver())
return "" + getPlayer() + " Nice game.";
return "The game is not over yet."; // Game is not over
} // getWinner()
The gameOver() and getWinner() methods, which are now inherited from the TwoPlayerGame superclass, are virtually the same as in the previous version. One small change is that getWinner() now returns a String instead of an int. This makes the method more generally useful as a way of identifying the winner for all TwoPlayerGames.
Similarly, the getGamePrompt() and reportGameState() methods merely encapsulate functionality that was present in the earlier version of the game. In our earlier version the prompts to the user were generated directly by the main program. By encapsulating this information in an inherited method, we make it more generally useful to all TwoPlayerGames.
Inheritance and generality
The major change to OneRowNim comes in the play() method, which controls the playing of OneRowNim (Fig. 8.22). Because this version of the game incorporates computer players, the play loop is a bit more complex than in earlier versions of the game. The basic idea is still the same: The method loops until the game is over. On each iteration of the loop, one or the other of the two players, PLAYER_ONE or PLAYER_TWO, takes a turn making a movethat is, deciding how many sticks to pick up. If the move is a legal move, then it becomes the other player's turn.
Figure 8.22. The revised OneRowNim class, Part II.
/** From CLUIPlayableGame */
public String getGamePrompt() {
return "\nYou can pick up between 1 and " +
Math.min(MAX_PICKUP,nSticks) + " : ";
} // getGamePrompt()
public String reportGameState() {
if (!gameOver())
return ("\nSticks left: " + getSticks() +
" Who's turn: Player " + getPlayer());
else
return ("\nSticks left: " + getSticks() +
" Game over! Winner is Player " + getWinner() +"\n");
} // reportGameState()
public void play(UserInterface ui) { // From CLUIPlayableGame interface
int sticks = 0;
ui.report(getRules());
if (computer1 != null)
ui.report("\nPlayer 1 is a " + computer1.toString());
if (computer2 != null)
ui.report("\nPlayer 2 is a " + computer2.toString());
while(!gameOver()) {
IPlayer computer = null; // Assume no computers
ui.report(reportGameState());
switch(getPlayer()) {
case PLAYER_ONE: // Player 1's turn
computer = computer1;
break;
case PLAYER_TWO: // Player 2's turn
computer = computer2;
break;
} // cases
if (computer != null) { // If computer's turn
sticks = Integer.parseInt(computer.makeAMove(""));
ui.report(computer.toString() + " takes " + sticks + " sticks.\n");
} else { // otherwise, user's turn
ui.prompt(getGamePrompt());
sticks =
Integer.parseInt(ui.getUserInput()); // Get user's move
}
if (takeSticks(sticks)) // If a legal move
changePlayer();
} // while
ui.report(reportGameState()); // The game is now over
} // play()
} // OneRowNim class
Let's look now at how the code decides whether it is a computer's turn to move or a human player's turn. Note that at the beginning of the while loop, it sets the computer variable to null. It then assigns computer a value of either computer1 or computer2, depending on whose turn it is. But recall that one or both of these variables may be null, depending on how many computers are playing the game. If there are no computers playing the game, then both variables will be null. If only one computer is playing, then computer1 will be null. This is determined during initialization of the game, when the addComputerPlayer() method is called. (See above.)
In the code following the switch statement, if computer is not null, then we call computer.makeAMove(). As we know, the makeAMove() method is part of the IPlayer interface. The makeAMove() method takes a String parameter that is meant to serve as a prompt, and returns a String that is meant to represent the IPlayer's move:
public interface IPlayer {
public String makeAMove(String prompt);
}
In OneRowNim the "move" is an integer, representing the number of sticks the player picks. Therefore, in play() OneRowNim has to convert the String into an int, which represents the number of sticks the IPlayer picks up.
On the other hand, if computer is null, this means that it is a human user's turn to play. In this case, play() calls ui.getUserInput(), employing the user interface to input a value from the keyboard. The user's input must also be converted from String to int. Once the value of sticks is set, either from the user or from the IPlayer, the play() method calls takeSticks(). If the move is legal, then it changes whose turn it is, and the loop repeats.
There are a couple of important points about the design of the play() method. First, the play() method has to know what to do with the input it receives from the user or the IPlayer. This is game-dependent knowledge. The user is inputting the number of sticks to take in OneRowNim. For a tic-tac-toe game, the "move" might represent a square on the tic-tac-toe board. This suggests that play() is a method that should be implemented in OneRowNim, as it is here, because OneRowNim encapsulates the knowledge of how to play the One-Row Nim game.
Encapsulation of game-dependent knowledge
2. Re: Abstract Class & Interface 807600 Jul 3, 2007 5:47 AM (in response to 807600)
3. Re: Abstract Class & Interface 807600 Jul 3, 2007 5:52 AM (in response to 807600)
> I have a fundamental doubt regarding Abstract Class & Interface!!! What is their real benefit? Whether we implement an interface or extend an Abstract class, we have to write the code for the abstract method in the concrete class. Then where is the benefit?
You mean the only benefit is by writing less code? :D
Interfaces have the purpose of providing a contract for the developers without knowing the implementation details. An abstract class is similar except that it provides some default behaviour.
> And it is said that Abstract class provides default behaviour... what is the actual meaning of that?
An abstract class can have some concrete implementation in it. That is called the default behaviour because that's the behaviour when you don't override the implementation.
4. Re: Abstract Class & Interface 807600 Jul 3, 2007 5:53 AM (in response to 807600)
> i posted a part of a pdf file for understanding about abstract classes and interfaces.
You could have posted a complete link to this PDF.
5. Re: Abstract Class & Interface 807600 Jul 3, 2007 5:55 AM (in response to 807600)
> i posted a part of a pdf file for understanding about abstract classes and interfaces.
> You could have posted a complete link to this PDF.
i can't, copyrights issues.
6. Re: Abstract Class & Interface 807600 Jul 3, 2007 6:00 AM (in response to 807600)
> i posted a part of a pdf file for understanding about abstract classes and interfaces.
> You could have posted a complete link to this PDF.
> i can't, copyrights issues.
You're violating it already. ;)
7. Re: Abstract Class & Interface 807600 Jul 3, 2007 6:01 AM (in response to 807600)
> i posted a part of a pdf file for understanding about abstract classes and interfaces.
> You could have posted a complete link to this PDF.
> i can't, copyrights issues.
> You're violating it already. ;)
Darn, can someone tell me how to delete my msg :P
8. Re: Abstract Class & Interface 807600 Jul 3, 2007 7:23 AM (in response to 807600)
Click on the edit button and remove the copyrighted material before the cops knock on your door ;-)
9. Re: Abstract Class & Interface 807600 Jul 3, 2007 12:38 PM (in response to 807600)
Hai Ram! It is the batameeze leading the batameeze!
10. Re: Abstract Class & Interface 807600 Jul 3, 2007 12:56 PM (in response to 807600)
> Hai Ram! It is the batameeze leading the batameeze!
You mean budding budtameezee? ;)
11. Re: Abstract Class & Interface 807600 Jul 3, 2007 1:09 PM (in response to 807600)
> You mean budding budtameezee? ;)
Yes, budding. As in fungus that reproduce by budding. :-)
Source: https://community.oracle.com/message/5010053
A Hashtable is, in effect, an array of lists used to store information in a synchronized way. It stores information as key-value pairs within a table. To locate an element in the hash table, an object is first specified to serve as the key. The key is then hashed, which produces an index at which the value is stored within the table.
A Hashtable key must be an object that implements both the hashCode() and equals() methods. When a key is hashed it produces a hash code, which determines the index in the table at which the value is stored. The equals() method is used to compare two keys for equality. For example, String implements both hashCode() and equals().
HashTable Features
- Hash table access is synchronized: its methods can be called safely from multiple threads.
- Hash table does not allow duplicate keys; it associates exactly one value with each key (re-putting a key replaces the old value).
- It cannot store a null key or a null value.
- It should be used only with key objects that implement both the hashCode() and equals() methods consistently.
- The hashing function generates a hash code from the key, which determines the index in the table at which the value is stored.
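The last point can be made concrete with a one-line sketch of the index computation. Masking off the sign bit and taking the remainder by the table's capacity is how java.util.Hashtable has traditionally derived a bucket index, though the exact formula is an implementation detail:

```java
class HashIndex {
    // Map an arbitrary hash code to a bucket index in [0, capacity).
    // The mask clears the sign bit so negative hash codes cannot
    // produce a negative remainder.
    static int indexFor(Object key, int capacity) {
        return (key.hashCode() & 0x7FFFFFFF) % capacity;
    }
}
```

Two keys whose hash codes map to the same index land in the same bucket, which is why the table also needs equals() to tell them apart.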
HashTable Syntax
class Hashtable<K, V>
- Where K specifies the type of key element, and V specifies the type of value element.
- We can create HashTable as shown in the following way, where key type is string and value of key is integer.
Hashtable<String, Integer> ht = new Hashtable<String, Integer>();
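A short sketch of basic use (the player names are just sample data) shows the one-value-per-key rule in action: putting a value under an existing key replaces the old value rather than adding a duplicate entry.

```java
import java.util.Hashtable;

class ScoreTable {
    static Hashtable<String, Integer> build() {
        Hashtable<String, Integer> ht = new Hashtable<String, Integer>();
        ht.put("player 1", 100);
        ht.put("player 2", 75);
        ht.put("player 1", 50);   // same key: replaces 100, size stays 2
        return ht;
    }
}
```

Looking up a key that was never inserted returns null, which is why Hashtable forbids null values: a null result would otherwise be ambiguous.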
HashTable Constructors
- Hashtable(): creates an empty hash table with the default initial capacity and load factor.
- Hashtable(int initialCapacity): creates an empty hash table with the specified initial capacity.
- Hashtable(int initialCapacity, float loadFactor): also specifies the load factor at which the table is rehashed.
- Hashtable(Map<? extends K, ? extends V> t): creates a hash table containing the mappings of the given map.
HashTable Methods
- V put(K key, V value): maps the key to the value, returning the previous value for that key (or null).
- V get(Object key): returns the value mapped to the key, or null if the key is absent.
- V remove(Object key): removes the key and its value.
- boolean containsKey(Object key), boolean containsValue(Object value): membership tests for keys and values.
- Enumeration<K> keys(), Enumeration<V> elements(): enumerations of the keys and of the values.
- int size(), boolean isEmpty(), void clear(), Object clone(): number of entries, emptiness test, removal of all entries, and shallow copy.
Simple Example for HashTable
package hash_table;

import java.util.Enumeration;
import java.util.Hashtable;

public class Update_hash {
    public static void main(String[] args) {
        Hashtable<String, String> ht = new Hashtable<String, String>();
        ht.put("player 1", "sachin");
        ht.put("player 2", "sehwag");
        ht.put("player 3", "dhoni");
        Enumeration<String> values = ht.keys();
        while (values.hasMoreElements()) {
            String str = values.nextElement();
            System.out.println(str + ":" + ht.get(str));
        }
    }
}
- Hashtable<String, String> ht = new Hashtable<String, String>(); creates the instance ht of the hash table, which accepts key-value pairs of String type.
- ht.put("player 1", "sachin"); inserts a key-value pair into the table. Here key = "player 1" and value = "sachin". The other two key-value pairs are inserted in the same way.
- Enumeration<String> values = ht.keys(); obtains an Enumeration of the keys, which lets us access one key at a time.
- String str = values.nextElement(); retrieves the next key from the enumeration.
- System.out.println(str + ":" + ht.get(str)); prints each key and its value on the output screen.
When you run the above example, you would get the following output:
Example using HashTable Methods
The example below shows the use of a few Hashtable methods: size(), clone(), remove(), clear(), and isEmpty().
package hash_table;

import java.util.Hashtable;

public class hashtable_methods {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        Hashtable<String, String> ht = new Hashtable<String, String>();
        ht.put("player 1", "sachin");
        ht.put("player 2", "sehwag");
        ht.put("player 3", "dhoni");
        System.out.println("Size of the table: " + ht.size());
        Hashtable<String, String> htclone = new Hashtable<String, String>();
        htclone = (Hashtable<String, String>) ht.clone();
        System.out.println("Clone of the table: " + htclone);
        System.out.println("Before removing: " + ht);
        ht.remove("player 2");
        System.out.println("After removing: " + ht);
        ht.clear();
        System.out.println("Table elements after clear: " + ht);
        boolean data = ht.isEmpty();
        System.out.println("Is hash table empty: " + data);
    }
}
- ht.size(); returns the size of the table, where ht is the instance of the Hashtable.
- ht.clone(); creates a clone of the table.
- ht.remove("player 2"); removes the entry with the specified key from the Hashtable.
- ht.clear(); clears the contents of the Hashtable.
- boolean data = ht.isEmpty(); reports whether the Hashtable is empty. Here it returns true because the table has just been cleared.

When you run this example, the size, the clone, and the table contents before and after each operation are printed.
I need your help. I am working on Groovy in SoapUI:

HashTable ht = new HashTable()
ht.put = ("Name", "SoapUI")
ht.put = ("Name", "It")

An exception is thrown:

"groovy.syntax.SyntaxException: expecting ')', found ',' at line 68"
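The error comes from the assignment-style syntax: ht.put = (...) is parsed as a property assignment, not a method call, and note the class is named Hashtable (lowercase 't'), not HashTable. A corrected sketch, valid in both Java and Groovy (the class name here is illustrative):

```java
import java.util.Hashtable;

class PutSyntax {
    static String demo() {
        Hashtable<String, String> ht = new Hashtable<String, String>();
        ht.put("Name", "SoapUI"); // method call, not 'ht.put = (...)'
        ht.put("Name", "It");     // replaces the previous value for "Name"
        return ht.get("Name");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // It
    }
}
```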
So why another SerialPort when we already have two perfectly functional, battle-tested implementations?
This project started off as a proof of concept that spiraled out of hand. There was some discussion on the rxtx@qbang.org mailing list about rewriting the implementation, and while I disagreed with the idea of rewriting, I suggested that if such a rewrite were to take place it should aim to implement as much of the code as possible in Java, not C. And since JNA allows you to call almost any shared/dynamic C library from Java without writing a single line of C code, I suggested that it [a rewrite of SerialPort] could all be done in Java with no C in sight.
This was met with the usual skepticism, so to prove my point and to research the issue (it has some interesting technical challenges) I sketched a non-functional prototype for SerialPort. I'm skeptical of rewrites because they throw away years of experience and debugging, so originally my plan was to leave it at that [a non-functional prototype], but the idea kept haunting me and I finally put everything else aside for a few days and made a working prototype.
My reasoning was that giving the design and implementation challenges priority is putting yourself first instead of your users and customers.
If we take the user's point of view, Sun's JavaComm is abandoned and not supported on many platforms, and RXTX has its own issues (someone described them as paper cuts) which have not been addressed for years. So users are left with the choice of fixing any issues themselves. And therein lies the problem with C, which also bogs down RXTX development.
The issue is C and the C tool chain.
Most people wanting to use a JavaComm SerialPort have a strong Java background and have the Java tools and skills at hand. They can design, code, debug and test Java code.
But C is a different beast.
It is a nice language, don't get me wrong, but it leaves a maddening number of things 'implementation defined', which makes it hard to write portable code. Did you know that the type 'char' in C is not a byte by definition? Did you know that the sizes of types are defined in terms of the non-byte char? Did you know that the standard only guarantees that sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)? And so on and so on.
But that is not the worst part. There are heaps of people with the C skill set who know those things by heart (I'm one of them), so that is not the big issue. By far the worst part of C is the tool chain, i.e. the compiler, linker and system headers.
If you are developing on Mac OS X and want to do Windows stuff, you need a computer for it; you are in for a day or two installing Windows, and you can spend a few more days installing the tool chain, be it Visual Studio, MinGW or Cygwin, plus all the necessary SDKs and what not.
Multiply that by Linux.
Building binary distributions of any cross platform C-based library is a nightmare.
And you want to do that because you want to deliver your library 'batteries included' so that it will work out of the box, instead of asking your users to install this and compile that.
Ok, so you are a developer, you are supposed to know this stuff and take the punishment.
Fair enough, but consider the average JavaComm user (putting the customer first!) who needs to, and is willing to, fix the paper cuts. How can you expect him or her to set up the whole tool chain for one platform, let alone all the platforms (and RXTX supports an amazing number of platforms, hats off!).
With everything in Java, maintenance and debugging are so much easier: you just step into the code with your debugger, see the problem, fix it on your platform, and can be reasonably confident that you can, if you want to, share the fix and get it incorporated into the code base. Something that has not happened for years in RXTX.
Not to mention deployment issues.
Imagine a world with no DLLs or shared libraries, with no architecture issues, with no dependencies. That is what JNA is all about.
Just a single 'jna.jar' to rule them all.
And this is exactly what PureJavaComm aspires someday to be: just include purejavacomm.jar in your Java class path and you are done.
Like Timothy Wall, the father and creator of JNA, wrote: "those of you who've never built multi-lingual projects involving C don't know what you're missing".
The project is a work in progress and some of the interfaces 'below' JavaComm level are subject to change without notice.
At this point in time this is probably not for the casual user but for enterprising coders who are not afraid to get their hands dirty.
The code is functionally close to complete but has seen very limited testing.
The code passes a simple/naive loopback test, purejavacomm.testsuite.TestSuite, that uses an event listener on Linux, Windows and Mac OS X.
This project will never, realistically thinking, support such a variety of platforms as RXTX, which seems to support Mac OS X, Linux, Windows, Solaris and then some, if I'm not mistaken.
Like I mentioned before, I have no illusions about rewrites; it will be years, if ever, before this project reaches the quality and breadth of something like RXTX.
On the other hand, it can be useful already.
So if the operating system allows, you can open two SerialPorts to the same actual serial port.
What happens if you do is your headache.
The original project 'push' is also here as a zip file, purejavacomm.zip, for reference only; for the latest and greatest go to GitHub.
There is now also a maven repository at:
To use the Maven repository just add this to your 'pom.xml' file:
<dependency>
<groupId>com.sparetimelabs</groupId>
<artifactId>purejavacomm</artifactId>
<version>xxx</version>
</dependency>
where 'xxx' is the version number; currently (23.1.2015) it is 0.0.0.22, but it will change at some point, so you need to look it up on GitHub.
The things that do not translate directly from C headers to Java are '#define' constants, macros and global variables, and to a lesser extent the varying native structures and type sizes.
Constants are not a big issue. You just have to look up the value of the constant and declare it in Java. For example:

This C define:
#define O_NOCTTY 0x00020000
becomes in Java:
final static int O_NOCTTY = 0x00020000;
The downside is that it is a bit fragile in that if the constant value changes (what a concept!) the Java code will not know about this whereas the C code will get the new value when it gets recompiled. If it gets recompiled. But for the kind of well established constants we are talking about here, this is a non-issue for me.
The other thing is that the values may be architecture and/or platform dependent. The C code again gets the correct values automatically, assuming you have your tool chain properly configured! But the Java code needs to resort to simulating the constants with 'static ints' without 'final', and they need to be set at runtime based on the platform and architecture. Not a big deal, though it means you cannot use them as case labels in switch statements.
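The runtime-initialized pattern just described could be sketched like this (the Linux and Mac OS X values shown are the common ones from the respective fcntl.h headers, and are only illustrative):

```java
class PlatformConstants {
    // Not 'final': the value differs per platform, so it is set at
    // class-load time instead of compile time. This also means it
    // cannot be used as a case label in a switch statement.
    public static int O_NOCTTY;

    static {
        String os = System.getProperty("os.name").toLowerCase();
        if (os.contains("mac")) {
            O_NOCTTY = 0x00020000; // Mac OS X <fcntl.h>
        } else {
            O_NOCTTY = 0x00000100; // Linux <fcntl.h> (octal 0400)
        }
    }

    public static void main(String[] args) {
        System.out.printf("O_NOCTTY = 0x%08x%n", O_NOCTTY);
    }
}
```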
Conceptually a bigger hurdle are macros, especially the FD_SET family of macros.
The worst of those is the FD_SET macro itself, which actually masquerades as a 'type'.
Neither POSIX nor any other standard defines what the structure actually is, but looking at the headers it is clear that in most (in every?) case it is just an integer array, in other words a chunk of memory.
As this needs to be platform and architecture dependent, with possible endian issues, and we might as well be prepared for any implementation, the actual allocation of the FD_SET and the associated SET/CLR operations are abstracted away in the interface.
In practice this means that all those #defines and macros that C programmers get from the headers, Java programmers get from the static class members and methods of the jtermios.JTermios class. See below, Using the JTermios Library.
For global variables I see no solution, but there is only one crucial one that we need here: 'errno'. I've circumvented that problem by using the 'perror()' function to output the error code to the console. At least you can now see it, even if you cannot access it from Java.
There are two obvious routes. One is to take SerialPort as the main interface and then create an implementation for each platform. Obviously this has the disadvantage that there is potentially a lot of code and functionality duplication. The other route, which I've taken, is to implement SerialPort in terms of some idealized serial port API.
It makes sense to model this idealized serial port API on some existing operating system interface and write a compatibility layer for the others. As Windows is the odd man out and all the Unixes are more or less the same, it is natural to model the idealized serial port API along the POSIX standard and write an 'impedance matcher' for Windows.
So, without further ado, jtermios.JTermios is that idealized interface.
Static methods and fields in that class serve as the POSIX serial API functions and defined constants. This makes it possible to use them with a Java static import, like this:
import static jtermios.JTermios.*;
after which their usage is about as close to C usage as it can be, leveraging existing C knowledge, examples and documentation.
In that same package (jtermios) there are four more idealized classes that
serve the same purpose as their POSIX namesakes:
- Termios
- TimeVal
- FDSet
- Pollfd
JTermios delegates the actual implementation of the termios functionality to platform- and architecture-specific JTermiosImpl classes.
When the JTermios class loads, it instantiates one of those implementing classes, which you can find in the jtermios.macosx, jtermios.windows and jtermios.linux packages.
Each of the JTermiosImpl classes needs to implement the jtermios.JTermios.JTermiosInterface.
So a pretty architecture picture would look something like this: PureJavaSerialPort on top of JTermios, which delegates to the platform-specific JTermiosImpl, which in turn calls the native OS API through JNA.
This could be the end of the story, here you have a well known (look-a-like) cross platform serial port API, admittedly missing a few features, like port enumeration, but still.
Mission accomplished.
Well, not quite. PureJavaComm wants to build on the existing skills and knowledge of its users, and POSIX-style termios I/O may not be the most familiar API for Java programmers. But the JavaComm SerialPort is a Java-programmer-friendly API, and that is what purejavacomm.PureJavaSerialPort delivers.
To use it you just need to import definitions from the purejavacomm namespace, like this:
import purejavacomm.*;
And use it like Sun's javax.comm API. In fact, there is no javadoc for PureJavaComm, so you need to refer to the Sun JavaComm API documentation!
Of course you need to include the jna.jar and purejavacomm.jar jars in your classpath.
If you want, you can also use the jtermios.JTermios library to access the serial port very much in the same way you can from C.
Just do a static import for the class JTermios and you get access to many of the functions and constants that you normally get from "fcntl.h", "termios.h" and related C headers.
Here is an appetizer:
import jtermios.*;
import static jtermios.JTermios.*;
public class JTermiosDemo {
public static void main(String[] args) {
String port = "/dev/tty.usbserial-FTOXM3NX";
int fd = open(port, O_RDWR | O_NOCTTY | O_NONBLOCK);
if (fd == -1)
fail("Could not open " + port);
fcntl(fd, F_SETFL, 0);
Termios opts = new Termios();
tcgetattr(fd, opts);
opts.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG);
opts.c_cflag |= (CLOCAL | CREAD);
opts.c_cflag &= ~PARENB;
opts.c_cflag |= CSTOPB;
opts.c_cflag &= ~CSIZE;
opts.c_cflag |= CS8;
opts.c_oflag &= ~OPOST;
opts.c_iflag &= ~INPCK;
opts.c_iflag &= ~(IXON | IXOFF | IXANY);
opts.c_cc[VMIN] = 0;
opts.c_cc[VTIME] = 10;
cfsetispeed(opts, B9600);
cfsetospeed(opts, B9600);
tcsetattr(fd, TCSANOW, opts);
tcflush(fd, TCIOFLUSH);
byte[] tx = "Not so very long text string".getBytes();
byte[] rx = new byte[tx.length];
int l = tx.length;
int n = write(fd, tx, l);
if (n < 0) {
System.out.println("write() failed ");
System.exit(0);
}
System.out.println("Transmitted '" + new String(tx) + "' len=" + n);
FDSet rdset = newFDSet();
FD_ZERO(rdset);
FD_SET(fd, rdset);
TimeVal tout = new TimeVal();
tout.tv_sec = 10;
byte buffer[] = new byte[1024];
while (l > 0) {
int s = select(fd + 1, rdset, null, null, tout);
if (s < 0) {
System.out.println("select() failed ");
System.exit(0);
}
int m = read(fd, buffer, l);
if (m < 0) {
System.out.println("read() failed ");
System.exit(0);
}
System.arraycopy(buffer, 0, rx, rx.length - l, m);
l -= m;
}
System.out.println("Received '" + new String(rx) + "'");
int ec = close(fd);
}
}
If you are familiar with termios, you can see that you have to look carefully to distinguish this Java code from C!
For Windows there is also the jtermios.windows.WinAPI class, which you can use directly, or use as example code for how to access those Windows API functions from Java; such code is hard to find, especially concerning asynchronous or overlapped I/O.
import com.sun.jna.Memory;
import static jtermios.windows.WinAPI.*;
import jtermios.windows.WinAPI.*;
public class TestSuite {
public static void main(String[] args) {
String COM = "COM5:";
HANDLE hComm = CreateFileA(COM, GENERIC_READ | GENERIC_WRITE, 0, null, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, null);
check(SetupComm(hComm, 2048, 2048), "SetupComm ");
DCB dcb = new DCB();
dcb.DCBlength = dcb.size();
dcb.BaudRate = CBR_1200;
dcb.ByteSize = 8;
dcb.fFlags = 0;
dcb.Parity = NOPARITY;
dcb.XonChar = 0x11;
dcb.StopBits = ONESTOPBIT;
dcb.XoffChar = 0x13;
check(SetCommState(hComm, dcb), "SetCommState ");
COMMTIMEOUTS touts = new COMMTIMEOUTS();
check(SetCommTimeouts(hComm, touts), "SetCommTimeouts ");
check(!INVALID_HANDLE_VALUE.equals(hComm), "CreateFile " + COM);
String send = "Hello World";
int tlen = send.getBytes().length;
int[] txn = { 0 };
Memory txm = new Memory(tlen + 1);
txm.clear();
txm.write(0, send.getBytes(), 0, tlen);
int[] rxn = { 0 };
Memory rxm = new Memory(tlen);
OVERLAPPED osReader = new OVERLAPPED();
osReader.writeField("hEvent", CreateEventA(null, true, false, null));
check(osReader.hEvent != null, "CreateEvent/osReader");
OVERLAPPED osWriter = new OVERLAPPED();
osWriter.writeField("hEvent", CreateEventA(null, true, false, null));
check(osWriter.hEvent != null, "CreateEvent/osWriter");
check(ResetEvent(osWriter.hEvent), "ResetEvent/osWriter.hEvent");
boolean write = WriteFile(hComm, txm, tlen, txn, osWriter);
if (!write) {
check(GetLastError() == ERROR_IO_PENDING, "WriteFile");
System.out.println("Write pending");
}
while (!write) {
System.out.println("WaitForSingleObject/write");
int dwRes = WaitForSingleObject(osWriter.hEvent, 1000);
switch (dwRes) {
case WAIT_OBJECT_0:
if (!GetOverlappedResult(hComm, osWriter, txn, true))
check(GetLastError() == ERROR_IO_INCOMPLETE, "GetOverlappedResult/osWriter");
else
write = true;
break;
case WAIT_TIMEOUT:
System.out.println("write TIMEOT");
break;
default:
check(false, "WaitForSingleObject/write");
break;
}
}
System.out.println("Transmit: '" + txm.getString(0) + "' , len=" + txn[0]);
check(ResetEvent(osReader.hEvent), "ResetEvent/osReader.hEvent ");
boolean read = ReadFile(hComm, rxm, tlen, rxn, osReader);
if (!read) {
check(GetLastError() == ERROR_IO_PENDING, "ReadFile");
System.out.println("Read pending");
}
while (!read) {
System.out.println("WaitForSingleObject/read");
check(ResetEvent(osReader.hEvent), "ResetEvent/osReader.hEvent");
int dwRes = WaitForSingleObject(osReader.hEvent, 1000);
switch (dwRes) {
case WAIT_OBJECT_0:
if (!GetOverlappedResult(hComm, osReader, rxn, false))
check(GetLastError() == ERROR_IO_INCOMPLETE, "GetOverlappedResult/osReader");
else
read = true;
break;
case WAIT_TIMEOUT:
System.out.println("WAIT_TIMEOUT");
break;
default:
check(false, "WaitForSingleObject/osReader.hEvent");
break;
}
}
System.out.println("Received: '" + rxm.getString(0) + "' , len=" + rxn[0]);
check(CloseHandle(osWriter.hEvent), "CloseHandle/osWriter.hEvent");
check(CloseHandle(osReader.hEvent), "CloseHandle/osReader.hEvent");
check(CloseHandle(hComm), "CloseHandle/hComm");
}
private static void check(boolean ok, String what) {
if (!ok) {
System.err.println(what + " failed, error " + GetLastError());
System.exit(0);
}
}
}
You can turn logging on with a call like:

jtermios.JTermios.JTermiosLogging.setLogLevel(1);
At level 0 (almost) nothing is logged. At level 4 you'll see all the calls to Windows functions
and their parameters.
The logging uses a cute little idiom in the code:

log = log && log(3, "lazily evaluated printf-like logging text %s\n", "here");

What the above does is evaluate the printf-like logging text only if logging is turned on. It uses a global variable log, short-circuit evaluation using '&&', and a static function log() that takes the log level as a parameter and a variable number of printf-like arguments.
Now how cute is that! I know, not everyone likes it, but I think it is easily memorable, sort of mnemonic; it does not clutter the source code much, and at level 0 it has very little effect on performance.
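As a self-contained sketch, the idiom boils down to a boolean switch, a level threshold, and a varargs log() that always returns true (the names here mirror the description above, not the exact PureJavaComm internals):

```java
class LazyLog {
    // Global switch: when false, '&&' short-circuits and the argument
    // expressions of log(...) are never evaluated at all.
    static boolean log = true;
    static int logLevel = 1;

    // Always returns true so it can sit on the right side of 'log && ...'.
    static boolean log(int level, String format, Object... args) {
        if (level <= logLevel)
            System.out.printf(format, args);
        return true;
    }

    public static void main(String[] args) {
        log = log && log(1, "lazily evaluated logging text %s%n", "here");
        log = log && log(4, "hidden: level 4 is above the threshold%n");
        log = false;
        // With log == false this call is never made, so even expensive
        // argument expressions cost nothing.
        log = log && log(1, "never printed%n");
    }
}
```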
To implement more platforms, say for example FreeBSD, do the following.
Implement a new class jtermios.freebsd.JTermiosImpl that implements the jtermios.JTermios.JTermiosInterface. It is probably easiest to copy/paste the code from the jtermios.linux.JTermiosImpl class and take it from there.
Your new class will be pretty much just a very thin wrapper around the JNA calls to the FreeBSD API.
In the constructor of that class, initialize the correct values for all the static constants in jtermios.JTermios, such as O_RDWR, O_NONBLOCK, etc.
You need to look up those values from the C include headers for the correct architecture and your platform. This may be a bit tedious and error prone, so you may want to utilize the "c-linux.c" program in the "c" directory, which prints out most of the values when compiled for the correct architecture with something like:

gcc -arch i386 c-linux.c && ./a.out
Note that you can find out the architecture of an executable with:
file a.out
To implement the FDSet family of functions for your jtermios.freebsd.JTermiosImpl you need to dig deeper into the header files, look at how they are implemented in C, and come up with something compatible in Java, taking your cue from the jtermios.linux.JTermiosImpl class.
Lastly you need to add instantiation code to the static initializer block in jtermios.JTermios, look for
static { // INSTANTIATION
if (Platform.isMac()) {
m_Termios = new jtermios.macosx.JTermiosImpl();
} else if (Platform.isWindows()) {
and add your instantiation code there. Note that you may have to implement and instantiate code depending not only on the platform (Java system property "os.name") but also on the architecture (Java system property "os.arch").
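A sketch of such a dispatch on both properties (the FreeBSD package name is the hypothetical one from this section):

```java
class PlatformDispatch {
    // Chooses the implementation class name from the OS name and the CPU
    // architecture, mirroring the static instantiation block shown above.
    static String selectImpl() {
        String os = System.getProperty("os.name").toLowerCase();
        String arch = System.getProperty("os.arch").toLowerCase();
        if (os.contains("mac"))
            return "jtermios.macosx.JTermiosImpl";
        if (os.contains("windows"))
            return "jtermios.windows.JTermiosImpl";
        if (os.contains("freebsd"))
            return "jtermios.freebsd.JTermiosImpl"; // hypothetical port
        return "jtermios.linux.JTermiosImpl (" + arch + ")";
    }

    public static void main(String[] args) {
        System.out.println("Would instantiate: " + selectImpl());
    }
}
```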
The code is copyrighted by me and is licensed under the Simplified BSD License.
I spent a fair amount of time thinking about the license, and in the end I chose the BSD license as it hopefully creates the least amount of trouble for the users. Of course it can generate issues down the line, with forks and contributions that want to add their own licenses.
But there it is, the cat is out of the bag, and I have little control over what happens next.
I have two concerns.
I would hope that the project will not be immediately forked but that contributions, if any, would be concentrated on this project.
If forking or modifications happen, they need to be clearly identifiable as forks or modifications.
I'm not especially looking for contributions, and it may be that this project will never amount to much more than what is available today; like I said, this is a project that spiralled out of hand and at the moment I just want to get it off my chest.
If you insist on contributing, please note that I may insist on getting the copyright of the contributions transferred to me, to keep future licensing options open.
I can be contacted at feedback2(@)sparetimelabs.com
with best regards,
Kustaa "Kusti" Nyholm
Detection of Face using OpenCV
Import the OpenCV module:
import cv2
A Haar Cascade is basically a classifier used to detect, in a source image, the object for which it has been trained. The Haar Cascade is trained by superimposing the positive image over a set of negative images. The training is generally done on a server and in various stages. Better results are obtained by using high-quality images and increasing the number of stages for which the classifier is trained. You can download this file from here
# Load the cascade
face_cascade = cv2.CascadeClassifier('aman.xml')
To read the input image:
# Read the input image
img = cv2.imread('aman.jpg')
To convert the image into grayscale:
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
To detect faces:
# Detect faces
faces = face_cascade.detectMultiScale(gray, 1.1, 4)
To draw the rectangle around faces:
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (50, 50, 4), 10)
To display the output:
# Display the output
cv2.imshow('img', img)
cv2.waitKey()
Version: 0.1.4
Author: Baruch Sterin
lektor-npm-support makes it easy to use Parcel, webpack, browserify, or any other tool to build assets for Lektor projects.
To enable the plugin, run this command while inside your Lektor project directory:
lektor plugins add lektor-npm-support
Create a parcel/ folder, and inside that folder create the following files:
configs/npm-support.ini
This file instructs the plugin how to generate the assets:
[parcel] npm = yarn watch_script = watch build_script = build
- [parcel] is the name of the folder where the Parcel project is located.
- npm is the package manager command used to build the project. This example will use Yarn.
- watch_script is the npm script used in lektor server -f npm.
- build_script is the npm script used in lektor build -f npm.
This plugin supports more than one such entry.
parcel/package.json
This is a standard package.json file. It should contain two entries in the scripts section. The build script is used during lektor build -f npm, and the watch script is used during lektor server -f npm.
{
  "name": "my-parcel-project",
  "version": "1.0.0",
  "scripts": {
    "watch": "NODE_ENV=development parcel --out-dir=../assets/static/gen --out-file=main.js --public-url=./assets/ main.js",
    "build": "NODE_ENV=production parcel build --out-dir=../assets/static/gen --out-file=main.js --public-url=./assets/ main.js"
  },
  "private": true
}
Now we can use yarn add to add Parcel, Babel and Sass:

$ cd </path/to/your/lektor/project>/parcel
$ yarn add parcel-bundler babel-preset-env node-sass
parcel/.babelrc
Next up is a simple Babel config file, using the recommended env preset.
{ "presets": ["env"] }
parcel/main.scss
A simple SCSS file.
body { border: 10px solid red; }
parcel/main.js
A simple Javascript file that imports the SCSS file so that Parcel will know to include it as well.
import './main.scss';
Now you're ready to go. When you run lektor server nothing will happen; instead you need to run it as lektor server -f npm, which will enable the Parcel build. Parcel will automatically build your files into assets/static/gen, and this is where Lektor will then pick up the files. This is done so that you can ship the generated assets to others who might not have these tools, which simplifies using a Lektor website that uses this plugin.

To manually trigger a build that also invokes Parcel, you can use lektor build -f npm.
Now you need to include the files in your template. This will do it:
<link rel="stylesheet" href="{{ '/static/gen/main.css'|asseturl }}">
<script type="text/javascript" src="{{ '/static/gen/main.js'|asseturl }}"></script>
The examples folder of this repository contains working projects.
This plugin is based on the official lektor-webpack-support Lektor plugin by Armin Ronacher.
PowerShell Saturday: The Iron Scripter
Summary: Microsoft Scripting Guy, Ed Wilson, shares information about the Windows PowerShell Saturday #002 Iron Scripter Event.
Microsoft Scripting Guy, Ed Wilson, is here. Today, I will share the Iron Scripter event and winning script from Windows PowerShell Saturday #002 in Charlotte, NC. Jim Christopher, Windows PowerShell MVP and leader of the Charlotte PowerShell Users Group, provided the scenario and source files. Jonathon Tyler won the event, and we will hear his thoughts later in this post. This is timely because Jim will host another Iron Scripter event at Windows PowerShell Saturday #003 in Atlanta (Alpharetta, Georgia) this Saturday, October 27, 2012. Tickets are still available—but going fast. The trophy for the first Iron Scripter is shown here (and, yes, it is as hefty as it looks. Solid steel—it checks in at nearly 25 pounds).
The scenario, according to Jim:
Correct data field formatting in XML files
Your script must process a collection of XML files. Each file contains information about a single user. The phone field has been manually entered and is not always in the correct format. Your script must validate the format of the phone field in each file, fix misformatted phone fields, and update the invalid XML files on disk.
The required format of the phone field is:
(###) ###-####
The XML files contain phone fields in a wide variety of formats. Some contain parentheses, hyphens, spaces, periods, etc., and others do not.
In all cases, the digits of the phone number are in the correct order.
Example of an invalid phone field:
<user>
<id>User98</id>
<name>Dacia Charbonneau</name>
<phone>2389101449</phone>
</user>
Example of a corrected phone field:
<user>
<id>User98</id>
<name>Dacia Charbonneau</name>
<phone>(238) 910-1449</phone>
</user>
The sample data provided to the users during the event can be found at the Script Gallery.
Judges are instructed to rate your scripts on the following criteria:
1) Functionality: Does your script solve the problem?
2) Usability: Does your script apply to new instances of the problem?
3) Readability: Is your script easily understandable?
Judges ratings are final. No complaining allowed.
For posterity (and comparison), my 2-minute solution (typed at the console, not in a script file) is below:
dir *.xml | foreach {
    $x = [xml](get-content $_);
    if( $x.user.phone -notmatch '\(\d{3}\) \d{3}-\d{4}' )
    {
        $p = $x.user.phone -replace '\D+','';
        $p = $p -replace '(\d{3})(\d{3})(\d{4})','($1) $2-$3';
        $x.user.phone = $p
        $x.save( $_.fullname );
    }
}
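For comparison, here are the same two regex steps ported to Java (a port written for this comparison, not part of the original post):

```java
class PhoneFormat {
    // Mirrors the console solution above: leave already-valid fields
    // alone; otherwise strip non-digits and reformat the ten digits.
    static String fix(String phone) {
        if (phone.matches("\\(\\d{3}\\) \\d{3}-\\d{4}"))
            return phone;
        String digits = phone.replaceAll("\\D+", "");
        return digits.replaceAll("(\\d{3})(\\d{3})(\\d{4})", "($1) $2-$3");
    }

    public static void main(String[] args) {
        System.out.println(fix("2389101449"));     // (238) 910-1449
        System.out.println(fix("238.910.1449"));   // (238) 910-1449
        System.out.println(fix("(238) 910-1449")); // already valid, unchanged
    }
}
```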
In the photo shown here, Jonathan Tyler receives the Iron Scripter trophy from the Scripting Wife at the packed house awards ceremony at the end of Windows PowerShell Saturday in Charlotte, North Carolina. The competition was hard fought and a fraction of a point separated Jonathan from multiple Scripting Games winner Glen Sizemore.
The solution as supplied by Jonathan
Originally, I had not planned to go to the Windows PowerShell Saturday event, although I really wanted to go. My wife was scheduled to be out of town, and I don’t think the Microsoft Campus would be a good place for three small children while trying to learn some new techniques—much less at a scripting competition. However, as time progressed, schedules changed, and I was able to attend.
When I saw the presentation schedule, I saw the Iron Scripter! competition and thought it might be fun. The more I thought about it, the more I realized I wanted to try it. I had competed in the Official Scripting Games two years ago and had fun and learned a lot. (I do plan on entering the 2013 Games, by the way.)
When Jim Christopher (@beefarino) presented the scenario, I was a little relieved. I had recently done some work with Windows PowerShell and XML, so it was somewhat fresh in my mind. I have even written a couple of blog posts about XML with SharePoint by using PowerShell. The scenario was to read in a series of XML files, validate the phone number field for a specific format, and then write the files back with the corrected format. This had to be completed within one hour.
Unfortunately, when I saw that, my mind went completely blank on how to work with XML—we’ll call it stage fright. To begin, I opened the Windows PowerShell ISE and loaded one of the sample data files. I began to play with the XML file in the console to see that the properties lined up from the elements in the XML file. When starting a new script, I normally will run through a series of piece-meal steps in the console window to make sure I am thinking correctly as I am writing snippets into the code editing window. It took a couple of minutes to figure out a direction, and the coding began.
I began scripting furiously and quickly developed a solid base from which to work. I felt pretty pleased with my initial work, and my script worked. However, it was not exactly what the scenario required. I had run through several tests, restored the test files from the zip file, and run the script again to make sure I was getting valid results. I was. I took some time to accept pipeline input as well as a string filename input. I had even taken some time to write in comment-based help for some extra brownie points. Then it happened. I looked up and re-read the requirements. “Validate” and “correct” were the keywords (they were in bold type) that jumped off the screen. My initial solution simply forced all of the phone number fields to the correct format, even if they were already in the correct format. It worked, but the requirements were not yet met.
I looked at my watch and saw that I had about 20 minutes left. I opened up my Regular Expression editor and began plugging in a quick format for the phone number. Once I got the format test ready, I modified my code to check the Phone field for the proper format. If it failed, I sent the Phone string up to a secondary function that did the conversion. It stripped the Phone string of any non-numeric characters, converted that resulting numeric string to Int64, and then formatted it by using the ToString() method with the required format. That newly formatted string was assigned back to the Phone element, and the file was saved. After a few more tests against the data (and some debugging strings output to the console for verification that were later removed), I submitted the script to the server.
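The validate-then-correct flow described above (test the field against a format regex, and only rewrite it when the test fails) can be sketched compactly. This is a Python illustration of the same logic, not the PowerShell contest entry itself, and the function names are mine:

```python
import re

# The required output format: (###) ###-####
PHONE_FORMAT = re.compile(r"^\(\d{3}\) \d{3}-\d{4}$")

def convert_phone_string(raw):
    """Strip every non-numeric character, then rebuild the required format."""
    digits = re.sub(r"\D", "", raw)
    return "({}) {}-{}".format(digits[0:3], digits[3:6], digits[6:10])

def normalize_phone(raw):
    """Leave already-valid fields untouched; correct everything else."""
    if PHONE_FORMAT.match(raw):
        return raw
    return convert_phone_string(raw)
```

The key point the judges were after is in normalize_phone: a field that already matches the format is returned unmodified, rather than being blindly rewritten.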
During the first half of the competition, I wasn’t as worried about the time. My first thoughts were to get something saved to the server. Once I got rolling with a workable solution, I spent all my time trying to get it formatted properly. I realized at one point that I was doing the same “work” in two different places. This prompted me to write the secondary function that I put in the BEGIN block of the script. Once I did that, it made the code look a lot cleaner, which is always helpful when you need to figure out what is going on in your scripts. I was able to submit my final version with about five to ten minutes to spare.
Time was a factor in the solution. If I had more time to work on the script, I would have done a few more things:
- Passed the XML data and the file name to another function that would have done nothing but validate, update, and save the information back.
- Set up file testing (Test-Path) to verify that the file actually exists.
- Might also have included a parameter set to handle in-memory XML as well.
I had a blast with this competition. A friend and colleague of mine, David Mitchell (@surgeterrix), was able to go with me. I had mentioned the competition to him before the event. He told me that he didn’t feel knowledgeable enough about Windows PowerShell to enter. I finally talked him into it. After the submissions were closed, he told me that he had fun working on it as well…and that he learned something new about Windows PowerShell. I believe this is the biggest key to these types of competitions. The things you learn from working out the scenarios far outweigh the prizes at the end. Don’t get me wrong—I am enjoying the bragging rights over Glenn Sizemore (@glnsize), but it is just plain fun to compete and learn at the same time. I dig at Glenn a little, but he was formidable competition, to say the least. The difference in our scores was only 0.1 point.
================ SOURCE CODE =====================
Function Update-PhoneNumber
{
<#
.SYNOPSIS
Updates the phone number format to a pre-defined format in an XML document.
.DESCRIPTION
This function will read a single or multiple XML files of user objects. The phone number field will be read and forced to the format: (###) ###-####. The XML data will be saved back to the original file when complete.
.PARAMETER File
Accepts System.IO.FileInfo objects from the pipeline to process multiple files.
.PARAMETER FileName
Loads the specified file and processes the phone number field.
.INPUTS
System.IO.FileInfo – Using the Get-ChildItem cmdlet, you can specify a group of XML files to process.
String – Using a single fully qualified file name to process a single file.
.OUTPUTS
XML document saved back to the original file name.
.EXAMPLE
c:\PS> Update-PhoneNumber -FileName c:\temp\users\user0.xml
This example will read in the user0.xml file from the specified directory and force the format for the phone number.
.EXAMPLE
c:\PS> Get-ChildItem -Path C:\Temp\Users -filter *.xml | Update-PhoneNumber
This example will read all XML files in the C:\Temp\Users directory and update the phone number format and save the files back to the same location, overwriting the original data.
#>
[CmdletBinding()]
Param(
    [Parameter(Mandatory=$true, Position=0, ParameterSetName="FileInfo", ValueFromPipeline=$true)]
    [System.IO.FileInfo]$File,

    [Parameter(Mandatory=$true, Position=0, ParameterSetName="FileName")]
    [string]$FileName
)
BEGIN
{
    # Strips all non-numeric characters from the supplied string and
    # reformats the remaining digits as (###) ###-####.
    Function Convert-PhoneString
    {
        Param([string]$phoneString)
        $phoneNumber = ""
        foreach ($char in $phoneString.ToCharArray())
        {
            if ($char -match "\d")
            {
                $phoneNumber += $char
            }
        }
        return ([int64]$phoneNumber).ToString("(###) ###-####")
    }
}
PROCESS
{
    switch ($PSCmdlet.ParameterSetName)
    {
        "FileInfo"
        {
            $xmlFile = [xml](Get-Content $File.FullName)
            if (-not ($xmlFile.user.phone -match "^\(\d{3}\)\s\d{3}-\d{4}$"))
            {
                $xmlFile.user.phone = Convert-PhoneString $xmlFile.user.phone
                $xmlFile.Save($File.FullName)
            }
        }
        "FileName"
        {
            $xmlFile = [xml](Get-Content $FileName)
            if (-not ($xmlFile.user.phone -match "^\(\d{3}\)\s\d{3}-\d{4}$"))
            {
                $xmlFile.user.phone = Convert-PhoneString $xmlFile.user.phone
                $xmlFile.Save($FileName)
            }
        }
    }
}
}
Jonathan’s complete script is available through the Scripting Guys Script Repository.
Well, that is it. The first ever Iron Scripter event was a success, and, as you can see, the competition was tremendous. Jim will be hosting the second Iron Scripter event at Windows PowerShell Saturday #003 in Atlanta (Alpharetta, Georgia) this Saturday, October 27, 2012. There are still tickets available. Come check it out—it will be a
Source: https://devblogs.microsoft.com/scripting/powershell-saturday-the-iron-scripter/
XMLC
Over the last few months, we have looked at a variety of methods for creating web applications using server-side Java. We started with simple servlets and then moved onto JavaServer Pages (JSPs). In order to remove Java code from our JSPs, we began to use JavaBeans, objects whose methods are automatically available to our pages.
But you can only go so far with JavaBeans, which is where custom actions come in. These actions, which look like XML tags and attributes in our JSPs, are tied to the methods of a Java class. In other words, placing a tag in our JSP can effectively invoke one or more methods. Combining custom tags with beans allows us to remove quite a bit of the Java code from our JSPs.
But in the end, what have we accomplished? As we saw last month, intelligent use of custom actions means creating our own mini-language, with its own loops, conditionals and variables. Writing our own tags saves graphic designers from having to use Java and allows us a greater separation between form and content. But it does not go nearly far enough in solving problems.
One clever solution is part of the Enhydra application server, about which I will be writing over the next few months. XMLC, or the XML compiler, turns XML files (including HTML and XHTML files) into Java objects. By invoking methods on these objects, we can modify the HTML that is eventually produced.
XML, as you have probably heard by now, is the extensible markup language. What began as a simple and small standard several years ago has ballooned into a veritable alphabet soup of standards and proposed standards.
But the core of XML has remained the same, allowing people to create their own markup languages using a uniform syntax. XML is not meant to be used directly; rather, it is meant to let you create your own markup languages. Because those markup languages are based on XML, they have a well-understood syntax that can be verified by any XML parser. Moreover, if you define a document type definition (DTD) for your markup language, a validating parser can ensure that the elements and attributes are within accepted norms.
HTML and XML are both standards of the World Wide Web Consortium (W3C), have a similar syntax and are often discussed in the same breath. But in fact, HTML is just one markup language, while XML allows you to create your own languages. More significantly, HTML has a much looser syntax than XML, thanks in no small part to historical factors. The following is thus legal HTML: <img src="foo.png">.
But because every tag must be explicitly closed in XML-derived languages, this would be illegal in an XML document. Instead, we would have to say: <img src="foo.png"/>.
In order to bridge the gap between HTML and XML, the W3C has issued a recommendation known as XHTML, the XML implementation of HTML. While there are indeed various benefits to the use of XHTML, the biggest one is that XML tools will now work on our HTML documents.
Of course, this means that our XHTML documents will look a bit more formal than the HTML documents we might be used to writing. While HTML allows us to be sloppy, using <P> to separate paragraphs, XHTML is much stricter, forcing us to begin paragraphs with <p> and end them with </p> (element names in XHTML are lowercase). Attribute values must also appear in double quotes, which many people fail to use when working with straight HTML.
While XHTML might be a pain for humans, it actually reduces the load on programs by making the syntax more regular, and thus easier to read and write. But the biggest benefit is the fact that XHTML documents can now be treated as XML documents.
XML documents are trees, which should ring a bell for those of you who studied computer science in college. Trees are remarkably easy to work with in theory, but the practice can be a bit tricky sometimes, depending on the way in which the interface is implemented.
There are two popular and cross-platform APIs for working with XML: SAX (the Simple API for XML) is designed to work with incoming streams of XML data, allowing it to be small and efficient. The DOM (document object model), by contrast, gives us access to the entire document tree at once. This allows us to traverse and modify nodes, including adding new nodes and removing old ones. However, it also means that the entire document must be loaded into memory before we can begin to work with documents using the DOM. This makes it more powerful than SAX, but also slower and more resource-intensive.
XMLC works by converting an XML file, normally written in HTML or XHTML, into a Java class that creates and manipulates a DOM tree. You can use standard DOM methods to add, modify and remove nodes on the tree, thus changing the document that will eventually be output.
But the truly clever idea in XMLC is the use of HTML “id” attributes. When the XMLC compiler sees an id attribute, it creates methods that allow us to retrieve and modify the text contained within that element. The site designers thus work with HTML, identifying areas of dynamic text by giving them unique identifiers. When the designers have finished with their mockup of the original HTML page, they compile it (using XMLC) into a Java class. Developers then create servlets that instantiate that class, use methods to replace the mockup text with dynamically generated content and send the document to the user's browser.
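Underneath, XMLC's generated setText methods amount to a DOM lookup by id followed by a text-node swap. A rough sketch of that underlying operation, written here in Python with minidom for brevity (XMLC itself emits Java, and this helper name is invented for illustration):

```python
from xml.dom.minidom import parseString

def set_text_by_id(doc, target_id, new_text):
    """Replace the text content of the element whose id attribute matches."""
    for node in doc.getElementsByTagName("*"):
        if node.getAttribute("id") == target_id:
            # Drop the designer's mockup children, insert one fresh text node.
            while node.firstChild is not None:
                node.removeChild(node.firstChild)
            node.appendChild(doc.createTextNode(new_text))
            return True
    return False

doc = parseString('<body><p id="firstpara">This is a paragraph.</p></body>')
changed = set_text_by_id(doc, "firstpara", "This has been changed")
```

The mockup text survives editing and previewing by designers, and is only replaced at request time, which is the contract that lets the two groups work in parallel.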
The basic idea is that the designers do not work on hybrids of text and HTML, but rather on mockups of the final output. So long as the id attributes do not change, the HTML file and servlet can evolve in parallel, with neither designers nor developers waiting for their counterparts.
As I mentioned above, XMLC is one element of the Enhydra application server. The 3.x version of Enhydra is considered to be production-ready and includes a copy of XMLC that most users will find more than adequate. Because I am particularly interested in Enhydra for working with Enterprise JavaBeans (EJB), I have been working with the beta version of 4.x, otherwise known as Enhydra Enterprise. By the time you read this, the final release of Enhydra Enterprise should be available, giving web developers an open-source, production-quality J2EE-compliant application server.
To work with XMLC, I downloaded the Enhydra Enterprise beta, a 15.7MB file named enhydra4.0.tar.gz. Open this file, and you will find a wealth of libraries, applications and documentation for the Enhydra application server. We will ignore much of this for now, concentrating on XMLC for the time being.
Almost all of Enhydra is written as Java classes invoked from shell scripts. In order for the shell scripts to find the Java classes, they must be configured for your particular installation. You can do this by entering the Enhydra directory (enhydra4.0 on my system) and running the configure script:
./configure /usr/java/jdk1.3
configure normally takes a single argument—the root directory of your JDK 1.3 installation. While earlier versions of Enhydra (and particularly earlier versions of Enhydra Enterprise) wouldn't work with JDK 1.3, current versions will only work with 1.3. Since JDK 1.3 has a number of other benefits, and a Linux version is supported by Sun, it is probably a good idea to install it.
If you have installed Enhydra somewhere other than /usr/local/enhydra, you should probably set the ENHYDRA environment variable to your installation directory.
Full use of XMLC depends on placing three different .jar files in your CLASSPATH. Since we will be concentrating on XMLC for the rest of this article, we should probably add them now, using bash syntax:
export CLASSPATH=$ENHYDRA/lib/xmlc.jar:\ $ENHYDRA/lib/enhydra.jar:\ $ENHYDRA/lib/xmlc-support.jar
If you're like me, you will want to have a number of items in your CLASSPATH in addition to Enhydra-related items. Here is how I set my CLASSPATH, for instance:
export CLASSPATH=$ENHYDRA/lib/xmlc.jar:\ $ENHYDRA/lib/enhydra.jar:\ $ENHYDRA/lib/xmlc-support.jar:\ $TOMCAT_HOME/classes:\ $TOMCAT_HOME/lib/servlet.jar:\ /usr/share/pgsql/jdbc7.1-1.2.jar:\ .

Notice how I placed the Enhydra .jar files before the others on my system in order to avoid potential problems with conflicts. Since Enhydra has the newest versions of some classes, such as those having to do with the DOM, they should take precedence.
Note that not all three Enhydra-provided .jar files are necessary for each stage of working with XMLC. However, I found it convenient to include all of them at all stages in order to avoid unpleasant surprises later on.
Now that we have installed everything we need to work with XMLC, let's try it with a simple HTML file:
<html> <head><title>This is a title</title></head> <body> <h1>This is a headline.</h1> <p id="firstpara">This is a paragraph.</p> <img src="foo.gif"/> <p>This is a second paragraph.</p> </body> </html>
While XMLC works just fine with straight HTML files, XHTML is a better idea because it stops us from generating files that the DOM cannot represent. For example, XML forbids overlapping tags:
<i><p>Wow</i>, he thought.</p>

The above is tolerable HTML but is illegal XML and XHTML. So while your web browser can somehow handle this HTML and make sense of it, XMLC will generate a warning indicating that it is discarding what it considers to be a useless closing tag. XMLC will often warn you when your HTML is not well formed, helping you to identify potential problems. While you might not have to consider your document's structure when you are writing simple HTML documents, the manipulations that you can perform with XMLC require that you have a clear understanding of how your document will be rendered.
The first paragraph in the previous sample statement is identified with the id attribute “firstpara”. We will soon see how we can manipulate that text from within a Java program, using the id as a lever into the document.
To turn our document into a Java class, we invoke the xmlc program. Assuming that our above HTML file was called foo.html, we can say:
$ENHYDRA/bin/xmlc -parseinfo -verbose -keep foo.html
This turns foo.html into a Java source file called foo.java, which is in turn compiled into foo.class. The -keep argument retains foo.java, rather than deleting it once it has been compiled into foo.class. And while they are unnecessary, I like to use -parseinfo and -verbose when working with xmlc, if only to get some visual feedback on the compilation process.
The Java source code created by XMLC is fairly long and boring, if well-commented. For those of us who want to modify foo.html, the most important parts of foo.java are the getElementFirstpara() and setTextFirstpara() methods. The former returns the text associated with the id “firstpara”, while the latter allows us to swap that text with an arbitrary string.
Listing 1 contains the source code to a small command-line Java class (PrintFoo.java) that prints the contents of the Java-ized version of foo.html. Before printing it, it uses setTextFirstpara() to modify the output:
myfoo.setTextFirstpara("This has been changed");
Once we have made that change, we can display the document:
System.out.print(myfoo.toDocument());

We could traverse the DOM tree ourselves, looking for nodes with a certain id and modifying them manually. However, XMLC's convenience methods make it extremely easy and straightforward to modify such text.
If you have just run PrintFoo, you will notice that the output HTML is displayed without any of the original white space. The resulting document is harder for humans to read but is rendered identically by browsers. That said, I have always tried to keep my HTML documents formatted correctly for easier debugging, and it would be nice for XMLC to include a -preserve-whitespace option.
From what we have seen so far, it would seem that XMLC makes it easy to modify entire paragraphs but difficult to change a single word. However, XMLC takes advantage of the HTML “span” tag, which takes an id attribute and allows us to identify individual words, characters and images that we might want to modify. For example:
<P id="para">This is a paragraph, <span id="phrase">and this is a phrase</span>. </P>
When we compile this HTML using XMLC, we will be able to modify the contents of the entire paragraph using the setTextPara() method and the individual phrase using the setTextPhrase() method.
Now that we have seen how to work with XMLC from the command line, let's look at a servlet that accomplishes the same task. For starters, our simple PrintFooServlet servlet will receive an HTTP request and will return a copy of the document.
Listing 2 contains a copy of the servlet that displays a foo.html. Like its command-line counterpart, it creates an instance of our “foo” class, modifies some of its text and then writes a textual representation of the XML tree to an output stream. In this particular case, however, the output stream is connected to the user's browser. The user thus sees the modified template without knowing that two Java classes (and an original HTML document) were involved.
Listing 2. PrintFooServlet.java
For our servlet to work, I needed to put a copy of foo.class in a directory located under the Jakarta-Tomcat servlet engine's CLASSPATH environment variable. I chose to put it in $TOMCAT/classes, at the top level. If this were a production class, I would undoubtedly want to put it in a more intelligent place, taking advantage of Java's hierarchical namespace. However, I executed xmlc without specifying a package, meaning that foo.class must be put in the top-level namespace. In order to place foo.class in the il.co.lerner namespace, I would have had to use the -class option:
$ENHYDRA/bin/xmlc -class il.co.lerner.foo\ -parseinfo -verbose -keep foo.html
With foo.class in $TOMCAT/classes, I was able to compile PrintFooServlet.java successfully. Now the only remaining challenge was to execute this servlet and display my modified HTML page. Once again, I needed to modify the CLASSPATH, but this time the CLASSPATH in need of change was that of the Tomcat servlet engine, which executes servlets on our behalf. I modified $TOMCAT/bin/tomcat.sh such that just before it exports its CLASSPATH, we add the three Enhydra-supplied .jar files and restarted Tomcat. Moments after pointing my browser at the servlet, I was delighted to see a modified version of my original HTML file on my screen.
It is easy to see how we could populate a page with information taken from a relational database. For example, here is a small PostgreSQL table that we can use to store a different saying for each calendar day:
CREATE TABLE DailySayings ( date TIMESTAMP NOT NULL, saying TEXT NOT NULL, UNIQUE(date) )
Now let's insert a number of sayings into our system:
INSERT INTO DailySayings(date, saying) VALUES (CURRENT_DATE, 'A bird in the hand is worth two in the bush.');
INSERT INTO DailySayings(date, saying) VALUES (CURRENT_DATE+1, 'A penny saved is a penny earned.');
INSERT INTO DailySayings(date, saying) VALUES (CURRENT_DATE+2, 'The rain in Spain falls mainly in the plain.');

To retrieve today's saying, we merely need the following query:
SELECT saying FROM DailySayings WHERE date = CURRENT_DATE

In order to write a servlet that displays today's saying, we will need two classes: a template that we will create with XMLC (saying.html, which will be compiled into saying.class) and another that will load and manipulate the template (DailySaying.java). We will agree in advance of writing our XMLC document and our manipulation class that the id "saying" will link the two together.
Our XMLC document is fairly straightforward:
<html> <head><title>Today's saying</title></head> <body> <h1>Today's saying</h1> <p>And now, as you requested, today's saying: <span id="saying">Saying Goes Here</span>.</p> </body> </html>
I compiled this HTML document into the Java class il.co.lerner.saying, keeping around the .java file just for fun:
$ENHYDRA/bin/xmlc -class il.co.lerner.saying\ -parseinfo -verbose -keep saying.html

I then copied the resulting saying.class file into $TOMCAT_HOME/classes/il/co/lerner, where I keep my servlet-related classes.
Once I installed my document, I had to write a manipulation class. This class executes the SQL query that we saw above, retrieving the results and sticking them into our compiled XMLC document. Listing 3 [see Listing 3 at] contains the source code for our servlet, which I compiled and put into an active servlet context on my Tomcat server. After restarting Tomcat and Apache, I was able to retrieve today's saying via my web browser, with the SQL results instantiated into the HTML document.
When I first began to look into XMLC, I had my serious doubts about its viability. After years of working with hybrid templates, it just seemed too weird to turn an HTML file into a Java class, only to manipulate that class using the DOM. And indeed, it takes significantly greater resources to fire up a DOM parser than to simply display a file.
As I have begun to work with XMLC, however, I am increasingly aware of its advantages over such templates. In essence, XMLC forces designers and developers to create a contract, or API specification, between their documents and programs. Once this API is in place, it cannot easily be changed, which is not necessarily a bad thing. Most importantly, the stability of the API between designers and developers allows them to work in parallel, barely interfering with each other's work.
Because a Java manipulation class can modify the HTML of a compiled document in any way it chooses, we can easily imagine a situation in which we bring in three classes at a time: a header file, the main body of the document and a footer file. Our class could then use DOM methods to attach the header to the beginning of the document and the footer to the end. In such a way, we could add global formatting to our site without having to copy boilerplate text to the top of each file.
There are, of course, a number of irksome details when working with XMLC. One is that it quickly gets boring and frustrating to write one servlet per HTML file. True, we could write a single servlet that takes the name of a file in its query string, acting almost as a document template for a variety of classes created by XMLC. Perhaps I have not yet explored Enhydra enough to have discovered the answer to this question, and perhaps Enhydra developers quickly get used to creating two Java classes for each page they wish to display. Regardless, this can quickly create an overwhelming number of classes, even on a small- to medium-sized site.
The biggest problem that I see with XMLC is the lack of a high-level API to manipulate HTML (and XML, for that matter). One of the FAQs for XMLC is “How do I add a row to an HTML table?” Such a task, which is trivial to accomplish with standard HTML, quickly becomes a burden with XMLC. You must first find the bottom of the table to which you want to add rows and then add individual nodes (and attributes) to that node. It has a very non-HTML feel to it and forces the developer to think of nodes when he or she would prefer to think in terms of HTML. Given that Enhydra includes an API to create SQL queries using Java methods, I would imagine that a similar API for HTML manipulation wouldn't be too difficult.
XMLC is an intriguing technology that sits at the heart of the Enhydra application server. XMLC forces developers (and designers) to consider how they will interact before they begin working and then allows them to work independently. While this mode of operation might throw experienced template users off balance, it quickly becomes second nature and feels more natural than I ever expected.
Indeed, the fact that Zope's ZPT uses a similar method for separating form from content probably points to a trend within the web development community. We can expect to see more XMLC-like systems in the near future. If we're lucky, perhaps there will even be some standardization of these templates, so that designers can move across systems without having to learn the subtle differences between them.
While XMLC is important, Enhydra has many other features that make it worth investigating. Next month we will continue to look into Enhydra, looking at ways in which it speeds up the writing of server-side database applications.
Source: https://www.linuxjournal.com/article/4783
In Java Spark, I can use either keyBy() or mapToPair() to create a key for a JavaRDD. Using keyBy() makes my intentions clearer and takes an argument function with a bit less code (the function returns a key rather than a tuple). However, is there any performance improvement in using keyBy() over mapToPair()? Thanks
You can browse the difference in the source:
def mapToPair[K2, V2](f: PairFunction[T, K2, V2]): JavaPairRDD[K2, V2] = {
  def cm: ClassTag[(K2, V2)] = implicitly[ClassTag[(K2, V2)]]
  new JavaPairRDD(rdd.map[(K2, V2)](f)(cm))(fakeClassTag[K2], fakeClassTag[V2])
}
And:
def keyBy[U](f: JFunction[T, U]): JavaPairRDD[U, T] = {
  implicit val ctag: ClassTag[U] = fakeClassTag
  JavaPairRDD.fromRDD(rdd.keyBy(f))
}
Which calls:
def keyBy[K](f: T => K): RDD[(K, T)] = withScope {
  val cleanedF = sc.clean(f)
  map(x => (cleanedF(x), x))
}
They basically both call map and generate a new RDD. I see no significant differences between the two.
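Since keyBy(f) is literally map(x => (f(x), x)), the equivalence is easy to demonstrate with plain collections. A toy Python sketch of what each call produces (this mimics the semantics only; it is not the Spark API):

```python
def key_by(f, records):
    """Mimic RDD.keyBy: pair each record with the key that f computes for it."""
    return [(f(x), x) for x in records]

def map_to_pair(f, records):
    """Mimic mapToPair: f itself must return the (key, value) tuple."""
    return [f(x) for x in records]

words = ["spark", "rdd", "keyBy"]
pairs_via_key_by = key_by(len, words)
pairs_via_map = map_to_pair(lambda w: (len(w), w), words)
```

Both produce the same list of (key, record) pairs; keyBy just saves you from writing the tuple yourself.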
Source: https://codedump.io/share/eYjhXghSqXhe/1/spark-keyby-vs-maptopair
Adds the specified path to the Python system path if it is not already there. Takes into account terminating slashes and case (on Windows).
Returns -1 if the path does not exist, 1 if it was added, and 0 if it was not (because it is already present).
Discussion
Modules must be on the Python system path before they can be imported. But we don't always want a huge permanent path, because that slows things down. This simple function dynamically adds a path.
It has been corrected to meet all shortcomings addressed in the comments.
Improvements. There are two problems with this code: you should call os.path.abspath and os.path.exists on the argument, and also make a second test with os.sep added (sys.path sometimes contains paths with or without a trailing [back]slash).
Windows is not case-sensitive, and accepts forward and backward slashes. Extending the wish to avoid duplication, you might want to standardize the case and the separator you use before committing the change to sys.path in Microsoft environments.
Updated. I have updated the code to reflect the shortcomings addressed in the previous two comments. It has been tested on Linux and Win32. Thanks!
Dynamic or Static? First off, thanks for posting the code. Now, on to my gripe. The concept of dynamically adding a path suggests adding a path expression on the fly, i.e., prompting a user for a path expression.
How does this function dynamically add a path as you suggest? The code suggests a static reference toward a file system folder/sub-directory within the namespace of a given file, either MS Windows or a Unix variant.
The explanation of your function might confuse first-time Python developers. Perhaps the following explanation might improve the situation:
For Python to import a module, the Python interpreter requires a reference to the location of the module within the namespace of your file system, i.e., Python needs to know the sub-directory/folder where a module exists. One method that informs the Python interpreter regarding the whereabouts of a module is to use the sys.path.append('your_path_here') function of the sys module that is part of the Python environment.
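The recipe's source is not reproduced in this excerpt. A sketch matching the documented behavior (the function name and the exact normalization details here are my assumptions, not the recipe's code):

```python
import os
import sys

def add_sys_path(new_path):
    """Add new_path to sys.path only when needed.

    Returns -1 if the path does not exist on disk, 1 if it was added,
    and 0 if an equivalent entry was already present.
    """
    new_path = os.path.abspath(new_path)  # also resolves trailing slashes
    if not os.path.exists(new_path):
        return -1

    def canonical(p):
        # normcase lowercases and unifies separators on Windows;
        # it is a no-op on case-sensitive filesystems.
        return os.path.normcase(os.path.abspath(p))

    if any(canonical(entry) == canonical(new_path) for entry in sys.path):
        return 0

    sys.path.append(new_path)
    return 1
```

Comparing canonicalized forms covers the shortcomings raised in the comments: relative versus absolute entries, trailing separators, and case or slash direction on Windows.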
Source: http://code.activestate.com/recipes/52662/
|
Patches item #1093253, was opened at 2004-12-30 13:50
Message generated for change (Comment added) made by theller
You can respond by visiting:
Category: Core (C code)
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Thomas Heller (theller)
Assigned to: Nobody/Anonymous (nobody)
Summary: Refactoring Python/import.c

Initial Comment:
This patch refactors Python/import.c. find_module() was changed to return a PyObject* pointer which contains the module's pathname, instead of filling out a char* buffer. load_module() accepts the PyObject* pathname instead of a char*. The patch is probably missing some error checking, and the 8-character hack for loading extensions on OS/2 is not implemented, but the test case runs without errors on Windows XP Pro. If a change in the spirit of this patch is accepted, I'm willing to further work on it so that eventually unicode entries on sys.path, which cannot be encoded with the default file system encodings, will work as expected (currently they don't). See also:

----------------------------------------------------------------------

>Comment By: Thomas Heller (theller)
Date: 2005-01-06 21:17

Message:
Logged In: YES
user_id=11105

For easier reading, I've attached the complete, new Python/import.c file.

----------------------------------------------------------------------

Comment By: Thomas Heller (theller)
Date: 2005-01-04 20:20

Message:
Logged In: YES
user_id=11105

New patch attached with multiple implementations of case_ok, and more error checking: import.c.patch2. Slightly tested on OS X, Linux, Windows. The case_ok function still needs to be fixed for RISCOS (which I cannot test).

----------------------------------------------------------------------

Comment By: Michael Hudson (mwh)
Date: 2004-12-31 15:17

Message:
Logged In: YES
user_id=6656

Perhaps there should be multiple implementations of case_ok ... i.e.

    #if PLAT1
    int case_ok(...) { ... }
    #elif PLAT2
    int case_ok(...) { ... }
    #endif

the current spaghetti is confusing, even by the standards of import.c...

----------------------------------------------------------------------

Comment By: Thomas Heller (theller)
Date: 2004-12-31 15:11

Message:
Logged In: YES
user_id=11105

Yes, I overlooked that the initialization of the variables is inside an #if defined(MS_WINDOWS) block. Probably it would be better to leave the signature of case_ok() as before and call it through a wrapper which converts the arguments. I will prepare a new patch in a few days.

----------------------------------------------------------------------

Comment By: Michael Hudson (mwh)
Date: 2004-12-31 14:57

Message:
Logged In: YES
user_id=6656

Applied the patch and built on OS X. This was the result:

    $ ./python.exe
    'import site' failed; use -v for traceback
    ../Python/import.c:1496: failed assertion `dirlen <= MAXPATHLEN'
    Abort trap

dirlen is 796092779, which seems fishy :) An uninitialized variable, maybe? Haven't looked, really...

----------------------------------------------------------------------

You can respond by visiting:
|
https://mail.python.org/pipermail/patches/2005-January/016657.html
|
CC-MAIN-2016-50
|
refinedweb
| 434
| 64.2
|
[
]
Sameer Paranjpye updated HADOOP-3002:
-------------------------------------
Fix Version/s: (was: 0.16.2)
> HDFS should not remove blocks while in safemode.
> ------------------------------------------------
>
> Key: HADOOP-3002
> URL:
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Reporter: Konstantin Shvachko
> Priority: Critical
> Fix For: 0.17.0
>
>
> I noticed that data-nodes are removing blocks during a rather prolonged distributed upgrade
when the name-node is in safe mode.
> This happened on my experimental cluster with accelerated block report rate.
> By definition in safe mode the name-node should not
> - accept client requests to change the namespace state, and
> - schedule block replications and/or block removal for the data-nodes.
> We don't want any unnecessary replications until all blocks are reported during startup.
> We also don't want to remove blocks if safe mode is entered manually.
> In heartbeat processing we explicitly verify that the name-node is in safe-mode and do
not return any block commands to the data-nodes.
> Block reports can also return block commands, which should be banned during safe mode.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
|
http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/200803.mbox/%3C1066129547.1206130045279.JavaMail.jira@brutus%3E
|
CC-MAIN-2014-52
|
refinedweb
| 193
| 56.55
|
Hello,
in the user interface of Subclipse there's no distinction between
connecting to an existing project in a repository and doing the initial
import of a project. I just checked in a bogus directory because I made
a typo in the path.
Before presenting the dialog with the commit message for an Import,
Subclipse should check if the path already exists in the repository. If
it doesn't, continue with Import. If it does, present the "already
exists, doing checkout" dialog.
Alternatively, add a separate action for connecting to an existing repo
location, which fails if it doesn't exist (accidental commit impossible).
Also it would be nice if the checkout could run in the background.
Cheers
Andreas
------------------------------------------------------
To unsubscribe from this discussion, e-mail: [users-unsubscribe_at_subclipse.tigris.org].
Received on 2010-09-18 08:24:24 CEST
This is an archived mail posted to the Subclipse Users
mailing list.
|
https://svn.haxx.se/subusers/archive-2010-09/0022.shtml
|
CC-MAIN-2017-09
|
refinedweb
| 151
| 55.64
|
Portal: Event Handling
Overview
This document gives an overview of the event handling of the portal engine.
The sample portal that comes with the Cocoon distribution contains several working samples for event handling.
Introduction
The portal engine uses an event-based mechanism for communication. Each and every change or action is propagated through a corresponding event. Examples are changes in status, changes in the layout, or the user clicking a link/submitting a form. The portal uses the publish/subscribe paradigm: each component interested in a specific event can subscribe itself for this type of event. And of course each component is able to send out events.
The processing of a portal request (a request sent to the Cocoon portal) is divided into two phases: event handling and rendering. In the first phase all events are processed. For example, if the user clicks a link, this triggers an event that is published. Any receiver of this event might in turn fire new events, which are published as well.
When all events are processed, the first phase is finished and the second phase, the rendering, is started. At this point of time all event handling and all information exchange should be finished.
Events and the request/response cycle
In the Portal, an event is represented by a Java object. This event object contains all necessary information to process the event. So, in most cases an event contains the object to modify, what to modify and the value to set. For example, the minimize event for minimizing a coplet contains the coplet, the information to change the window state, and the value "minimize" to set.
There are different types of events: a type for changing the window state, a type for removing a coplet, a type for links that are clicked by the user etc. Each event type is represented by a Java class (or interface).
A component that processes, for example, the window state request is subscribed to this minimize event (or rather: the corresponding class/interface), and when such an event is fired, it changes the window state of the coplet to minimize. All the data this component needs is stored in the event. This is a very important detail: the event is not processed (in this case) by the object that is changed (the coplet) but by a central subscribed component that changes the coplet. This follows from the publish/subscribe mechanism used: many components in the portal can subscribe to the same event type if they are interested, so each component that is interested in an event needs all information about this event. That's why all data is stored in the event itself.
Let's have a look how such an event is created:
Event event;
event = new ChangeCopletInstanceAspectDataEvent(
    copletInstanceData, "size", SizingStatus.STATUS_MINIMIZED);
Event is just a marker interface, the concrete implementation ChangeCopletInstanceAspectDataEvent implements this interface and requires three pieces of information: the CopletInstanceData, the information about what to change (size) and the new value.
All events must implement the marker interface Event, so if a component is interested in all Events it could subscribe itself using this event type.
If you want to fire an event, you have to publish it. Therefore you need the event manager, a central portal component. You can lookup this component, fire the event and release the manager again. If you fire the event, the event is directly published to all subscribed components.
EventManager manager = null;
try {
    manager = serviceManager.lookup(EventManager.ROLE);
    manager.send(event);
} finally {
    serviceManager.release(manager);
}
As noted above, the event will be fired directly. But usually in a portal application, events are not fired directly from code but are invoked by some user action. This means the user clicks on a link in the browser, the request is targeted at Cocoon, and the portal invokes (fires) the correct events.
For this, a link (or a form action) must know which event it should fire if it is clicked. So, in other words, a link is associated with a concrete event. But on the other side, an event is a Java object and we can only use strings in URLs. So how does this work?
The part of the portal that generates the link, creates the event object with all necessary data, transforms this event into a usable URI and this URI is the target of the link. When the user clicks on this link, the portal transforms the URI back into the Java event object and fires the event.
The transformation Event->URI->Event is done by another portal component, the link service. Most portal components (apart from the event manager) are available through another central component, the portal service, so you need to have access to the portal service component. (Renderers, for example, don't have to look up the service themselves; they get it as a method parameter.)
PortalService service = null;
try {
    service = serviceManager.lookup(PortalService.ROLE);
    LinkService ls = service.getComponentManager().getLinkService();
    String uri = ls.getLinkURI(event);
    // create a link that references the uri
} finally {
    serviceManager.release(service);
}
That's all you have to do: create the event object, get the link service, transform the event into a URI using the service and then create the (html) link using the URI. Everything else is handled by the portal for you.
In addition, you can transform several events into one single link. So you can create links, the user can click, that do several things at the same time (minimizing one coplet and maximizing another one etc.). The link service offers corresponding methods for this.
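As a toy illustration of the event-to-URI direction (this is not Cocoon's actual encoding; the class, method, and parameter names below are all invented), each event can be rendered as one query parameter, so a single link carries several events:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Toy stand-in for the Event -> URI direction of the link service:
// every event becomes one query parameter, so one link can carry
// several events (e.g. minimize one coplet, maximize another).
public class ToyLinkService {

    public static String toUri(String base, Map<String, String> events) {
        if (events.isEmpty()) {
            return base;
        }
        String query = events.entrySet().stream()
                .map(e -> encode(e.getKey()) + "=" + encode(e.getValue()))
                .collect(Collectors.joining("&"));
        return base + "?" + query;
    }

    private static String encode(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }
}
```

The user's click on such a URI would then be decoded back into the event objects by the portal, which fires them all during the event-handling phase.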
Changing the State of a Coplet
In most cases you want to change the state of a coplet because the user performed an action. The portal engine provides you with some events that you can directly use.
- CopletJXPathEvent
- ChangeCopletInstanceAspectDataEvent
The CopletJXPathEvent requires again three pieces of information: the coplet instance data to change, the JXPath expression that defines the data to change and the value:
Event event = new CopletJXPathEvent(copletInstanceData, "attributes/username", username);
In the previous chapter, we already saw an example of the usage of the ChangeCopletInstanceAspectDataEvent.
It is of course possible that you write your own events for changing the state of a coplet. But in this case make sure that your own event implements the interface CopletInstanceEvent. This helps the portal engine in tracking if a coplet has been changed.
Subscribing to Events
If you are interested in events, you can subscribe to a specific event type. As events are Java objects, you subscribe for all events of a specific interface or class (and all of the subclasses). Subscribing is done using the event manager:
EventManager manager = null;
try {
    manager = serviceManager.lookup(EventManager.ROLE);
    manager.register(myComponent);
} finally {
    serviceManager.release(manager);
}
The component you subscribe must implement the Receiver interface, which is just a marker interface! But as the component is only interested in some particular events, the portal uses a reflection-based mechanism to query the component: whenever a component registers itself, the event manager uses reflection to search for all methods with the name inform and a signature with two parameters, the first being the event class and the second the portal service. This is an example:
public class MyComponent implements Receiver {
    public void inform(CopletInstanceEvent event, PortalService service) {
        // ...
    }
}
Now each time a CopletInstanceEvent occurs, this method is called. If the component is interested in different events, it can implement several inform methods, each with a different signature.
An example of such a component is available in the portal: it subscribes to all events dealing with coplets, so it uses CopletInstanceEvent as the class in its inform method.
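The registration mechanism just described can be sketched in plain Java. This is a simplified model, not Cocoon's implementation: the real inform methods take the PortalService as a second parameter, which is omitted here, and all class names below are illustrative.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Simplified model of the portal's receiver registration: scan a Receiver's
// public methods for inform(<some Event subtype>) and remember them, then
// dispatch each published event to every matching method.
interface Event {}
interface Receiver {}

class CopletInstanceEvent implements Event {}

class SimpleEventManager {
    private static final class Subscription {
        final Receiver receiver;
        final Class<?> eventType;
        final Method method;
        Subscription(Receiver r, Class<?> t, Method m) {
            receiver = r; eventType = t; method = m;
        }
    }

    private final List<Subscription> subscriptions = new ArrayList<>();

    void register(Receiver receiver) {
        for (Method m : receiver.getClass().getMethods()) {
            if ("inform".equals(m.getName())
                    && m.getParameterCount() == 1
                    && Event.class.isAssignableFrom(m.getParameterTypes()[0])) {
                subscriptions.add(new Subscription(receiver, m.getParameterTypes()[0], m));
            }
        }
    }

    void send(Event event) {
        for (Subscription s : subscriptions) {
            // only deliver to receivers whose inform() parameter type matches
            if (s.eventType.isInstance(event)) {
                try {
                    s.method.invoke(s.receiver, event);
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
    }
}

class CountingReceiver implements Receiver {
    int informed = 0;
    public void inform(CopletInstanceEvent event) { informed++; }
}
```

Because the lookup keys on the parameter type, a receiver declaring inform(Event ...) would receive every event, mirroring the "subscribe via the marker interface" behavior described above.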
Inter Coplet Communication
A very interesting feature of the portal is inter-coplet communication. The demo portal already has a simple sample where the name of an image selected in an image gallery is transferred to a different coplet.
Now, there is only one (minor) problem: in the Cocoon portal, coplets (or more precisely CopletInstanceData objects) are not components but just data objects. So a coplet can't directly register itself as a subscriber for events.
Remember the central component mentioned earlier that processes the change events for coplets? That is basically one possibility: if you want to pass information from one coplet to another, create a CopletJXPathEvent and pass the information to the other coplet.
Imagine a form coplet where the user can enter a city. When this form is processed by the form coplet, it can generate one (or more) CopletJXPathEvents and push the entered city information to a weather coplet and a hotel guide coplet. So, these two coplets display the information about the selected city.
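As a toy model of what a subscribed receiver does with such an event: the real portal applies the event's JXPath expression to the CopletInstanceData via Commons JXPath, while the classes and names below are invented for illustration and only handle the simple "attributes/<name>" paths used in this document's examples.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: a coplet instance holds attribute data, and a receiver applies
// a CopletJXPathEvent-style (path, value) pair to it.
class ToyCopletInstance {
    final Map<String, Object> attributes = new HashMap<>();
}

class ToyJXPathReceiver {
    void inform(ToyCopletInstance target, String path, Object value) {
        if (path.startsWith("attributes/")) {
            // e.g. "attributes/city" -> attributes.put("city", value)
            target.attributes.put(path.substring("attributes/".length()), value);
        } else {
            throw new IllegalArgumentException("unsupported path: " + path);
        }
    }
}
```

With this in place, the form coplet handling the city form would simply fire one such event per interested coplet (weather, hotel guide), and each gets the new value pushed into its instance data.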
The Coplet Transformer
Apart from the possibility to create events from within your Java code, it's also possible to create events from within a pipeline by using for example the coplet transformer. It listens for elements with the namespace "".
The coplet element
The coplet element has nothing to do with events :) It can be used to include information about the current coplet in the SAX stream:
... <coplet:coplet ...
The coplet element can only be used inside a coplet pipeline, but not in the main portal pipeline. The select attribute defines a JXPath expression that is used to fetch the value that is included in the stream.
The link element
The link element creates a link that will trigger an event if the user clicks this link:
... <coplet:link <coplet:link ...
This element generates an HTML link which will either trigger an event to change a coplet instance data or a layout object, based on the JXPath expression and the value provided.
Configuring Subscribers
In the previous chapters we saw one way to subscribe: dynamically in some Java code. This requires that - of course - this code is executed at some point in time. This is a solution for dynamic subscribers, meaning a subscriber that is only "available" if a specific feature of the portal is used. If the feature is available, the "feature" subscribes itself (or another component).
However, this adds an extra burden to the development of your own events and their subscribers. Therefore it is possible to configure subscribers in cocoon.xconf. These subscribers are instantiated and subscribed by the portal engine on startup of Cocoon.
You have two possibilities: you can subscribe either Avalon components or classes. In the first case, you configure the role of the component. Then the portal engine looks up this component and subscribes it.
If you configure a class, the portal engine creates an instance of this class using the no-argument constructor and subscribes this instance. For convenience, this instance can implement the Avalon lifecycle interfaces like LogEnabled or Serviceable.
The configuration takes place in the cocoon.xconf as a configuration for the event manager:
...
<component class="org.apache.cocoon.portal.event.impl.DefaultEventManager"
           logger="portal"
           role="org.apache.cocoon.portal.event.EventManager">
  ...
  <!-- add a new instance of each class as a receiver: -->
  <receiver-classes>
    <class name="org.apache.cocoon.portal.event.subscriber.impl.DefaultJXPathEventReceiver"/>
  </receiver-classes>
  <!-- add each component as a receiver (the component should be thread safe): -->
  <receiver-roles>
    <role name="org.apache.cocoon.portal.samples.location.LocationEventReceiver"/>
  </receiver-roles>
</component>
...
In the sample configuration above, one class is subscribed (the DefaultJXPathEventReceiver) and one Avalon component (the LocationEventReceiver).
So, if you write your own events and your own receivers you can either dynamically add them during execution or statically add them by configuration as shown above.
Further Information
The event.impl package contains all currently processed events, so you can study the events and see how to create them. In general most events are created inside the renderers, especially the renderer aspects that render specific details (e.g. the sizing buttons for a coplet). So, you can have a look at the code as well.
There are several transformers that help in creating events inside a Cocoon pipeline. For example the coplet transformer can be used to create links that contain events to change the status of a coplet or a layout object. The gallery sample uses this transformer as a demo.
|
http://cocoon.apache.org/2.1/developing/portal/events.html
|
CC-MAIN-2015-06
|
refinedweb
| 2,049
| 54.22
|
could someone give me an example of how to display a picture on the screen in Linux (jpg, bmp, whatever)
I found the example in the "examples" portion of the documentation but it doesn't seem to work for me
the first statement in the example:
#include "/usr/local/include/freebasic/FreeImage.bi"
throws an error:
ld: cannot find -lfreeimage
and I've checked "FreeImage.bi" is there...so
what is the simplest way...thankx
display JPG or BMP problem
Linux specific questions.
4 posts • Page 1 of 1
Re: display JPG or BMP problem
Try to install: libfreeimage-dev
Re: display JPG or BMP problem
We have fbimage; it doesn't need any extra runtime library, it's a static lib!
Programming->Libraries->fbimage
Joshy
Re: display JPG or BMP problem
If only 'bmp' is enough:
BTW: Why is this in section linux?
Code: Select all
#include "fbgfx.bi"
function load_bmp(file_name as string) as fb.image ptr
dim as long file_num = freefile()
dim as ulong image_width, image_height
dim as fb.image ptr pImage
if open(file_name, for binary, access read, as file_num) = 0 then
get #file_num, 18+1, image_width
get #file_num, 22+1, image_height
close #file_num
pImage = imagecreate(image_width, image_height) 'allocate image memory
bload(file_name, pImage) 'bitmap into memory
return pImage
else
return 0
end if
end function
screenres 800,600,32
dim as string bmp_file_name = "test.bmp"
dim as fb.image ptr pImage = load_bmp(bmp_file_name)
if pImage <> 0 then
put(20, 20), pImage, pset
imagedestroy(pImage)
else
print "Error. No such file?: " & bmp_file_name
end if
getkey()
|
https://www.freebasic.net/forum/viewtopic.php?f=5&p=283307&sid=5eabb956823de3112d83739e0d5f1fed
|
CC-MAIN-2021-31
|
refinedweb
| 297
| 63.29
|
Cloud Jump Game
OK guys... One issue remains :-(
The image frame is larger than the cloud itself. See:
I found a solution that may work. @ccc What do you think?
import Image, ImageDraw, random, scene
import numpy as np

class Cloud(scene.Layer):
    def __init__(self, parent=None):
        cloud_image = self.create_image()
        super(self.__class__, self).__init__(scene.Rect(*cloud_image.getbbox()))
        if parent:
            parent.add_layer(self)
        self.image = scene.load_pil_image(cloud_image)

    def generate_shapes(self, num_circles):
        shapes = []
        for i in xrange(num_circles):
            x = (i * 20 - ((num_circles / 2) * 30)) + 90
            y = ((random.random() - 0.5) * 30) + 15
            rad = random.randint(50, 100)
            shapes.append([x, y, rad])
        return shapes

    # found on ''
    def crop_image(self, img):
        image_data = np.asarray(img)
        image_data_bw = image_data.max(axis=2)
        non_empty_columns = np.where(image_data_bw.max(axis=0) > 0)[0]
        non_empty_rows = np.where(image_data_bw.max(axis=1) > 0)[0]
        cropBox = (min(non_empty_rows), max(non_empty_rows),
                   min(non_empty_columns), max(non_empty_columns))
        image_data_new = image_data[cropBox[0]:cropBox[1] + 1,
                                    cropBox[2]:cropBox[3] + 1, :]
        img = Image.fromarray(image_data_new)
        return img

    def create_image(self):
        num_circles = random.randint(5, 6)
        image_size = (220, 140)
        theImage = Image.new('RGBA', image_size)
        draw = ImageDraw.Draw(theImage)
        circles = self.generate_shapes(num_circles)
        for i in circles:
            r = i[2]
            bbox = (i[0], 40 - i[1], i[0] + r, 40 - i[1] + r)
            draw.ellipse(bbox, fill='rgb(90%,90%,90%)')
        for i in circles:
            r = i[2]
            bbox = (i[0], 40 - i[1] - 10, i[0] + r, 40 - i[1] + r - 10)
            draw.ellipse(bbox, fill='white')
        del draw
        return self.crop_image(theImage)

class MyScene(scene.Scene):
    def setup(self):
        self.cloud = Cloud(self)
        self.cloud.frame.x = self.bounds.w * 0.5
        self.cloud.frame.y = self.bounds.h * 0.8

    def draw(self):
        scene.background(0.40, 0.80, 1.00)
        scene.fill(0, 0, 0)
        scene.rect(*self.cloud.frame)
        self.root_layer.update(self.dt)
        self.root_layer.draw()

    def touch_began(self, touch):
        self.root_layer.remove_layer(self.cloud)
        self.cloud = Cloud(self)
        self.cloud.frame.x = self.bounds.w * 0.5
        self.cloud.frame.y = self.bounds.h * 0.8

scene.run(MyScene())
@Sebastian thank you so much for your help with this project. Will add you to the contributors list.
@techteej No worries! That's what I love about this forum; you help out if you can, and in return people will help you when you need it :)
A study on player death was created for those interested, to help us give the game an arcade-like feel. A great solution would close out issue #9.
Link updated from ccc's post:
Any animation experts out there with scene that can help us out?
I just made a
from six import StringIO
change to the code at to get the Travis CI tests to pass. I have not run this in a long time, so if you find that other changes are needed, please open a pull request.
|
https://forum.omz-software.com/topic/898/cloud-jump-game/24
|
CC-MAIN-2020-10
|
refinedweb
| 509
| 53.88
|
JSF 2 GETs Bookmarkable URLs
Read the other parts in this article series:
Part 1 - JSF 2: Seam's Other Avenue to Standardization
Part 2 - JSF 2 GETs Bookmarkable URLs
Part 3 - Fluent Navigation in JSF 2
Part 4 - Ajax and JSF, Joined At Last
Part 5 - Introducing JSF 2 Client Behaviors.
Describing view metadata with UI components
There are two important benefits to defining the metadata within the view template. First, it circumvents introducing yet another XML file with its own schema that developers would have to learn. More importantly, it allows us to reuse the UI component infrastructure to define behavior, such as registering a custom converter or validator, or to extract common view parameters into an include template.
Since we're using UI components to describe the view metadata, then it makes sense to treat the UIViewParameter like any other input component. In fact, it extends UIInput. That allows us to register custom converters and validators on a UIViewParameter without any special reservations. Here's an example:
<f:viewParam
<f:validateLongRange
</f:viewParam>
Note: Later in this series you'll learn that like input components, view parameters can enforce constraints defined by Bean Validation annotations (or XML), making the explicit validation tags such as this unnecessary.
But there is one caveat to embedding the view metadata in the template. Without special provisions, extracting the metadata would require building the entire view (i.e., UI component tree). Not only would this be expensive and unnecessary if the intent is not to render the view, it could also have side effects. When the component tree is built, value expressions in Facelets tag handlers get evaluated, potentially altering the state of the system.
To prevent these counteractions, the view metadata facet is given special treatment in the specification. Specifically, it must be possible to extract and build it separately from the rest of the component tree. Earlier, I mentioned that view parameters are only available in Facelets, and not JSP, because of an executive decision. There's also a technical reason why view parameters rely on Facelets. Only Facelets can provide the necessary separation between template parsing and component tree construction that allows a template fragment to be processed in isolation. The result of this operation is a genuine UI component tree, represented by UIViewRoot, that contains only the view metadata facet and its children. For all intents and purposes, it's as though the view template only contained this one child element.
Using the following logic, it's possible to retrieve the metadata for an arbitrary view at any point in time. This data mining will come in to play later when we talk about view parameter propagation.
String viewId = "/your_view_id.xhtml";
FacesContext ctx = FacesContext.getCurrentInstance();
ViewDeclarationLanguage vdl = ctx.getApplication().getViewHandler()
    .getViewDeclarationLanguage(ctx, viewId);
ViewMetadata viewMetadata = vdl.getViewMetadata(ctx, viewId);
UIViewRoot viewRoot = viewMetadata.createMetadataView(ctx);
UIComponent metadataFacet = viewRoot.getFacet(UIViewRoot.METADATA_FACET_NAME);
At this point you could retrieve the UIViewParameter components, which are children of the facet, to perhaps access the view parameter mappings. More likely, though, you'll be looking for your own custom components so you can execute custom behavior before the view is rendered (e.g., view actions).
The extraction of the view metadata is very clever because, while it only builds a partial view, it still honors Facelets compositions. That means you can put your metadata into a common template and include it. Using some creative arrangement, you can apply common metadata to a pattern of views. Here's an example:
<f:view>
<f:metadata>
<ui:include
...
</f:metadata>
...
</f:view>
You've learned that defining a view metadata facet provides the following services for JSF:
- Arbitrarily complex metadata, which can reuse existing component infrastructure
- Metadata is kept with the view, or in a shared template, instead of in an external XML file
- Can be extracted and processed without any side effects (idempotent)
- Common metadata declarations can be shared across multiple views
Now that you are well versed in the view metadata facet, it's time to work out a concrete example of view parameters in practice. We'll look at how to enforce preconditions and load data on an initial request using information from the query string. Then you'll learn how that information gets propagated as the user navigates to other views.
Weaving parameters into the life cycle
This article has alluded several times to the use case of loading a blog entry from a URL by passing the value of the id parameter to our managed bean on an initial request. Let's allow this scenario to play out. Here's the URL the user might request coming into the site:
We'll start by asking what we do with the value once it is assigned to the entryId property of the blog managed bean. One approach is to load the entry lazily as soon as it's referenced in the UI.
<h2>#{blog.blogEntry.title}</h2>
<p>#{blog.blogEntry.content}</p>
Here's what the managed bean would look like to support this approach:
@ManagedBean(name = "blog")
public class Blog {
private Long entryId;
private BlogEntry blogEntry;
public Long getEntryId() { return entryId; }
public void setEntryId(Long entryId) { this.entryId = entryId; }
public BlogEntry getBlogEntry() {
if (blogEntry == null) {
blogEntry = blogRepository.findEntry(entryId);
}
return blogEntry;
}
}
Of course, it doesn't make any sense to display an entry without an id (and could even lead to a NullPointerException). So we should really make the id request parameter required. We'll also add a message if it is missing.
<f:metadata>
<f:viewParam
<f:validateLongRange
</f:viewParam>
...
</f:metadata>
In the case a required request parameter is missing, you can display the error message using the <h:messages> tag. Conversion and validation failures are recorded as global messages since there's no view element with which to associate.
<h:messages
But these preconditions still don't stop the view from being rendered if a request parameter is missing or invalid. What we need is a way to execute an initialization method that parallels an action invocation on a postback. That would allow us to get everything sorted before the user sees a response.
View initialization
While view parameters provide the processing steps from retrieving the request value to updating the model, they do not furnish the action invocation and navigation steps that are part of the faces request life cycle. That means you have to fall back to lazy loading the data as the view is being rendered (i.e., encoded). You are also missing a definitive point to fine tune the UI component tree programmatically before it's encoded. Fortunately, another new feature in JSF 2, system events, makes it possible to perform a series of initialization steps before view rendering begins.
System events notify registered listeners of interesting transition points in the JSF life cycle at a much finer-grained level than phase listeners. In particular, we are interested in the PreRenderViewEvent, which is fired immediately after the component tree is built (but not yet rendered). If the word "registered" evokes dreadful memories of XML descriptors, fear not. Observing the event we are interested in is just a matter of appending one or more <f:event> elements to the view metadata facet.
The <f:event> tag has two required attributes, type and listener. The type attribute is the name of the event to observe, derived by removing the Event suffix from the end of the event class name and decapitalizing the result. We are only interested in one event, preRenderView. The listener attribute is a method binding expression pointing to either a no-arguments method or a method that accepts a SystemEvent.
<f:metadata>
...
<f:event
</f:metadata>
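The type-name derivation described above (strip the Event suffix, then decapitalize) can be sketched as a small helper; this class is purely illustrative and not part of the JSF API:

```java
// Derives the value for <f:event type="..."> from an event class's simple
// name, per the rule in the text: drop the trailing "Event", then lowercase
// the first character.
class EventTypeName {
    static String derive(String simpleClassName) {
        String base = simpleClassName.endsWith("Event")
                ? simpleClassName.substring(0, simpleClassName.length() - "Event".length())
                : simpleClassName;
        return Character.toLowerCase(base.charAt(0)) + base.substring(1);
    }
}
```

So PreRenderViewEvent maps to the type attribute value preRenderView used above.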
We can use this method to retrieve the blog entry before the view is rendered.
public void loadEntry() {
blogEntry = blogRepository.findEntry(entryId);
}
If the entry cannot be found, you could add conditional logic to the view to display an error message:
<c:if
The blog entry you requested does not exist.
</c:if>
Ideally, it would be better not to display the view at all. You can force a navigation to occur using the NavigationHandler API.
public void loadEntry() {
try {
blogEntry = blogRepository.findEntry(entryId);
} catch (NoSuchEntryException e) {
FacesContext ctx = FacesContext.getCurrentInstance();
ctx.getApplication().getNavigationHandler()
.handleNavigation(ctx, "#{blog.loadEntry}", "invalid");
}
}
The only problem is that the listener method is going to be invoked even if the view parameter could not be successfully converted, validated and assigned to the model property. Once again, there's a JSF 2 feature to the rescue. You can use the new isValidationFailed() method on FacesContext to check whether a conversion or validation failure occurred while processing the view parameters.
public void loadEntry() {
FacesContext ctx = FacesContext.getCurrentInstance();
if (ctx.isValidationFailed()) {
ctx.getApplication().getNavigationHandler()
.handleNavigation(ctx, "#{blog.loadEntry}", "invalid");
return;
}
// load entry
}
So far we have dealt with a trivial string to long conversion. But view parameters allow you to represent more complex data, as long as you have a converter that can marshal the value from (and to) a string. Let's assume that we want to allow the user to look at blog entries that fall within a range of dates. The before and after dates can be encoded into the URL as follows:
/entries.jsf?after=2007-12-31&before=2009-01-01
Those values can then be converted to Date objects using the <f:convertDateTime> converter tag and assigned to Date properties on a managed bean as follows:
<f:metadata>
<f:viewParam
<f:convertDateTime
</f:viewParam>
<f:viewParam
<f:convertDateTime
</f:viewParam>
<f:event
</f:metadata>
We again use a PreRenderViewEvent listener to load the data before the page is rendered, in this case filtering the collection of blog entries to be displayed.
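That filtering step can be sketched independently of JSF; the class and method names below are invented for illustration, and entry dates stand in for full blog entry objects:

```java
import java.time.LocalDate;
import java.util.List;
import java.util.stream.Collectors;

// After <f:convertDateTime> has turned the "after"/"before" query parameters
// into dates, the listener only has to keep the entries strictly inside
// the requested range.
class BlogEntryFilter {
    static List<LocalDate> between(List<LocalDate> entryDates,
                                   LocalDate after, LocalDate before) {
        return entryDates.stream()
                .filter(d -> d.isAfter(after) && d.isBefore(before))
                .collect(Collectors.toList());
    }
}
```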
Emulating the behavior of an action-oriented framework, which the previous examples have demonstrated, is one use of the PreRenderViewEvent. Another is to act as a life cycle callback for programmatically creating or tweaking the UI component tree after it is "inflated" from the view template. Perhaps you want to build part of the tree dynamically from a data structure. Accomplishing this in JSF would otherwise require "binding" a bean property to an existing UI component, declared using an EL value expression in the binding attribute of the tag. But this approach is really ugly because you have to put the tree-appending logic in either the JavaBean property getter or setter, depending on whether the view is being created or restored. The PreRenderViewEvent offers a much more definitive and self-documenting hook.
As you've seen, it's finally possible to respond to a bookmarkable URL in JSF (without pain or brittle code). But, up to this point, all we've done is take, take, take. For bookmarkable support to be complete, we need to be able to create bookmarkable URLs. That brings us to the topic of parameter propagation.
Push the parameters on

If view parameters were only capable of accepting data sent through the query string of the URL, even considering the built-in conversion and validation they provide, they really wouldn't be all that helpful. What makes them so compelling is that they are bi-directional, meaning they are also propagated to subsequent requests, and rather transparently. The subsequent request may be a faces request, which targets the current view, or a non-faces request to a view that has view parameters, which translates into a bookmarkable URL. A request for a bookmarkable URL can come from either a link in the page or a redirect navigation event. We'll look at how view parameters get propagated in all of these cases in this section.
Saved by the component tree

Let's return to the blog entry view and consider what happens if we have a comment form below the post. The comment form might be defined as follows:
<h:form>
<h:inputText
<h:inputTextarea
<h:commandButton
</h:form>
Notice that there is no reference to the id of the blog entry in this form. Assuming that the blog entry is not stored in session scope (or a third-party conversation), how will the handler know which entry the comment should be linked to? This is where view parameter propagation blends with component tree state saving.
When encoding of the view (i.e., rendering) is complete, the view parameter values are tucked away in the saved state of the UI component tree. When the component tree is restored on a postback, such as when the comment form is submitted, the saved view parameter values are applied to the model. This allows view parameters to tie in nicely with the existing design of JSF. The initial state supplied to the view parameters by the URL can be maintained as long as the user interacts with the view (e.g., triggers faces requests through user interface events). You can think of view parameters as an elegant replacement for hidden form fields in this case.
If the user bookmarked the URL after posting a comment, however, the reference to the blog entry would be lost. That's because after a POST request, the browser location does not contain a query string. Here's what the user would see:
If we are following best practices, we'll want to implement the Post/Redirect/Get pattern anyway. That gives us an opportunity to repopulate the query string of the URL. In the past, this would have required an explicit call to the redirect() method of ExternalContext inside the action method.
FacesContext.getCurrentInstance().getExternalContext().redirect("/entry.jsf?id=" + blog.getEntryId());
This explicit (and intrusive) call was necessary because the navigation case did not provide any way to append parameters to the query string. Now, view parameters can take care of this for us. We can tell JSF to encode the view parameters of the target view ID into the redirect URL by enabling the include-view-params attribute on the <redirect> element.
<navigation-rule>
<from-view-id>/entry.xhtml</from-view-id>
<navigation-case>
<from-action>#{commentHandler.post}</from-action>
<to-view-id>#{view.viewId}</to-view-id>
<redirect include-view-params="true"/>
</navigation-case>
</navigation-rule>
We'll get into navigation more in the next article in this series. Let's talk about those regular old hyperlinks in the page. We want those to be bookmarkable as well. That means the state needs to be encoded into the URL they point to. Once again, view parameters come into play.
Bookmarkable links

Let's now assume we want to create a bookmarkable link (permalink) to the current blog entry. You can link directly to another JSF view using the outcome attribute of the new hyperlink-producing component tags, <h:link> and <h:button>. These component tags are represented by the component class javax.faces.component.UIOutcomeTarget. (The reason the attribute is named outcome and not viewId will be explained in the next article. For now, just know that the value of the outcome attribute can be a view ID). Both of these component tags support encoding the view parameters into the query string of the URL as signaled by the includeViewParams attribute. Here's how the permalink is defined:
<h:link outcome="/entry.xhtml" value="Permalink" includeViewParams="true"/>

The default value of includeViewParams is false. Since it's set to true here, the view parameters are read in reverse (from the model) and appended to the query string of the link. Here's the HTML that this component tag generates, assuming an entry id of 9:
<a href="/blog/entry.jsf?id=9">Permalink</a>
The link is produced using the new getBookmarkableURL() method on the ViewHandler API. This method calls through to the encodeBookmarkableURL() on the ExternalContext API to have the session ID token tacked on, if necessary. These methods complement the getRedirectURL() and encodeRedirectURL() methods on ViewHandler and ExternalContext, respectively. In a servlet environment, the implementations happen to be the same, but the extra methods serve as both a statement of intent and an extension point for environments where a link URL and a redirect URL are handled differently, such as a portlet.
Notice that the context path of the application (/blog) is prepended to the path, the extension is changed from the view suffix (.xhtml) to the JSF servlet mapping (.jsf) and the query string contains the name and value of the view parameter read from the model. If you had used an <h:outputLink> tag, you would have had to do all of these things manually. That's exactly why the EG felt it was necessary to introduce this component.
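For contrast, here is roughly the string assembly that an <h:outputLink> approach would have forced on you — a sketch only, with the context path, view suffix, and servlet mapping taken from the article's example values:

```java
// Manual permalink construction that <h:link> now performs automatically:
// prepend the context path, swap the view suffix for the servlet mapping,
// and append the view parameter value read from the model.
public class ManualPermalink {
    public static String build(String contextPath, String viewId, long entryId) {
        String mappedPath = viewId.replaceAll("\\.xhtml$", ".jsf");
        return contextPath + mappedPath + "?id=" + entryId;
    }
}
```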
We can do one better. If the outcome attribute is absent, the current view ID is assumed. So we can shorten the tag to this:
<h:link value="Permalink" includeViewParams="true"/>
If you want the link to appear as a button, you can use the <h:button> component tag instead.
<h:button value="Permalink" includeViewParams="true"/>
However, note that JavaScript is required in this case to update the browser location when the button is clicked, as you can see from the generated HTML:
<button onclick="window.location.href='/blog/entry.jsf?id=9'">Permalink</button>
View parameters come in especially handy when the number of parameters to keep track of increases. For instance, let's consider the case when a user is searching for entries using a query string in a particular category and wants to paginate through the results. In this case, we are dealing with at least three parameters:
<f:metadata>
<f:viewParam
<f:viewParam
<f:viewParam
</f:metadata>
Yet the link to these search results still remains as simple as the permalink to an entry:
<h:link
This component tag will produce HTML similar to this:
<a href="/blog/entries.jsf?category=JSF&query=features&page=2">Refresh</a>
What if we want to link back to the previous page? In that case, we cannot allow the view parameter named page to be automatically written into the query string, since that would just link us to the current page. We need an override. Fortunately, it's easy to override an encoded view parameter. You simply use the standard <f:param> tag, just as you would if you were defining a new query string parameter:
<h:link
<f:param
</h:link>
View parameters that are encoded into links to the current view ID are pretty intuitive. Where things get tricky is when we use view parameters on a link to a different view ID. This requires putting on your thinking cap and doing some reasoning.
View parameter handoff

When a request is made for a URL, and in turn a JSF view ID, the view parameters defined in that view are used to map request parameters to model properties. But when the view parameters are encoded into a bookmarkable URL, the mappings are read from the target view ID. That's why it's especially important to be able to extract the view metadata from a template without having to build a full component tree, as mentioned earlier. Otherwise, you would end up building a component tree for every view that is linked to in the current view. That would be very costly.
Let's consider a use case. Suppose that we want to create a link from the search results to a single entry. We would define the link as follows:
<ui:repeat
<h2>#{_entry.title}</h2>
<p>#{_entry.excerpt}</p>
<p>
<h:link
<f:param
</h:link>
</p>
</ui:repeat>
The question to ask yourself is this: "Are the search string, category and page offset included in the URL for the entry?" I hope you said "No". The reason is that when the URL for the entry link is built, the component tag reads the view parameter mappings defined in the /entry.xhtml template. The only parameter mapped in that template is the entry id. In order to preserve the filter vector, the view parameters defined in the /entries.xhtml view need to also be in the /entry.xhtml template. Aha! Since these are shared view parameters, we should define them in a common template:
<ui:composition xmlns=""
xmlns:ui=""
xmlns:
<f:viewParam
<f:viewParam
<f:viewParam
</ui:composition>
We can then include that template in each view that needs to preserve these view parameters:
<f:view>
<f:metadata>
<ui:include
<f:viewParam
...
</f:metadata>
</f:view>
Keep in mind that if the user navigates to the entry after performing a search, the URL for the entry shown in the browser's location bar will now contain the filter vector. But if you want the user to be able to return to the search filter (without using the back button), that's what you want. You can always provide a simple permalink to bookmark just the entry.
Even though you are now defining view parameters in each of the views, that doesn't mean the URL will become littered with empty query string parameters when they are not in use. View parameters are only encoded (i.e., added to the query string) if the value is not null. Otherwise, there is no trace of the view parameter.
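The skip-if-null behavior amounts to something like the following — a sketch with a made-up helper class, not JSF internals:

```java
import java.util.*;

// Only non-null view parameter values make it into the query string;
// a null (unset) value leaves no trace, exactly as described above.
public class ViewParamEncoder {
    public static String encode(LinkedHashMap<String, String> params) {
        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> param : params.entrySet()) {
            if (param.getValue() == null) {
                continue; // unset view parameter: omitted entirely
            }
            query.append(query.length() == 0 ? "?" : "&")
                 .append(param.getKey()).append('=').append(param.getValue());
        }
        return query.toString();
    }
}
```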
You have now learned how view parameters are propagated during a postback, on a redirect and into a bookmarkable URL. The main benefit of this process is that it is transparent. You don't have to worry about each and every request parameter that comprises the state in the query string of the URL. Instead, JSF interprets the view parameter metadata defined in the template of the target view and automatically appends those name/value pairs to the URL when you activate this feature.
Bookmark it

View parameters serve as an alternative to storing state in the UI component tree, provide a starting point for the application, help integrate with legacy applications, assert preconditions of views, make views bookmarkable, and, with help of the new UIOutcomeTarget components or the enhancement to the redirect navigation case, produce links to those bookmarkable views.
This article began by introducing you to the view metadata facet, which is a general facility for defining a view's metamodel that reuses the existing UI component infrastructure. You learned that view parameters and PreRenderViewEvent listeners are the first standard implementations of view metadata. You saw how the combination of these two features allows you to capture initial state from the URL query string, validate preconditions, and load data for a view, all before the view is rendered. Finally, you learned how view parameter values are propagated to subsequent requests.
This series continues by taking a deeper look at the navigation enhancements made in JSF 2 and explaining how those changes tie into the bookmarkability that you learned about in this article. So bookmark this series and check back again soon!
double strtod ( const char * str, char ** endptr );
<cstdlib>
Convert string to double
Parses the C string str, interpreting its content as a floating point number, and returns its value as a double. If endptr is not a null pointer, the function also sets the value pointed to by endptr to the first character after the number, i.e. a pointer to the rest of the string after the last valid character is stored in the object pointed to by endptr.

A valid floating point number for strtod is formed by a succession of: an optional sequence of whitespace characters, an optional plus or minus sign, a sequence of digits optionally containing a decimal point, and an optional exponent part (an 'e' or 'E' character followed by an optional sign and a sequence of digits).
/* strtod example */
#include <stdio.h>
#include <stdlib.h>
int main ()
{
char szOrbits[] = "365.24 29.53";
char * pEnd;
double d1, d2;
d1 = strtod (szOrbits,&pEnd);
d2 = strtod (pEnd,NULL);
printf ("The moon completes %.2lf orbits per Earth year.\n", d1/d2);
return 0;
}
The moon completes 12.37 orbits per Earth year.
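One detail worth calling out: since strtod returns 0.0 both for the string "0" and for input it could not parse at all, checking endptr against the original pointer is the reliable way to detect "no conversion". A small sketch — the helper function is mine, not part of the library:

```c
#include <errno.h>
#include <stdlib.h>

/* Returns 1 and stores the parsed value in *out on success; returns 0 if
   strtod consumed no characters (endptr == input) or the value was out of
   range (errno set to ERANGE). */
static int parse_double(const char *s, double *out) {
    char *end;
    errno = 0;
    double v = strtod(s, &end);
    if (end == s)          /* nothing parsed at all */
        return 0;
    if (errno == ERANGE)   /* overflow or underflow */
        return 0;
    *out = v;
    return 1;
}
```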
|
Can anyone recommend a good Javascript calendar library for selecting a date?
Thanks very much in advance.
I wrote one. Dunno whether it qualifies as "good" in your eyes. But try it.
It's the second one of the demos. No, it's nothing to do with ASP.
NOTE: It appears to have problems positioning the year and month dropdowns in some browsers. I'm looking into that now. Check for updates.
Another possibility:
Script is set-up for two calendars, but is fairly easy to remove one.Script is set-up for two calendars, but is fairly easy to remove one.Code:<html> <head> <title>Dual Calendar</title> <style type="text/css"> .cal { width:240px; display:none; border:1px solid black; /* following 2 lines are optional, depending of type of display */ z-index:10; position:fixed; } .cal th { border:1px solid #AAAAAA; color: #000000; font: 70% Verdana, Geneva, Arial, Helvetica, sans-serif; background-color: #FFFF00; cursor:pointer; margin: 0px 0px 0px 0px; padding: 0px 0px 0px 0px; } .cal td { border: 1px solid #AAAAAA; font: 70% Verdana, Geneva, Arial, Helvetica, sans-serif; background-color: #FFFF00; margin: 0px 0px 0px 0px; padding: 1px 2px 1px 2px; } btn { font: 70% Verdana, Geneva, Arial, Helvetica, sans-serif; } </style> <script language="javascript" type="text/javascript"> function calendar(ids,txt) { this.Txt = txt; this.Ids = ids; this.Cdate = new Date(); this.monthM = this.Cdate.getMonth()+1; this.dayM = this.Cdate.getDate(); this.yearM = this.Cdate.getFullYear(); this.monthC = this.monthM; this.dayC = this.dayM; this.yearC = this.yearM; this.CalendarRedisplay = function(M,Y) { if ((M==0) && (Y==0)) { M = this.monthM; Y = this.yearM; } else { M = this.monthC+M; Y = this.yearC+Y; if (M < 1) { M = 12; Y--; } if (M > 12) { M = 1; Y++; } } this.monthC = M; this.yearC = Y; document.getElementById(this.Ids).innerHTML = this.displayCalendar(M,Y); } this.displayCalendar = function(month, year) { month = parseInt(month); year = parseInt(year); var i = 0; var days = getDaysInMonth(month,year); var firstOfMonth = new Date (year, (month-1), 1); var startingPos = firstOfMonth.getDay(); days += startingPos; var'; page += (i-startingPos+1) + '</th>'; } for (i=days; i<42; i++) { if ( i%7 == 0 ) page += "</tr><tr>"; page += "<th> </th>"; } page += '</tr></table>'; return page; } this.display = function() { this.toggle(); document.getElementById(this.Ids).innerHTML = 
this.displayCalendar(this.monthC,this.yearC); } this.toggle = function() { var sel = document.getElementById(this.Ids); if (sel.style.display != 'block') { document.getElementById(this.Ids).style.display = 'block'; } else { document.getElementById(this.Ids).style.display = 'none'; } } this.update = function(info) { this.dayC = info; this.toggle(); document.getElementById(this.Txt).value = pad(this.monthC)+'/'+pad(this.dayC)+'/'+this.yearC; } } cal1 = new calendar('cal1','StartDate'); cal2 = new calendar('cal2','StopDate'); var MonthOfYear = ['','Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']; var DaysInMonth = ['',31,28,31,30,31,30,31,31,30,31,30,31]; function getDaysInMonth(month,year) { var days = DaysInMonth[month]; if ((month == 2) && isLeapYear(year)) { days=29; } return days; } function isLeapYear (Year) { if (((Year % 4)==0) && ((Year % 100)!=0) || ((Year % 400)==0)) { return (true); } else { return (false); } } function pad(value) { return value=(value < 10)?'0'+value:value; } </script> </head> <body><!-- onload='cal1.update(cal1.update(0))' --> <input type="text" id="StartDate"> <img src="cal.gif" onclick="cal1.display()"> <div id="cal1" class="cal"></div> <br> <input type="text" id="StopDate"> <img src="cal.gif" onclick="cal2.display()"> <div id="cal2" class="cal"></div> <br> </body> </html>
You can play with the CSS to change colors, size, positions, etc.
Good Luck!
Last edited by jmrker; 07-29-2009 at 03:47 AM. Reason: Added cal.gif
Click on "Demos" on that page and then click on the second demo in the list.
But w.t.h.:
and I did fix the minor bugs it had in XHTML traditional. Seems to work fine in MSIE/FF/Chrome..
Thanks!
Script to check that a date is in the future:-
It would help if you indicate whether the dates to be selected are a few days or many days/months/years apart.

Code:
<script type = "text/javascript">
function checkFutureDate() {
  var end_year = 2009;
  var end_month = 6;  // months are 0-11
  var end_day = 13;
  var now = new Date().getTime();
  var d = new Date();
  d.setFullYear(end_year, end_month, end_day);  // YYYY,MM(0-11),DD
  var selectedDate = d.getTime();
  // today or after
  if (selectedDate <= now) {   // valid after today's date
  //if (selectedDate < now) {  // valid on today's date or after
    alert ("Date must be (on or) after today's date!");
    return false;
  }
  alert ("Date is valid");
  return true;
}
checkFutureDate();
</script>
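The same check can be written more compactly with the cutoff injected as a parameter, which also makes it testable without freezing the clock (the helper name is mine):

```javascript
// True if (y, m, d) — with m being 0-11, as in the Date constructor —
// falls strictly after the calendar day of `now`. Time-of-day on `now`
// is ignored by truncating it to midnight.
function isFutureDate(y, m, d, now = new Date()) {
  const today = new Date(now.getFullYear(), now.getMonth(), now.getDate());
  return new Date(y, m, d) > today;
}
```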
This will populate a select box with the next ten days, and could easily be modified to say 31 days.

Code:
<body onload = "populate()">
<form name= "myform">
<select name = "datelist">
<option value = "d0"></option>
<option value = "d1"></option>
<option value = "d2"></option>
<option value = "d3"></option>
<option value = "d4"></option>
<option value = "d5"></option>
<option value = "d6"></option>
<option value = "d7"></option>
<option value = "d8"></option>
<option value = "d9"></option>
</select>
</form>
<script type = "text/javascript">
function populate() {
  var months=['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
  var days = ["Sun","Mon","Tues","Wed","Thur","Fri","Sat"];
  for (var i =0; i<10; i++) {
    var nd = new Date();
    nd.setMonth(nd.getMonth());
    nd.setDate(nd.getDate() + i);
    var mn = nd.getMonth();
    var dy = nd.getDate();
    var yy = nd.getFullYear();
    var day = nd.getDay();
    var d = days[day] + ", " + months[mn] + " " + dy + ", " + yy;
    document.myform.datelist.options[i].text = d;
  }
}
</script>
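The date arithmetic in that loop — letting setDate roll past the end of the month — can be isolated into a pure function, which makes the rollover behavior easy to verify (the function name and ISO-style formatting are mine):

```javascript
// Return the next n calendar dates starting at `from`, formatted as
// yyyy-mm-dd. Passing a day index past the end of the month to the Date
// constructor rolls over to the next month automatically.
function nextDates(n, from = new Date()) {
  const out = [];
  const pad = v => String(v).padStart(2, "0");
  for (let i = 0; i < n; i++) {
    const d = new Date(from.getFullYear(), from.getMonth(), from.getDate() + i);
    out.push(`${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}`);
  }
  return out;
}
```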
Last edited by Philip M; 07-29-2009 at 08:16 AM.
What does "not for data validation" mean???
A date picker pretty much automatically does "data validation", since the user *CAN'T* pick a bad date. So maybe your comment doesn't matter. But I don't understand what it's supposed to mean.
Altering most date pickers to not allow dates in the past is pretty easy, but with my particular picker it would be a tiny bit tougher. Because I allow the user a <select> for both the month and year, I'd have to disallow a prior month unless a future year had already been chosen. But that might be annoying to the person who wanted to click on (say) "January" and then "2012". He/she would be forced to pick the "2012" first, else he'd never see "January" as an option.
How far in the future do you want to allow the dates to go??? If never more than one or two years, I'd obviate all this by making the <select> choose both month and year at the same time.
1. You want the first date to be today or later. TRUE?
2. You want the second date to be always be equal to or later than the first date. TRUE?
3. But you say the second date can't be later than the first date.
What does that mean? Must only be the first date, but not later?
If true then why have two calendars then?
What are the real specifications?
|
I've got some nice Cache Filter for ye!
Ivan Jouikov
Ranch Hand
Joined: Jul 22, 2003
Posts: 269
posted
Jun 29, 2004 03:28:00
0
Hey guys!
I was reading some material about filters and stuff, and I came upon a cache filter that would store data in temporary files, and I didn't like it very much (I found it to be inefficient, and the code was a pain to maintain).
What I was thinking is that as far as I know,
Tomcat
doesn't cache anything (which is its major disadvantage and the reason people use it with Apache, which I also hate - 2x the bugs). Also, I was thinking that my browser has cache enabled, and so do most other browsers. So, why have the server handle caching in the first place, when the client can do it?
So, I wrote this little filter, which seems to work perfectly, already saving my server gigs of bandwidth.
Let me know what you think, and if people seem to like it, maybe it's a good idea to ask Tomcat administration to ship this server with Tomcat?
Basically, here's the filter itself:
/*
 * Created on 29.06.2004 at 0:32:23
 * Author: Ivan Jouikov (ivj@comcast.net)
 * Project: abLogic
 */
package org.ablogic.web;

import java.io.*;
import java.util.*;
import javax.servlet.*;
import javax.servlet.http.*;
import org.apache.log4j.*;
import org.ablogic.*;
import org.ablogic.misc.*;

/**
 * This class will utilize HTTP 1.0 and 1.1 headers in order to make
 * static web-resources cachable by the client, rather than having the server
 * cache them. See /WEB-INF/Cache.properties for the list of rules that define
 * what should and what should not be cached. Also see /WEB-INF/web.xml for
 * the definition of the filter mapping.
 *
 * @author Ivan Jouikov (ivj@comcast.net)
 * @version 0.1
 */
public class CacheFilter implements Filter {

    Logger debugger;

    /** Where we keep /WEB-INF/Cache.properties. **/
    Properties cacheProperties;

    /**
     * Load /WEB-INF/Cache.properties
     */
    public void init(FilterConfig config) throws ServletException {
        debugger = Logger.getLogger("debug");
        String appRoot = config.getServletContext().getRealPath("/");
        File propFile = new File(appRoot + "/WEB-INF/Cache.properties");
        try {
            cacheProperties = new DynamicProperties(propFile);
            debugger.debug("Loading properties from \"" + propFile + "\"...");
        } catch (Exception e) {
            throw new ServletException("Couldn't load \"" + propFile + "\"!", e);
        }
    }

    /**
     * This is where all the magic is done.
     */
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain filterChain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // This will return the URI relative to the app root, e.g. /ablogic/index.jsp
        String URI = request.getRequestURI();
        debugger.debug("Request arrived for \"" + URI + "\"");

        // What we want to do is to cycle through all the rules inside
        // /WEB-INF/Cache.properties, and set the appropriate headers.
        // There might be more than one "rule" (regex pattern) matching
        // certain requests. For instance, there might be a rule such as
        // *.
        Enumeration keys = cacheProperties.keys();
        String lastHeader = null;
        debugger.debug("Going through the elements inside Cache.properties...");
        loop:
        while (keys.hasMoreElements()) {
            String regexKey = (String) keys.nextElement();
            debugger.debug("Found key \"" + regexKey + "\"...");
            if (URI.matches(regexKey)) {
                lastHeader = cacheProperties.getProperty(regexKey);
                debugger.debug("URI \"" + URI + "\" matches the key. Associated header: \""
                        + lastHeader + "\".");
                if (lastHeader.indexOf("no-cache") != -1) {
                    // No point in searching any further - we found the negative header
                    debugger.debug("Header is negative, so we set it.");
                }
                break loop; // header will be set to lastHeader later
            } else {
                debugger.debug("URI \"" + URI + "\" doesn't match the key...");
            }
        }
        debugger.debug("Finished searching, "
                + (lastHeader == null ? "no matching keys" : "at least one matching key")
                + " was found...");

        // Finished searching - if we found the header - let's set it
        if (lastHeader != null) {
            debugger.debug("Setting last positive header \"" + lastHeader + "\".");
            if (request.getProtocol().indexOf("1.0") != -1) {
                // Older browsers - HTTP 1.0 and below
                response.addHeader("Pragma", lastHeader);
                debugger.debug("Header set for HTTP 1.0 and below browsers.");
            } else {
                // Modern browsers - HTTP 1.1 and above
                response.addHeader("Cache-Control", lastHeader);
                debugger.debug("Header set for HTTP 1.1 and above browsers.");
            }
        }

        debugger.debug("Continuing up the filter chain...");
        filterChain.doFilter(request, response);
    }

    public void destroy() {
        cacheProperties = null;
    }
}
Here's the mapping that you should have in your web.xml:
<!-- This filter will handle caching. -->
<filter>
    <filter-name>CacheFilter</filter-name>
    <filter-class>org.ablogic.web.CacheFilter</filter-class>
</filter>

<!-- Map for the entire server. Rules are defined in /WEB-INF/Cache.properties -->
<filter-mapping>
    <filter-name>CacheFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
And here is the /WEB-INF/Cache.properties:
# This file defines cache rules for specific objects of the web site.
# It contains pairs of values separated by =
# such as .*\.jsp=no-cache
# where the first part is used to match the request that a client would make
# relative to the root of the application. As you can tell, regex is used.
# The second value is the Cache-Control (HTTP 1.1)
# (or Pragma for HTTP 1.0) parameter, that has one of the following values:
# no-cache no-storage max-age etc...
# For a full list see
# There might be more than one "rule" (regex pattern) matching
# certain requests. For instance, there might be a rule such as
# (regexes are converted to wild cards to ease understanding)
# *.
# If there are more than 1 positive rules, then whichever rule's
# found first, that rule's header will be used. (It's a bad practice to
# have two matching good rules.)

# This is the rule for all .jsp's - they're dynamic so they shouldn't be cached
.*\.[jJ][sS][pP]: no-cache

# This is when the user requests a directory, such as /admin/, when an index
# file would have to be fetched from web.xml. In this case, we automatically
# assume it's dynamic
.*/: no-cache

# These are rules for all kinds of static media, which should always be cached
# because it's (duh) static.

# html/htm
.*\.([hH][tT][mM][lL]?): max-age=3600 #1 hour should be enough for a session
# gif
.*\.([gG][iI][fF]): max-age=3600
# jpg/jpeg
.*\.([jJ][pP][eE]?[gG]): max-age=3600
# bmp
.*\.([bB][mM][pP]): max-age=3600
# png
.*\.([pP][nN][gG]): max-age=3600
# css
.*\.([cC][sS][sS]): max-age=3600
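The rule lookup inside doFilter can be exercised in isolation. Here is a stand-alone sketch (a hypothetical helper, not part of the filter itself) — note it iterates a LinkedHashMap in insertion order, whereas the filter iterates a Properties object whose key order is unspecified:

```java
import java.util.*;

// First regex key that matches the request URI wins; null means no rule
// applies and the caching headers are left untouched.
public class CacheRuleMatcher {
    public static String pickCacheHeader(String uri, LinkedHashMap<String, String> rules) {
        for (Map.Entry<String, String> rule : rules.entrySet()) {
            if (uri.matches(rule.getKey())) {
                return rule.getValue();
            }
        }
        return null;
    }
}
```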
So, what you think?
[ June 29, 2004: Message edited by: Ivan Jouikov ]
I agree. Here's the link:
|
import "github.com/djhworld/strex"
Package that adds a few extra features to the strings package.
These functions are very high level and take inspiration from the Data.List package found in the Haskell programming language
With thanks to reddit users `rogpeppe`, `DavidScone`, `DisposaBoy` who helped me in making this library better and quicker
Daniel Harper (djhworld) 2012
All, applied to a predicate p and a string s, determines if all elements of s satisfy p
Distinct removes duplicate elements from a string. In particular, it keeps only the first occurrence of each element.
Drop returns the suffix of s after the first n runes, or "" if n > len([]rune(s))
DropWhile returns the suffix remaining after TakeWhile
Filter, applied to a predicate and a string, returns a string of characters (runes) that satisfy the predicate
Code:
//Haskell type signature (polymorphic): -
// filter :: (a -> Bool) -> [a] -> [a]
var isNotPunctuation func(rune) bool = func(a rune) bool {
    return !strings.ContainsRune("!.,?:;-'\"", a)
}
fmt.Println(Filter(isNotPunctuation, "he said \"hello there!\"")) //strips all punctuation
Output:
he said hello there
Group takes a string and returns a slice of strings such that the concatenation of the result is equal to the argument. Moreover, each substring in the result contains only equal elements.
GroupBy is the non-overloaded version of Group.
Code:
//Haskell type signature (polymorphic): -
// groupBy :: (a -> a -> Bool) -> [a] -> [[a]]
var isDigit func(rune) bool = func(a rune) bool {
    return strings.ContainsRune("0123456789", a)
}
var input string = "02/08/2010"
fmt.Println(GroupBy(func(a, b rune) bool { return (isDigit(a)) == (isDigit(b)) }, input))
//Output: [02 / 08 / 2010]
Head returns the first rune of s, which must be non-empty
Init returns all the elements of s except the last one. The string must be non-empty.
IsEmpty tests whether the string s is empty
Last returns the last rune in a string s, which must be non-empty.
Reverse returns the string s in reverse order
Span, applied to a predicate p and a string s, returns two strings where the first string is longest prefix (possibly empty) of s of characters (runes) that satisfy p and the second string is the remainder of the string
Tail returns the remainder of s minus the first rune of s, which must be non-empty
Take returns the n rune prefix of s or s itself if n > len([]rune(s))
TakeWhile, applied to a predicate p and a string s, returns the longest prefix (possibly empty) of s of elements that satisfy p
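A minimal re-implementation sketch of the TakeWhile/DropWhile semantics described above — this is my own code, not the library source. It relies on the fact that the byte index of the first failing rune is a valid prefix boundary:

```go
package main

import "fmt"

// takeWhile returns the longest prefix of s whose runes satisfy p.
func takeWhile(p func(rune) bool, s string) string {
	for i, r := range s {
		if !p(r) {
			return s[:i] // byte index of the first failing rune
		}
	}
	return s
}

// dropWhile returns the suffix remaining after takeWhile.
func dropWhile(p func(rune) bool, s string) string {
	return s[len(takeWhile(p, s)):]
}

func isDigit(r rune) bool { return r >= '0' && r <= '9' }

func main() {
	fmt.Println(takeWhile(isDigit, "365.24 orbits")) // 365
	fmt.Println(dropWhile(isDigit, "365.24 orbits")) // .24 orbits
}
```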
Package strex imports 2 packages. Updated 2016-07-26.
|
Details
Description
combo_handler requires ESI's code. Until ESI is built as a library, you can try it this way:
Make "esi/lib" and "esi/fetcher" subdirectories of combo_handler and use the Makefile.
combo_handler
|____combo_handler.cc
|____fetcher
|____lib
|____LICENSE
|____Makefile
|____README
Activity
Moving all unassigned bugs out to 3.3.0. Move back and assign as necessary.
Moving to 3.3.2.
What's the status on this? Is this still relevant? Conan, if so, would you mind making a new patchset for current master, and I can review / commit that.
Thanks!
– Leif
Ping ?
So, I have some patches for this, but I'm also thinking we should move this plugin in under the esi plugin source tree, such that it builds both esi.so and combo_handler.so. They both share code, and it makes sense to combine them I think.
Sorry, I missed your comments. +1 for the combination.
I've made some changes after those old attachments. Some of them may not be general enough for our users, so I have no idea how to update this ticket.
Changes I made (separately pasted on pastebin for convenience and most are minor changes):
- use HOST header as default bucket
The original code uses the first segment of the Host as the default bucket, which is not that extensible (two different combo domains may have the same leading segment). Moreover, the initial default bucket ("l") will never be used, because all requests should carry a Host header.
- sub-file's path need to contain querystring, i.e. question mark("?") is part of the file path, not the delimiter
We use the querystring to version each single sub-file in the combined URL. If we want to update/purge one of them, that can be accomplished simply by changing the version of the sub-file. (If not, you have to purge both the combined URL and the sub-file URL, and it's relatively hard to know the latter when you are not very familiar with ATS. Of course, you can alter the filename if possible on your site.)
Then he combo url could be like
- request hangs when combo url has no querystring
- make plugin per-remap enabled/disabled
It's implemented by adding some remap code and making the global part "intercept" in TS_EVENT_HTTP_OS_DNS instead of TS_EVENT_HTTP_READ_REQUEST_HDR, in order to read the flag set in TSRemapDoRemap.
So remap.config will be:
map @plugin=combo_handler.so map map # combo for this channel is disabled
- limit sub-file max count and querystring length for potential problems
- log failed url
They were tested on 3.0.x. Supposing you've made it compile on the latest master, you can pick which ones to review.
I just took a glance: my code fails to compile with the latest esi (though I'm sure it compiled last winter). So I have no appropriate patchset, and I'm sorry that I'm too busy to look into it these days.
Cool! I haven't landed your fixes for this yet, however, I hope to do so just after we release v3.3.2 (next week). Once that's done, lets land all these changes. Would you mind filing either one bug for all of the above, or individual bugs? Either is fine.
Commit 21515f600a33fd4b4e1e28bd88b0b69854180c5b in branch refs/heads/master from Leif Hedstrom
TS-1053 Move the README for combo handler
Commit 0012eee3b43f736737062e10267f423a156c35d5 in branch refs/heads/master from Leif Hedstrom
TS-1053 Cleaning this up, since I was mucking around in it.
Commit 27fb7b7a1d720b86a418c820a216c0ffe31411ae in branch refs/heads/master from Leif Hedstrom
Commit ee04a10dc592065abf534d408b2685e4c588a19a in branch refs/heads/master from Leif Hedstrom
TS-1053 Move combo_handler to ESI, also change plugin.cc to esi.cc
Commit 7f7eddf57065996df936a372681341ce7c637dd8 in branch refs/heads/master from Conan Wang
TS-1053 Make combo_handler compiler.
Commit da80e34b6aaf442c481d477aa2f55a07bf7b7806 in branch refs/heads/master from Leif Hedstrom
TS-1053 Make combo_handler compile, also sanitize proper usage of ink_port.h
Leif, after making esi and combo_handler share code, I need to list both esi.so and combo_handler.so in plugins.config to make combo_handler work normally.
If only combo_handler.so is enabled, it ends up with:
ERROR: unable to load '/Users/conan/box/ts-trunk/libexec/trafficserver/combo_handler.so': dlopen(/Users/conan/box/ts-trunk/libexec/trafficserver/combo_handler.so, 2): Symbol not found: _threadKey
  Referenced from: /Users/conan/box/ts-trunk/lib/libesi.0.dylib
  Expected in: flat namespace in /Users/conan/box/ts-trunk/lib/libesi.0.dylib
1. If we have to configure it like this (enabling both even though we only need combo's feature), do we need a way to disable the esi functionality?
2. Configuring both still doesn't work for my remap patch in TS-1827 (as I commented there).
Automake (at least some versions) gets confused because there is a module named 'esi' and a convenience library named 'libesi'. I have a patch pending.
Commit f3369df37419e77fb4bcbd26183cdd4fc4de5fb1 in branch refs/heads/master from James Peach
TS-1053: fix sdpy and esi plugin convenience libraries
Rename the convenience libraries to avoid potential name conflicts.
Make sure to use noinst_ automake prefixes because we don't want
these to be installed.
Commit c4815b07738746163d1d885091a22b67871d2e0d in branch refs/heads/master from James Peach
TS-1053: fix missing threadKey symbol in combo_handler
Thanks James, it works well with only "combo_handler.so" configured in plugins.config.
Commit e4e911bd5eb6e4837d8568bdfa4889883e2fa1f6 in branch refs/heads/master from James Peach
TS-1053: fix esi plugin unit test linkage
Moved to 3.1.4, please move bugs back to 3.1.3, which you will work on in the next 2 weeks.
https://issues.apache.org/jira/browse/TS-1053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Hi everyone,
All the talk about .py, .pyc and .pyw files recently got me thinking
about my little traceback question with py2exe.
When exceptions happen in py2exe'd applications, the tracebacks no
longer include the actual lines of code where the error was raised.
After a little bit of playing around, this appears to be a standard
python trait, rather than anything py2exe is causing. When only pyc
files are available, tracebacks include line numbers, but not the
actual code. I went ahead and tried the easy thing and copied py files
into the library.zip file, but they don't seem to be found.
Oddly enough, if you copy the py files into the current directory, the
py2exe app will include code in the tracebacks. Here is an example:
I have a file called error.py:
raise StandardError
And this setup.py:
from distutils.core import setup
import py2exe
setup(
console = ["error.py"]
)
Here is a console session which shows the issue (watch the 'raise
StandardError' lines):
C:\Data\BrianDocuments\home\python\py2exetest\dist>..\error.py
Traceback (most recent call last):
File "C:\Data\BrianDocuments\home\python\py2exetest\error.py", line 1, in ?
raise StandardError
StandardError
C:\Data\BrianDocuments\home\python\py2exetest\dist>error.exe
Traceback (most recent call last):
File "error.py", line 1, in ?
StandardError
C:\Data\BrianDocuments\home\python\py2exetest\dist>copy ..\error.py .
1 file(s) copied.
C:\Data\BrianDocuments\home\python\py2exetest\dist>error.exe
Traceback (most recent call last):
File "error.py", line 1, in ?
raise StandardError
StandardError
So, my question is really this... is there some easy way to include
the .py files in the library.zip file (and have them be noticed) that
I've missed? And, failing that, does anyone have any suggesstions of
where to start poking around if I wanted to get more detailed
tracebacks?
Take care,
-Brian
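(For what it's worth, the behavior Brian describes can be reproduced without py2exe at all. The sketch below is mine, not from the thread; the filename "error.py" is just a label passed to compile(). When no matching .py file exists on disk, the traceback machinery can report the file name and line number but not the source line, because linecache cannot find the file — exactly the situation a frozen app is in when only .pyc files are shipped.)

```python
import traceback

# "error.py" here is only a label given to compile(); no such file
# exists on disk, so linecache cannot supply the source text and the
# traceback shows the location without the offending line of code.
code = compile("raise ValueError('boom')", "error.py", "exec")
try:
    exec(code)
except ValueError as exc:
    tb = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))

print(tb)
```

The frame for "error.py", line 1 appears in the output, but with no source line underneath it.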
I'm afraid I cannot give too much help, but I'll try my best.
Jens Göpfert <Jens.Goepfert@...> writes:
> Hi,
>
> sorry for annoying you again, but its hard to get into the code. i tried to
> modify the py2exe0.53 code to run with python2.2. the result was an empty
> library.zip and no exe file.
That won't work - python2.2 doesn't have the zipimport feature needed by
the exe.
> so i tried to hack the imputil.py, but still looking for the code, that
> handles the import of modules (at runtime of the exe file).
The imported code is in lib\site-packages\py2exe\support.py.
> i tried to modify the constructor of the ImportManager class.
> self.add_suffix(".pyc", py_suffix_importer) -> i thought i have to register
> the pyc file extension. added some print instructions in the
> py_suffix_importer method, but i cant see any changes.
> Is there any hint you can give?
I would first try to get it to work in pure Python.
The imputil module has a _test_revamp() function, this (IIRC) replaces
the normal import mechanism by the one from imputil.
Trying that out (in the interactive interpreter), I'm able to import
an x.py module, but not the x.pyc file when I have deleted the x.py
file - I get an import error.
c:\>py22
Python 2.2.3 (#42, May 30 2003, 18:12:08) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import imputil
>>> imputil._test_revamp()
>>> import x
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "c:\python22\lib\imputil.py", line 106, in _import_hook
raise ImportError, 'No module named ' + fqname
ImportError: No module named x
>>> ^Z
I hope that gives you at least an idea where start,
Thomas
"f.kintzel" <f.kintzel@...> writes:
>, I've added this to CVS.
Thanks a lot, and even more thanks for this great tool!
bye,
Florian
http://sourceforge.net/p/py2exe/mailman/py2exe-users/?viewmonth=200409&viewday=22
Besides adding custom stylesheets and scripts to the manager interface, you can also add global partial views into the Layout of the manager. This is useful if, for example, you want to add a modal for a custom field that should be available on all manager views, or perhaps even a custom toolbar.
The partial collection can be accessed from the Partials property. This collection holds all custom partials that have been added by your own code or by other modules. Below is an example of how to add a custom partial to the collection.
using Piranha;

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...
    App.Modules.Manager()
        .Partials.Add("Partial/_MyModal");
    ...
}
All custom partials are rendered at the end of the body tag after the built-in modals have been added.
https://piranhacms.org/docs/manager-extensions/partials
|
AWS SDK for JavaScript
The official AWS SDK for JavaScript, available for browsers and mobile devices, or Node.js backends
For release notes, see the CHANGELOG. Prior to v2.4.8, release notes can be found at
If you are upgrading from 1.x to 2.0 of the SDK, please see the upgrading notes for information on how to migrate existing code to work with the new major version.
Installing
In the Browser
To use the SDK in the browser, simply add the following script tag to your HTML pages:
<script src=""></script>
You can also build a custom browser SDK with your specified set of AWS services. This can allow you to reduce the SDK's size, specify different API versions of services, or use AWS services that don't currently support CORS if you are working in an environment that does not enforce CORS. To get started:
The AWS SDK is also compatible with browserify.
For browser-based web, mobile and hybrid apps, you can use AWS Amplify Library which extends the AWS SDK and provides an easier and declarative interface.
In Node.js
The preferred way to install the AWS SDK for Node.js is to use the npm package manager for Node.js. Simply type the following into a terminal window:
npm install aws-sdk
In React Native
To use the SDK in a react native project, first install the SDK using npm:
npm install aws-sdk
Then within your application, you can reference the react native compatible version of the SDK with the following:
var AWS = require('aws-sdk/dist/aws-sdk-react-native');
Alternatively, you can use AWS Amplify Library which extends AWS SDK and provides React Native UI components and CLI support to work with AWS services.
Using Bower
You can also use Bower to install the SDK by typing the following into a terminal window:
bower install aws-sdk-js
Usage and Getting Started
You can find a getting started guide at:
API reference at:
Usage with TypeScript
The AWS SDK for JavaScript bundles TypeScript definition files for use in TypeScript projects and to support tools that can read
.d.ts files.
Our goal is to keep these TypeScript definition files updated with each release for any public api.
Prerequisites
If you are targeting es5 or older ECMA standards, your tsconfig.json has to include 'es5' and 'es2015.promise' under compilerOptions.lib. See tsconfig.json for an example.
In the Browser
To use the TypeScript definition files with the global
AWS object in a front-end project, add the following line to the top of your JavaScript file:
/// <reference types="aws-sdk" />
This will provide support for the global
AWS object.
In Node.js
To use the TypeScript definition files within a Node.js project, simply import
aws-sdk as you normally would.
In a TypeScript file:
// import entire SDK
import AWS from 'aws-sdk';
// import AWS object without services
import AWS from 'aws-sdk/global';
// import individual service
import S3 from 'aws-sdk/clients/s3';
NOTE: You need to add "esModuleInterop": true to the compilerOptions of your tsconfig.json. If that is not possible, use import * as AWS from 'aws-sdk' instead.
In a JavaScript file:
// import entire SDK
var AWS = require('aws-sdk');
// import AWS object without services
var AWS = require('aws-sdk/global');
// import individual service
var S3 = require('aws-sdk/clients/s3');
With React
To create React applications with AWS SDK, you can use AWS Amplify Library which provides React components and CLI support to work with AWS services.
With Angular
Due to the SDK's reliance on node.js typings, you may encounter compilation issues when using the typings provided by the SDK in an Angular project created using the Angular CLI.
To resolve these issues, either add "types": ["node"] to the project's tsconfig.app.json file, or remove the "types" field entirely.
AWS Amplify Library provides Angular components and CLI support to work with AWS services.
Known Limitations
There are a few known limitations with the bundled TypeScript definitions at this time:
- Service client typings reflect the latest apiVersion, regardless of which apiVersion is specified when creating a client.
- Service-bound parameters use the any type.
Getting Help
Please use these community resources for getting help. We use the GitHub issues for tracking bugs and feature requests and have limited bandwidth to address them.
- Ask a question on StackOverflow and tag it with
aws-sdk-js
- Come join the AWS JavaScript community on gitter
- Open a support ticket with AWS Support
- If it turns out that you may have found a bug, please open an issue
Opening Issues
If you encounter a bug with the AWS SDK for JavaScript, we would like to hear about it.
The GitHub issues are intended for bug reports and feature requests. For help and questions with using the AWS SDK for JavaScript please make use of the resources listed in the Getting Help section. There are limited resources available for handling issues and by keeping the list of open issues lean we can respond in a timely manner.
Supported Services
Please see SERVICES.md for a list of supported services.
License
This SDK is distributed under the Apache License, Version 2.0, see LICENSE.txt and NOTICE.txt for more information.
https://docs.amazonaws.cn/AWSJavaScriptSDK/latest/index.html
|
php-sabre-vobject 2.1.7-4 source package in Ubuntu
Changelog
php-sabre-vobject (2.1.7-4) unstable; urgency=medium

  * PHPUnit's units of code are now namespaced (Closes: #882916)
  * d/control: Add me to uploaders
  * d/control: Standards-Version: 4.1.1, no change

 -- Mathieu Parent <email address hidden>  Sat, 09 Dec 2017 15:10:52 +0100
Upload details
- Uploaded by: Debian PHP PEAR Maintainers on 2017-12-09
- Original maintainer: Debian PHP PEAR Maintainers
- Architectures: all
- Section: misc
- Urgency: Medium Urgency
See full publishing history
Downloads
Available diffs
- diff from 2.1.7-3 to 2.1.7-4 (3.3 KiB)
No changes file available.
Binary packages built by this source
- php-sabre-vobject: library to parse and manipulate iCalendar and vCard objects
The SabreTooth VObject library allows one to easily parse and
manipulate iCalendar and vCard objects using PHP. The goal of the
VObject library is to create a very complete library, with an easy to
use API.
.
This project is a spin-off from SabreDAV, where it has been used for
several years.
https://launchpad.net/ubuntu/+source/php-sabre-vobject/2.1.7-4
|
On Fri, Oct 9, 2009 at 1:35 AM, Masklinn <masklinn at masklinn.net> wrote: > [...] This is not true - stow solves the problem in a more general way (in the sense that it is not restricted to Python), at least on platforms which support softlinks. The only inconvenience of stow compared to virtualenv is namespace packages, but that's because of a design flaw in namespace packages (as implemented in setuptools, and hopefully fixed in the upcoming namespace package PEP). Virtualenv provides a possible solution to some deployment problems, and is useful in those cases, but it is too specialized to be included in Python itself IMO. cheers, David
https://mail.python.org/pipermail/python-dev/2009-October/092815.html
|
NAME
MDC2, MDC2_Init, MDC2_Update, MDC2_Final - MDC2 hash function
SYNOPSIS
#include <openssl/mdc2.h>
The following functions have been deprecated since OpenSSL 3.0, and can be hidden entirely by defining OPENSSL_API_COMPAT with a suitable version value, see openssl_user_macros(7):
DESCRIPTION
All of the functions described on this page are deprecated. Applications should instead use EVP_DigestInit_ex(3), EVP_DigestUpdate(3) and EVP_DigestFinal_ex(3).
RETURN VALUES
MDC2() returns a pointer to the hash value.
MDC2_Init(), MDC2_Update() and MDC2_Final() return 1 for success, 0 otherwise.
CONFORMING TO
ISO/IEC 10118-2:2000 Hash-Function 2, with DES as the underlying block cipher.
SEE ALSO
EVP_DigestInit(3)
HISTORY
All of these functions were deprecated in OpenSSL 3.0.
Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <>.
https://manpages.debian.org/experimental/libssl-doc/MDC2.3ssl
|
Killing a process with Linux is an easy task. As always, there is more than one way to do it. There are graphical process managers that can be used to aid in killing a process on Linux. The first method I'll demonstrate may work depending on your window manager; either way, you can set it up to work the same way if you like it.
The name of the program is xkill. My XFCE has a shortcut of CTRL+ALT+ESC, but this may not be the case for every version of XFCE. Basically, you press this keyboard shortcut and you get a skull-and-crossbones cursor. Once you get that, you can click on the window of the process you'd like to kill, and it kills it.
To kill a process from the command line on Linux, you can use one of two commands that are pretty standard throughout all Linux distros: kill and killall. The only real hard part is figuring out which process to kill. To figure out what process I want to kill, I use the following command:
owen@linux-blog:~$ ps ax
then to use kill and killall on Linux I use:
owen@linux-blog:~$ kill <processid>
owen@linux-blog:~$ killall <processname>
This is pretty straightforward, but if you have, say, multiple Firefox windows open, you may want to kill the process using the kill <processid> command; otherwise all of your Firefox windows will probably close, since killall kills all processes that match the name, regardless of whether they have actually crashed or not.
If the process won’t die, you can use the following to kill it. Be aware that this is not the best thing to do but it will kill the process.
owen@linux-blog:~$ kill -9 <processid>
owen@linux-blog:~$ killall -9 <processname>
Basically, instead of killing gracefully, you send a SIGKILL to the process, which basically tells it to commit suicide no matter what it's currently doing. I've listed all of the signals you can send to kill a process at the end of this post.
Another method to kill a process is by using top. Top is an interface that shows you which processes are doing what. You can kill a process (once you're in top) by pressing the k key. It then asks you what PID (Process ID) you want to kill; you can figure this out from the list. It then asks what type of signal you want to use. You can use the default first, and then if the process just won't die, you can use 9. Top is useful for killing a bunch of processes in a small amount of time.
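The whole find-then-kill workflow can be sketched end to end. This is just an illustration — 'sleep 300' stands in for whatever misbehaving program you want to get rid of:

```shell
# Start a stand-in process in the background and remember its PID
sleep 300 &
pid=$!

# ps ax lists every process; grep narrows the list down.
# The [s] trick stops grep from matching its own process entry.
ps ax | grep '[s]leep 300'

# Try a graceful SIGTERM first...
kill "$pid"

# ...and only fall back to SIGKILL if it refuses to die:
# kill -9 "$pid"

wait "$pid" 2>/dev/null || true
echo "process gone"
```

If you know the PID you can skip the ps step entirely; killall works the same way but takes the process name instead.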
List of all signals that you can send:
owen@linux-blog:~$
HAL is short for Hardware Abstraction Layer. Its job is to make hardware work with minimal user interaction.
Unfortunately HAL on Slackware 12 does not work right out of the box.
While playing around trying to get HAL to work, I was getting weird error messages such as:
File “/usr/bin/hal-device-manager”, line 7, in
import pygtk
ImportError: No module named pygtk
and
A security policy in place prevents this sender from sending this message to this recipient, see message bus configuration file (rejected message had interface "org.freedesktop.Hal.Device.Volume" member "Mount" error name "(unset)" destination "org.freedesktop.Hal")
After doing some research, I found that all that is needed to fix this is to add your user name to the plugdev group in /etc/group:
plugdev:x:83:youruser
If you have multiple users that need access to HAL, then add all of those user names to the /etc/group file while you're at it. Separate them with commas as follows:
plugdev:x:83:userone,usertwo
For more information on the HAL project check out the HAL project page.
http://www.thelinuxblog.com/tags/interface/
|
In a previous post I talked about how to preprocess and explore an image dataset. In this post, I will talk about how to model image data with a neural network having a single neuron, using the sigmoid function. The original version of this blog can be found here. This is equivalent to logistic regression; the only difference is the way we estimate the weights (coefficients) of the inputs. The traditional way of estimating logistic regression weights is to use analytical optimization methods, whereas the neural network way is to use the gradient descent algorithm.
Before jumping to modeling, I will try to give an intuition about the sigmoid function.
The sigmoid function is given by the formula
a = sigmoid(x) = e^x / (1 + e^x) = 1 / (1 + e^(-x))
For any input x, a (the sigmoid of x) will vary between 0 and 1. When x is positive and large, e^x (the numerator) and 1 + e^x (the denominator) will be approximately the same, and the value of a will be one. Similarly, when x is a large negative number, e^x will be approximately zero and the value of a will be zero. Let's see two examples.
import os
import numpy as np
from scipy.misc import imresize
import matplotlib.pyplot as plt
%matplotlib inline
x=500
print(1/(1+np.exp(-x)))
1.0
x=-500
print(1/(1+np.exp(-x)))
7.12457640674e-218
Another important aspect of sigmoid function is that it is a non-linear function in x. This fact becomes more powerful in case of multi layered neural networks, as it will help in unlocking many hidden non-linear patterns in the data. A single sigmoid function looks like the following graph, for different values of x.
def sigmoid(x):
    return 1/(1+np.exp(-x)) #define sigmoid before plotting it

x=np.linspace(-10,10,100) #linspace generates 100 uniformly spaced values between -10 and 10
plt.figure(figsize=(10, 5)) #Set up a figure of width 10 and height 5
plt.plot(x,sigmoid(x),'b') #Plot sigmoid(x) on the Y-axis and x on the X-axis with line color blue
plt.grid() #Add a grid to the plot
plt.rc('axes', labelsize=13) #Set x label and y label fontsize to 13
plt.xlabel('x') #Label the x-axis
plt.ylabel('a (sigmoid(x))') #Label the y-axis
plt.rc('font', size=15) #Set default text fontsize to 15
plt.suptitle('Sigmoid Function') #Create a supertitle for the plot. You can use title as well
As you can see from the graph, a (sigmoid(x)) varies between 0 and 1. This makes the sigmoid function, and in turn logistic regression, suitable for binomial classification problems. That means we can use logistic regression or the sigmoid function when the target variable has only two values (0 or 1). This makes it suitable for our purpose, in which we are trying to predict the gender of a celebrity from images. Gender (our target variable) has only two values in our dataset, male (0) and female (1).
The sigmoid function essentially gives the probability of the target variable being 1 for a given input. In our case, given an image, the sigmoid function gives the probability of that image being of a female celebrity, since in our target variable the female gender is indicated as 1. The probability of an image being male can then be easily calculated as 1 - sigmoid(input image).
Another point to remember is that, for our problem, the input x is a combination of variables (pixels, to be precise). Let's denote this combination of input variables as z:
z = w1*x1 + w2*x2 + ... + wn*xn + b
where,
w1 = weight of the first variable (in our case, the first pixel)
x1 = first variable (in our case, the first pixel), and so on
b = bias (similar to the intercept in linear regression)
where
is the sigmoid function
and a is the predicted values(probabilities)
In matrix notation, the equations can be written as,
where '.' indicates matrix multiplication
W is the row vector of all weights of dimension[1,num_px] num_px is the number of pixels(variables)
X is the input matrix of dimension[num_px,m] m = no.of training examples
A is the array of predicted values of dimension[1,m]
The unknowns in the above equations are the weights (W) and bias (b). The idea of logistic regression, or a single neuron neural network (from now on I will use this terminology), is to find the values of the weights and bias that give the minimum error (cost).
So for training the model, first we have to define the cost function. We define the cost function for binomial prediction as
J(a,y) = -(1/m) * sum( y*log(a) + (1-y)*log(1-a) )
where,
J(a,y) is the cost, which is a function of a and y, and is a scalar (a single value). This cost is called the negative log likelihood. The lower the cost, the better the model.
m = number of training examples
y = array of true labels or actual values
a = sigmoid(z), the predicted values
z = w1*x1 + w2*x2 + ... + wn*xn + b
In matrix form we write it as
J = -(1/m) * ( Y . log(A^T) + (1 - Y) . log((1 - A)^T) )
where,
m is the number of training examples,
A^T is the transpose of A, the array of predicted values, with dimensions [m, 1],
Y is the array of actual values (true labels), with dimensions [1, m].
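To convince yourself that the matrix form computes the same number as the element-wise sum, you can check both on a small random example (a quick sketch; the arrays here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 0.9, size=(1, 6))             # fake predicted probabilities
Y = rng.integers(0, 2, size=(1, 6)).astype(float)  # fake true labels
m = A.shape[1]

# Matrix form: J = -(1/m) * ( Y . log(A^T) + (1-Y) . log((1-A)^T) )
J_matrix = float(np.squeeze(-(1/m) * (np.dot(Y, np.log(A.T))
                                      + np.dot(1 - Y, np.log((1 - A).T)))))

# Element-wise definition of the negative log likelihood
J_elem = float(-(1/m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)))

print(J_matrix, J_elem)  # the two numbers agree
```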
Now we have to use gradient descent to find the values of W and b that minimizes the cost.
In short, training a single neuron neural network using gradient descent involves the following steps:
1. Initialize the parameters W and b.
2. Compute Z and A (forward propagation).
3. Compute the cost.
4. Compute the gradients dW and db (backward propagation).
5. Update the parameters.
6. Repeat steps 2-5 a fixed number of times.
In steps 2 and 3, we calculate the values of A and Z as mentioned before and compute the cost. This part is called forward propagation.
In step 4, we compute the gradients (backward propagation):
dZ = dJ/dZ = A - Y
dW = dJ/dW = (1/m) * (A - Y) . X^T
db = dJ/db = (1/m) * sum(A - Y)
where X^T is the transpose of X.
In the above diagram, backward propagation is highlighted by red colored line. From the point of view of logical flow of the network, backward propagation starts from the cost and reaches W. The intuition is we need to update the parameters(W and b) of the model to minimze cost, and in order to do that we need to find the derivative of cost w.r.t the parameters we want to update. However, cost is not directly dependent on parameter(W and b) but on functions(A and Z) which uses these parameters. Hence we need to use chain rule to calculate the derivative of cost w.r.t to parameters. Each derivative term in the chain rule happens at a different part in the model, which starts at cost and flows backward.
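A standard way to sanity-check these derivative formulas is a finite-difference gradient check: nudge one weight by a tiny epsilon and compare the change in cost against the analytic gradient. This sketch uses random data (the shapes, seed, and helper names are mine, not from the post):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def cost(w, b, X, Y):
    # Negative log likelihood, element-wise form
    m = X.shape[1]
    A = sigmoid(np.dot(w, X) + b)
    return float(-(1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                        # 5 pixels, 8 examples
Y = rng.integers(0, 2, size=(1, 8)).astype(float)
w = rng.normal(size=(1, 5)) * 0.01
b = 0.0

# Analytic gradient: dW = (1/m) * (A - Y) . X^T
m = X.shape[1]
A = sigmoid(np.dot(w, X) + b)
dw = (1 / m) * np.dot(A - Y, X.T)

# Numeric estimate of the same derivative for the first weight
eps = 1e-6
w_plus, w_minus = w.copy(), w.copy()
w_plus[0, 0] += eps
w_minus[0, 0] -= eps
dw_numeric = (cost(w_plus, b, X, Y) - cost(w_minus, b, X, Y)) / (2 * eps)

print(dw[0, 0], dw_numeric)  # the two values should agree closely
```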
In step 5, we need to update the parameters as follows:
W = W - alpha * dW
b = b - alpha * db
Here alpha is a parameter called the learning rate. It controls how big the update (or step) is in each iteration. If alpha is too small, it may take a long time to find the best parameters; if alpha is too big, we may overshoot and never reach the optimal parameters.
In step 6, we need to repeat the steps a fixed number of times. There is no rule as such for how many iterations to run; it varies from dataset to dataset. If we set alpha to a very small value, we may need to iterate more times. Generally it's a hyperparameter which we have to tune.
That's all we need to know to implement a single neuron neural network.
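Before moving to the real image data, here is a compact sketch of the whole recipe on synthetic data. Everything in it — shapes, seed, learning rate, iteration count — is illustrative and not from the original post:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 200))                    # 3 features, 200 examples
true_w, true_b = np.array([[1.0, -2.0, 0.5]]), 0.3
Y = ((np.dot(true_w, X) + true_b) > 0).astype(float)  # separable labels

w, b = np.zeros((1, 3)), 0.0                     # step 1: initialize
m = X.shape[1]
alpha = 0.1
for i in range(500):                             # step 6: iterate
    Z = np.dot(w, X) + b                         # step 2: forward propagation
    A = 1 / (1 + np.exp(-Z))
    A = np.clip(A, 1e-12, 1 - 1e-12)             # keep log() finite
    J = -(1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))  # step 3: cost
    dw = (1 / m) * np.dot(A - Y, X.T)            # step 4: backward propagation
    db = (1 / m) * np.sum(A - Y)
    w -= alpha * dw                              # step 5: update
    b -= alpha * db

pred = (A > 0.5).astype(float)
accuracy = 100 * (1 - np.mean(np.abs(pred - Y)))
print(accuracy)
```

Because the labels were generated by a linear rule, gradient descent recovers a good decision boundary and the training accuracy ends up high.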
So to reiterate, the steps involved are: initialize the parameters; run forward propagation; compute the cost; run backward propagation to get the gradients; update the parameters; and repeat for a fixed number of iterations.
I will continue from where I stopped in the last article. I will continue with the same problem and same dataset.
Our problem statement was to predict the gender of the celebrity from the image.
After preprocessing, our final data sets were train_x(train data input) , y_train(target variable for the training set), test_x(test data input) , y_test(target variable for the testing set).
Let's take a quick look at the data attributes.
m_train = train_x.shape[1]
m_test = test_x.shape[1]
num_px = 64
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x shape: " + str(train_x.shape))
print ("y_train shape: " + str(y_train.shape))
print ("test_x shape: " + str(test_x.shape))
print ("y_test shape: " + str(y_test.shape))
Number of training examples: m_train = 80
Number of testing examples: m_test = 20
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_x shape: (12288, 80)
y_train shape: (1, 80)
test_x shape: (12288, 20)
y_test shape: (1, 20)
Step 1) Initialize parameters i.e W and b
Let's write a function to initialize W and b. There are different intialization techniques. For this exercise, we will intialize both W and b to zero.
def initialize_with_zeros(dim):
    #The function takes in a parameter dim, which is equal to the number of pixels (features) in the dataset
    w = np.zeros((1,dim))
    b = 0
    assert(w.shape == (1, dim)) #Assert statements ensure w and b have the required shapes
    assert(isinstance(b, float) or isinstance(b, int))
    return w, b
Steps 2, 3 and 4 Forward Propagation, Cost computation and Backward propagation
We will define a sigmoid function first, which will take any array or vector as an input and returns the sigmoid of the input.
def sigmoid(z):
    s = 1/(1+np.exp(-z))
    return s
Now let's write a function called propagate, which will take W(weights),b(bias),X(input matrix) and Y(target variable) as inputs. It should return cost and gradients dW and db.
We need to calculate the following:
A = sigmoid(W.X + b)
cost = -(1/m) * ( Y . log(A^T) + (1 - Y) . log((1 - A)^T) )
dW = (1/m) * (A - Y) . X^T
db = (1/m) * sum(A - Y)
where '.' indicates matrix multiplication. In python, np.dot(numpy.dot) function is used for matrix multiplication.
def propagate(w, b, X, Y):
    """
    Arguments:
    w -- weights, a numpy array of size (1, num_px * num_px * 3)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if male celebrity, 1 if female celebrity) of size (1, number of examples)
    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b
    """
    m = X.shape[1]
    # FORWARD PROPAGATION (FROM X TO COST)
    A = sigmoid(np.dot(w,X)+b) # compute sigmoid - np.dot is used for matrix multiplication
    cost = (-1/m)*(np.dot(Y,np.log(A.T)) + np.dot((1-Y),np.log((1-A).T))) # compute cost
    # BACKWARD PROPAGATION (TO FIND GRAD)
    dw = (1/m)*np.dot((A-Y),X.T)
    db = (1/m)*np.sum((A-Y))
    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost) #to make cost a scalar, i.e. a single value
    assert(cost.shape == ())
    grads = {"dw": dw,
             "db": db}
    return grads, cost
Steps 5 and 6 Optimization: Update parameters and iterate
Let's define a function called optimize, which will repeat steps 2 through 5 a given number of times.
Steps 2 till 4 can be computed by calling the propagate function. We need to define step 5 here, i.e. the parameter updates. The update rules are:
W = W - alpha * dW
b = b - alpha * db
where alpha is the learning rate.
After iterating through the given number of iterations, this function should return the final weights and bias.
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    costs = []
    for i in range(num_iterations): #This will iterate i from 0 till num_iterations-1
        # Cost and gradient calculation
        grads, cost = propagate(w, b, X, Y)
        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]
        # update rule
        w = w-learning_rate*dw
        b = b-learning_rate*db
        # Record the cost for every 100th iteration
        if i % 100 == 0:
            costs.append(cost)
        # Print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
    # plot the cost
    plt.rcParams['figure.figsize'] = (10.0, 10.0)
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()
    params = {"w": w,
              "b": b}
    grads = {"dw": dw,
             "db": db}
    return params, grads, costs
Prediction using learned parameters
From the previous function we will get the final weights and bias. We can use those to predict the target variable (gender) on new data (the test data). Let's define a function for prediction. If the predicted probability is 0.5 or less, the image will be classified as 0 (male), else 1 (female).
def predict(w, b, X):
    m = X.shape[1]
    A = sigmoid(np.dot(w,X)+b)
    Y_prediction = np.round(A)
    assert(Y_prediction.shape == (1, m))
    return Y_prediction
Putting everything together
Let's put training and prediction into a single function called model, which will train the model on the training data, predict on the test data, and return the accuracy of the model. Since we predict 0 or 1, we can calculate accuracy using the formula:
accuracy = 100 * (1 - mean(|Y_predicted - Y_actual|))
It indicates what percentage of images have been correctly classified. You can define any accuracy or evaluation metric; however, in this series we will use accuracy as defined above.
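As a minimal sketch of the formula above in NumPy (the helper name `accuracy` is my own, not from the post):

```python
import numpy as np

# Labels are 0/1, so |prediction - truth| is 1 exactly on the
# misclassified examples; its mean is the error rate.
def accuracy(y_pred, y_true):
    return 100 * (1 - np.mean(np.abs(y_pred - y_true)))

print(accuracy(np.array([1, 0, 1, 1]), np.array([1, 0, 0, 1])))  # prints 75.0
```

This is exactly the expression used in the model function below to report train and test accuracy.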
def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    """
    Builds the logistic regression model by calling the functions defined above.
    """
    # Initialize parameters with zeros
    m_train = X_train.shape[0]
    w, b = initialize_with_zeros(m_train)
    # Gradient descent
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations=num_iterations, learning_rate=learning_rate, print_cost=print_cost)
    # Retrieve parameters w and b from the dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]
    # Predict on the test and train sets
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)
    # Print train/test accuracy
    print("train accuracy: {} %".format(100 * (1 - np.mean(np.abs(Y_prediction_train - Y_train)))))
    print("test accuracy: {} %".format(100 * (1 - np.mean(np.abs(Y_prediction_test - Y_test)))))
    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train": Y_prediction_train,
         "w": w,
         "b": b,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}
    return d
d = model(train_x, y_train, test_x, y_test, num_iterations = 1000, learning_rate = 0.005, print_cost = True)
Cost after iteration 0: 0.693147
Cost after iteration 100: 0.325803
Cost after iteration 200: 0.209219
Cost after iteration 300: 0.159637
Cost after iteration 400: 0.128275
Cost after iteration 500: 0.106781
Cost after iteration 600: 0.091209
Cost after iteration 700: 0.079450
Cost after iteration 800: 0.070282
Cost after iteration 900: 0.062948
train accuracy: 100.0 %
test accuracy: 65.0 %
The accuracy of the model is around 65% with learning rate = 0.005 and number of iterations = 1000. We can probably achieve somewhat better results by tuning these two parameters.
Now, let's take a look at the mislabeled, i.e., wrongly predicted, images.
def print_mislabeled_images(classes, X, y, p):
    """
    Plots images where the predictions and the truth differ.
    X -- dataset
    y -- true labels
    p -- predictions
    """
    # p + y == 1 exactly where prediction and truth disagree (0+1 or 1+0)
    a = p + y
    mislabeled_indices = np.asarray(np.where(a == 1))
    plt.rcParams['figure.figsize'] = (40.0, 40.0)  # set default size of plots
    num_images = len(mislabeled_indices[0])
    for i in range(num_images):
        index = mislabeled_indices[1][i]
        plt.subplot(2, num_images, i + 1)
        plt.imshow(X[:, index].reshape(64, 64, 3), interpolation='sinc')
        plt.axis('off')
        plt.rc('font', size=20)
        plt.title("Prediction: " + classes[int(p[0, index])] + " \n Class: " + classes[int(y[0, index])])
print_mislabeled_images(classes, test_x, y_test, d["Y_prediction_test"])
Output:
So now we have completed training a single-node neural network, achieving an accuracy of 65%. Not bad for a single neuron, i.e., simple logistic regression. It's a bit of a long post, but understanding the basics is the key to understanding more complex algorithms. The sigmoid function (and similar activation functions) is a building block for neural networks, deep learning and AI. I hope this article gave a good intuition about the sigmoid function and the neural network approach.
Building on top of this article, in the next post I will talk about how to train a multi-layer neural network.
Before wrapping up, I will try to show what the neuron has learned at the end of training. Now, this part is not for the weak-hearted. Continue only if you are brave and curious :D. Let's use the final weights to multiply the corresponding pixels in the training data and scale by a factor of 255, since we divided the pixels by 255 for standardization.
Now let's plot an image from the reconstructed data.
test=d["w"].T*train_x*255
test=test.T.reshape(80,64,64,3)
plt.rcParams['figure.figsize'] = (10.0, 10.0)
plt.imshow(test[0], interpolation='sinc')
You may find the image artistic, scary, or weird. Nevertheless it's still very interesting, at least for me ;). For plotting the above image I used sinc interpolation. We can try different interpolations and see the effects:
methods = ['none', 'bilinear', 'bicubic', 'spline36', 'sinc', 'lanczos']
fig, axes = plt.subplots(2, 3, figsize=(12, 12))
for ax, interp_method in zip(axes.flat, methods):
    plt.rc('font', size=15)
    ax.imshow(test[0], interpolation=interp_method, cmap=None)
    ax.set_title(interp_method)
plt.show()
Output:
Let's create a montage and compare the reconstructed images vs original.
The above function can be used to create montages. Now let's combine some of the reconstructed images and original data and create a montage.
compare = np.concatenate((test[52:54], data[52:54]), axis=0)
compare.shape
(4, 64, 64, 3)
Now let us try to create the montage with two different interpolations.
plt.imshow(montage(compare,saveto='montage.png'),interpolation='spline36')
plt.show()
plt.imshow(montage(compare,saveto='montage.png'),interpolation='bicubic')
plt.show()
Output:
If you look carefully, in the reconstructed image, hair colors of the image have been captured differently. This is an indication that the algorithm has learned some of the facial features from the data.
We can also generate the montage with different interpolations for comparison:
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
for ax, interp_method in zip(axes.flat, ['spline36', 'bicubic']):
    ax.imshow(montage(compare, saveto='montage.png'), interpolation=interp_method, cmap=None)
    ax.set_title(interp_method)
plt.show()
Output:
Images are very interesting. We can find fascinating patterns and visualize how an algorithm learns to identify patterns in an image. It always amazes me. On that note, I am putting my pen down for this article. In the next article, I will talk about multi-layer neural networks and try to explore what the neurons have learned from the images.
References:
'Neural Networks and Deep Learning' on Coursera by Andrew Ng
Calculus by Gilbert Strang
https://www.datasciencecentral.com/profiles/blogs/image-processing-and-neural-networks-intuition-part-2
Interfacing L298N H-bridge motor driver with raspberry pi
First things first: if you have not checked out my previous blog about how to set up a Raspberry Pi without an HDMI cable and monitor, have a quick read of that blog first.
Now, let's continue with our journey. At this point you are probably wondering: what journey, is there a destination? Well, there is. I will talk about it explicitly in another, separate blog (yeah, I am a little lazy about writing blogs, sorry ;( ).
However, as a little sneak peek into this project: this robot has motors (surprise, surprise ;p) to power the wheels. To protect the controller from incompatible current draw, a motor driver is used. To learn more about what motor drivers are and why they are used, please read this amazing blog.
What is a motor driver and why do we need it with the Raspberry Pi
The motors require an amount of current that is larger than the amount of current that the Raspberry Pi could handle…
In this blog, we will interface the L298N H-bridge with the Raspberry Pi and run a script on the Pi to move the robot. This will be divided into 2 sections.
- Hardware Integration
- Software Program and testing
Let’s go.
- Hardware Integration
First, let’s talk about L298N.
We have 4 6V DC motors and only two motor outputs; therefore, 2 motors will share each motor output from the H-bridge. Hence, 2 motors connect to the Motor A output and the other 2 motors connect to the Motor B output.
Before we connect the motor driver to the Pi, let's take care of the battery pack which will power the motors. The +ve of the battery pack gets connected to the power supply terminal shown in the image, and the -ve of the battery pack gets connected to GND.
Once the battery pack is connected properly to the H-bridge, a red light will start glowing.
Now, let’s connect the H-bridge with raspberry pi.
There can be many combinations for connecting L298N to pi. This is one of them:
2. Software Program and testing
Now that the H-bridge hardware has been interfaced with the Pi, let's write a Python program to run this hardware and see some action ;p.
import RPi.GPIO as gpio
import time

def init():
    gpio.setmode(gpio.BCM)
    gpio.setup(17, gpio.OUT)
    gpio.setup(22, gpio.OUT)
    gpio.setup(23, gpio.OUT)
    gpio.setup(24, gpio.OUT)

def forward(sec):
    init()
    gpio.output(17, False)
    gpio.output(22, True)
    gpio.output(23, True)
    gpio.output(24, False)
    time.sleep(sec)
    gpio.cleanup()

def reverse(sec):
    init()
    gpio.output(17, True)
    gpio.output(22, False)
    gpio.output(23, False)
    gpio.output(24, True)
    time.sleep(sec)
    gpio.cleanup()

def left_turn(sec):
    init()
    gpio.output(17, True)
    gpio.output(22, False)
    gpio.output(23, True)
    gpio.output(24, False)
    time.sleep(sec)
    gpio.cleanup()

def right_turn(sec):
    init()
    gpio.output(17, False)
    gpio.output(22, True)
    gpio.output(23, False)
    gpio.output(24, True)
    time.sleep(sec)
    gpio.cleanup()

seconds = 3

time.sleep(seconds)
print("forward")
forward(seconds)
time.sleep(seconds - 2)

print("right")
right_turn(seconds)
time.sleep(seconds - 2)

time.sleep(seconds)
print("forward")
forward(seconds)
time.sleep(seconds - 2)

print("right")
right_turn(seconds)
time.sleep(seconds - 2)
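As a side note, the four motion helpers differ only in the HIGH/LOW pattern written to the same four BCM pins, so a small lookup table could replace the duplication. A sketch (the table-driven layout is my own suggestion, not from the post; the GPIO writes would then loop over the returned pairs):

```python
# BCM pins used in the script above, and the output pattern per direction.
PINS = (17, 22, 23, 24)
PATTERNS = {
    "forward": (False, True, True, False),
    "reverse": (True, False, False, True),
    "left":    (True, False, True, False),
    "right":   (False, True, False, True),
}

def pin_levels(direction):
    """Return the (pin, level) pairs to write for the given direction."""
    return list(zip(PINS, PATTERNS[direction]))
```

On the Pi, `for pin, level in pin_levels("forward"): gpio.output(pin, level)` would then drive the motors.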
Code is available at
SharadRawat/pi_sensor_setup
Let’s run this basic program.
Yayee, that feels good. So we have the motors running for this robot now.
For the next post, I will be writing about interfacing of other sensors to the raspberry pi.
Thanks for reading. If you like this blog, please clap.
Next up, read about the two types of perception sensors I am using for my robot. The problems, solutions and procedure is described in the blog below.
https://sharad-rawat.medium.com/interfacing-l298n-h-bridge-motor-driver-with-raspberry-pi-7fd5cb3fa8e3?responsesOpen=true&source=user_profile---------4----------------------------
3 Comments
Eder Villanueva
Hi, it's a nice example, but what if I need to use Spring, like this:
MathService.groovy (interface)
MathServiceImpl.groovy (implementation)
WSServer.groovy (Publish WS)
How can I do this? Can you post an example, please?
thanks
Bob Stevenson
I've been trying to get this simple example to work for a while with no luck... I am a novice at web services and Groovy, so I am likely just missing something here.
Here's my MathService class:
package com.groovy.service;

public class MathService {
    double add(double arg0, double arg1) {
        return arg0 + arg1
    }
    double square(double arg0) {
        return arg0 * arg0
    }
}
Here's the simple driver:
package com.groovy.service;
import groovyx.net.ws.WSServer
class MathServiceLauncher {
static main(args)
}
When I launch the driver, I get the following message: java.lang.ClassNotFoundException: MathService
I don't seem to know what I am doing wrong or more likely, what piece of the puzzle I am missing as I am trying to start the server from within Eclipse.
Any help would be greatly appreciated.
Christopher Turner
Looks like you are not using groovy. You can run the sample pretty much as-is. I just moved the import up to the top and put the whole script in as below. The interesting bit was getting Jetty to stop, as the script terminates but jetty keeps on going.
You can test the service simply by putting this in your browser: ""
http://docs.codehaus.org/pages/viewpage.action?pageId=120258717
Simple BitBound ChEMBL similarity search
This is part of a series of essays on how to write a similarity search program for the RDKit Morgan fingerprints distributed by ChEMBL.
- Simple FPS fingerprint similarity search: variations on a theme
- Simple k-NN FPS Tanimoto and cosine similarity search
- Simple in-memory ChEMBL similarity search
- Simple BitBound ChEMBL similarity search
- Using RDKit BulkTanimotoSimilarity
- Faster in-memory ChEMBL search by using more C
- Faster BitBound ChEMBL search by using more C
- Even faster in-memory search with intersection popcount
- Cache and reuse popcount-sorted ChEMBL fingerprints
In yesterday's essay I changed the scan-based Tanimoto search to an in-memory search and showed that after a start-up cost of about 5 seconds I was able to do about 2 searches per second of the 1.9 million ChEMBL fingerprints.
I ended by pointing out how chemfp was over 100x faster.
How is chemfp so much faster? One clear reason is the search code is all implemented in C. A comparison test like "score >= threshold" can be done in one CPU cycle, but the equivalent in Python likely takes thousands of CPU cycles. Another is Python's function call overhead - I suspect it takes longer to set up the call to byte_tanimoto_256() and convert the result to Python than it does to compute the actual Tanimoto.
The chemfp paper covers all of the techniques I used, and reviews the other ones I know about. In this essay I will implement the BitBound algorithm, which is a simple technique that can often reduce the number of fingerprints to test.
If you want to skip all the discussion, here are the instructions to use the final code, which assumes you have a gcc or clang C compiler and that RDKit's Python bindings are installed:
- Download chembl_27.fps.gz.
- Download the updated popc.py.
- Run
python popc.py to generate the _popc module.
- Download chembl_bitbound_search.py.
- Run
python chembl_bitbound_search.py --times to do the default caffeine search and see the internal timings.
- Run
python chembl_bitbound_search.py --help to see the command-line help, then try your own queries, including with different thresholds.
The new program averages about 0.3 seconds per query with the default threshold of 0.7, and with an initial load time of about 8 seconds.
Update: A week later I implemented faster versions, first by reorganizing the data so I can do more work in C instead of Python, and second by replacing the full byte_tanimoto_256() calculation with pre-computed query and target popcounts and a byte_intersect_256() calculation.
BitBound algorithm
Swamidass and Baldi pointed out that if a query fingerprint A has a bits set, and the goal is to find target fingerprints Bi which have a Tanimoto score at least T similar to the query, then we can set bounds on the number of bits b in the target fingerprints. More specifically: a T ≤ b ≤ a / T.
(They generalize this calculation to Tversky similarity, and also describe a search method to improve nearest-neighbor search.)
What this means is, if the number of bits in the target fingerprints can be pre-computed then those fingerprints which are out-of-bounds for a given query don't need to be considered.
There are a few ways to implement this. One is to add the pre-computed popcount to each record; however, this still means testing every record. Another is to sort the records into different bins, one for each popcount; then the search need only test the bins which might contain a similarity match. I'll implement this second approach.
Compute the fingerprint popcount
I need a way to compute the popcount of each of the target fingerprints. I decided to use byte strings for them and used cffi to write popc.py, a program that generates the _popc module with the Tanimoto similarity and cosine similarity functions, hard-coded for 2048-bit fingerprints. I'll extend that module so it implements byte_popcount_256(), to compute the popcount of a 256-byte byte string.
The implementation is a straight-forward simplification of the similarity functions, so I'll present it here without further description (see popc.py for the full implementation):
static int byte_popcount_256(const unsigned char *fp) { int num_words = 2048 / 64; int popcnt = 0; /* Interpret as 64-bit integers and assume possible mis-alignment is okay. */ uint64_t *fp_64 = (uint64_t *) fp; for (int i=0; i<num_words; i++) { popcnt += __builtin_popcountll(fp_64[i]); } return popcnt; }
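For cross-checking the C version from Python, here is an equivalent (and much slower) pure-Python popcount — my own sketch, not part of the popc module:

```python
# Count the set bits in a byte string; for a 256-byte fingerprint this
# matches what byte_popcount_256() computes in C.
def byte_popcount(fp: bytes) -> int:
    return sum(bin(b).count("1") for b in fp)
```

Handy as a reference implementation when testing the compiled module against known inputs.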
Pre-compute the target fingerprints and put into bins
Just like in yesterday's essay, I'll first read the fingerprints into memory and then search them. However, this time I'll organize the fingerprints by popcount. There are 2049 possible values (from 0 bits set to all 2048 bits set), so I'll create a list of 2049 empty lists. For each fingerprint record, I'll compute the fingerprint popcount and append it to the appropriate list. Here's what that code looks like:
# Load targets into a popcount bins, ordered by the fingerprint popcount. # Each bin contains a list of (id, fingerprint) pairs. t1 = time.time() # Create 2049 bins, each with an empty list, ordered by popcount. target_popcount_bins = [[] for i in range(2049)] for id_fp_pair in read_fingerprints(open(args.targets)): # Place each record into the right bin. popcount = byte_popcount_256(id_fp_pair[1]) target_popcount_bins[popcount].append(id_fp_pair) t2 = time.time() load_time = t2-t1
Determine the bounds
It seems easy to compute the bit bounds with something like:
query_popcount = byte_popcount_256(query_fp) min_popcount = query_popcount * threshold max_popcount = query_popcount / threshold
and then use the integer popcount values between min_popcount and max_popcount. The lowest popcount value is the integer greater than or equal to min_popcount and the highest popcount value is the integer no greater than max_popcount. You might think this would work:
# This is wrong! import math query_popcount = byte_popcount_256(query_fp) min_popcount = math.ceil(query_popcount * threshold) max_popcount = math.floor(query_popcount / threshold) for popcount in range(min_popcount, max_popcount+1): ...
In practice, this is tricky because the calculations will use IEEE 754 doubles. Some operations which should ideally result in an integer value ... don't. Here is an example where the min_popcount would be wrong:
>>> threshold = 0.55 >>> query_popcount = 1580 >>> threshold * query_popcount 869.0000000000001 >>> import math >>> math.ceil(threshold * query_popcount) 870
and here's an example where the max_popcount would be wrong:
>>> threshold = 0.55 >>> query_popcount = 396 >>> query_popcount / threshold 719.9999999999999 >>> math.floor(query_popcount / threshold) 719
Solution #1: change by epsilon
One solution, which will work for normal threshold values, is to add or subtract a small value to ratio before doing the floor or ceiling call.
>>> import math >>> EPSILON = 1e-10 >>> threshold = 0.55 >>> threshold * 1580 869.0000000000001 >>> math.ceil(threshold * 1580 - EPSILON) 869 >>> 396 / threshold 719.9999999999999 >>> math.floor(396 / threshold) 719 >>> math.floor(396 / threshold + EPSILON) 720
This works because 1) it's chemically unreasonable to specify a threshold with more than 3 significant digits, and 2) two distinct fingerprint scores cannot differ by less than 1/(2048×2048).
Solution #2: use rationals
I think a better solution is to do the calculation using rationals, so the operations can be done with integers. (This is especially important if you want to handle the Tversky BitBounds; in chemfp I gave up trying to get IEEE 754 math to work correctly and ended up multiplying alpha and beta values by 10,000 to work with integers.)
Since I'm using Python, I can use Python's built-in fractions module, which implements a rational data type:
>>> import math, fractions >>> threshold = fractions.Fraction("0.55") >>> math.ceil(threshold * 1580) 869 >>> math.floor(396 / threshold) 720
(I actually used the fractions module to find the examples where the simple application of IEEE 754 doubles causes problems.)
Only a few minor changes are needed to use a fraction for the threshold, starting with the argparse --threshold parameter:
parser.add_argument("--threshold", "-t", type=fractions.Fraction, default=fractions.Fraction("0.7"), help="minimum threshold (default: 0.7)")

For performance reasons, I keep track of both the fraction threshold (only used to find the popcount bounds) and the floating point value (used when checking the Tanimoto score):
fraction_threshold = args.threshold threshold = float(fraction_threshold)
The popcount bounds calculation, when using a fraction threshold, is the straight-forward, expected one:
min_popcount = 0 max_popcount = 2048 for query_id, query_fp in query_reader: ... query_popcount = byte_popcount_256(query_fp) if threshold > 0.0: min_popcount = math.ceil(query_popcount * fraction_threshold) max_popcount = math.floor(query_popcount / fraction_threshold) if max_popcount > 2048: max_popcount = 2048
The check for threshold > 0.0 is to prevent divide-by-zero errors. Of course, if the threshold is 0.0 then everything will match, and there will be a lot of overhead just printing the results.
Search the appropriate bins
Now that I have the bounds I can get the bins that are in range, and for each bin check each of the target fingerprints:
for targets in target_popcount_bins[min_popcount:max_popcount+1]: for target_id, target_fp in targets: score = similarity_function(query_fp, target_fp) # If it's similar enough, print details. if score >= threshold: has_match = True print(f"{query_id}\t{target_id}\t{score:.3f}")
Does it work?
The final code is in chembl_bitbound_search.py. But does it work?
I used chembl_in_memory_search.py from yesterday's essay to process the first 1,000 entries from the SMILES downloaded from the Wikipedia Chemical Structure Explorer, then processed the same entries with chembl_bitbound_search.py, then compared the two. The two programs have a different search order so the outputs cannot be compared directly. Instead, I compared the sorted outputs to each other (and excluded the warning and error messages in the following):
% python chembl_in_memory_search.py --times --queries wikipedia_1000.smi | sort > old.txt load: 5.27 search: 491.73 #queries: 994 #/sec: 2.02 % python chembl_bitbound_search.py --times --queries wikipedia_1000.smi | sort > new.txt load: 8.10 search: 297.42 #queries: 994 #/sec: 3.34 % cmp old.txt new.txt
They matched, and it was almost 40% faster. So that's a good first test. I'll also run with a higher threshold to see if the overall performance changes:
% python chembl_bitbound_search.py --times --queries wikipedia_1000.smi --threshold 0.9 > /dev/null load: 5.53 search: 81.74 #queries: 994 #/sec: 12.16
Nice speedup!
A more sensitive test would be to construct fingerprints with bit patterns that trigger edge cases in the code, and to add instrumentation to ensure the BitBound calculation is not accidentally too wide. That's too much for this essay.
Improving the Tanimoto calculation performance
If the target fingerprints are sorted by popcount then we know that all fingerprints in a given bin have the same popcount. We also know the number of bits in the query fingerprint. This can be used to improve the search performance by reducing the amount of work needed to compute the Tanimoto.
The heart of the current code, in byte_tanimoto_256(), is:
for (int i=0; i<num_words; i++) { intersect_popcount += __builtin_popcountll(fp1_64[i] & fp2_64[i]); union_popcount += __builtin_popcountll(fp1_64[i] | fp2_64[i]); } return ((double) intersect_popcount) / union_popcount;
If the query and target popcounts are known then this reduces to:
for (int i=0; i<num_words; i++) { intersect_popcount += __builtin_popcountll(fp1_64[i] & fp2_64[i]); } return ((double) intersect_popcount) / (query_popcount + target_popcount - intersect_popcount);
I'll leave that as an exercise for the student.
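For reference, here is one possible fill-in of that exercise — a sketch of my own, not chemfp's actual code:

```c
#include <stdint.h>

/* Tanimoto for 2048-bit fingerprints when both popcounts are already
   known: only the intersection popcount needs to be computed, since
   |A or B| = |A| + |B| - |A and B|. */
static double
byte_tanimoto_precalc_256(const unsigned char *fp1, const unsigned char *fp2,
                          int query_popcount, int target_popcount) {
    int num_words = 2048 / 64;
    int intersect_popcount = 0;
    /* Interpret as 64-bit integers, as in byte_tanimoto_256(). */
    const uint64_t *fp1_64 = (const uint64_t *) fp1;
    const uint64_t *fp2_64 = (const uint64_t *) fp2;
    for (int i = 0; i < num_words; i++) {
        intersect_popcount += __builtin_popcountll(fp1_64[i] & fp2_64[i]);
    }
    return ((double) intersect_popcount) /
           (query_popcount + target_popcount - intersect_popcount);
}
```

This halves the per-word popcount work compared to computing both the intersection and the union.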
Skipping the division in the Tanimoto calculation
Another possible speedup is to avoid the division in the Tanimoto calculation altogether. If you work out the math you'll find that the target fingerprint will pass the test if the intersection popcount C is:
C ≥ threshold * (query_popcount + target_popcount) / (1 + threshold)
This calculation is another place where it's easier to calculate using rationals than with IEEE 754 doubles.
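As a quick sanity check — my own sketch, not code from the essay — Python's fractions module can confirm that the division-free test is equivalent to the threshold test over a range of small popcounts:

```python
from fractions import Fraction

def tanimoto_passes(C, a, b, T):
    # Direct test: C / (a + b - C) >= T
    return Fraction(C, a + b - C) >= T

def division_free_passes(C, a, b, T):
    # Rearranged test: C >= T * (a + b) / (1 + T)
    return C >= T * (a + b) / (1 + T)

# Exhaustively compare both tests for small query/target popcounts.
T = Fraction(7, 10)
for a in range(1, 20):
    for b in range(1, 20):
        for C in range(0, min(a, b) + 1):
            assert tanimoto_passes(C, a, b, T) == division_free_passes(C, a, b, T)
```

Using Fraction keeps the comparison exact, sidestepping the IEEE 754 issues discussed earlier.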
chemfp advertisement
I've been trying to make the point that while it's easy to implement a similarity search program, there are a lot of complicated details in making a fast similarity search program.
You could go ahead and spend the days or weeks to implement these features yourself, or you could install and test chemfp (under the Base License Agreement) by doing:
python -m pip install chemfp -i
The Base License Agreement does not allow you to create FPB files or do an in-memory search with more than 50,000 fingerprints. If you are interested in those features, take a look at the other licensing options then contact me to ask for an evaluation license key.
Andrew Dalke is an independent consultant focusing on software development for computational chemistry and biology. Need contract programming, help, or training? Contact me
http://www.dalkescientific.com/writings/diary/archive/2020/10/01/simple_bitbound_search.html
less-sprites uses ImageMagick, so install it first.
npm install less-sprites
Write a list of source images into a
.json file:
{ "files": ["icon1.png", "icon2.png"] }
Create the sprite:
less-sprites my-sprite.json
There are more options you can specify:
{
    // Direction of image placement, default "bottom"
    "direction": "right|bottom",
    // Directory relative to the .json file where source files are located
    "dir": "icons-sprite",
    // List of source images (without directory) or "*" to use all PNG files
    "files": ["icon1.png", "icon2.png"],
    // Location and name of the final sprite, default is same as the .json file.
    "sprite": "icons-sprite.png",
    // The http path to the image (default: /images)
    "httpPath": "/images",
    // Space between the images in the sprite, default 0
    "spacing": 50,
    // Enable retina support; place all retina images in a directory with the same name ending in 2x, e.g.: icons-sprite2x
    "retina": true,
    // Location and name of the final LESS file, default is same as the .json file.
    "less": "../less/icon-sprite.less"
}
less-sprites my-sprite.json creates two files:
- my-sprite.png - the final sprite image
- my-sprite.less - positions of the images inside the sprite
In your stylesheet you target the original image, not the sprite; it will be translated during compilation.
### LESS with less-sprites

@import "icons/icons-sprite.less";
.icon-first { .sprite('icon1.png'); }
// enable auto dimensions
.icon-second { .sprite('icon2.png', true); }
which is later compiled into final CSS:
Now when you need to add a new image to the sprite, you simply add it to the .json file and call less-sprites.
No extra work is needed in your stylesheets.
### LESS with less-sprites and retina enabled in the sprite file

@import "icons/icons-sprite.less";
.icon-first { .sprite('icon1.png'); }
// enable auto dimensions
.icon-second { .sprite('icon2.png', true); }
which is later compiled into final CSS:
{}
If you @import several sprites into the global namespace there is a possibility of a name conflict (imagine referencing two images from two different places as ../image.png). The best way to avoid this is to always import inside a scope:

.my-icons {
    @import "...";
    .icon-first { .sprite('...'); }
}
The MIT License.
https://www.npmjs.com/package/less-sprites
I recently started a new open-source (and very small but useful) project.
Its purpose is to go beyond the limitations of DllImportAttribute, calling any function pointer, and not only those returned by GetProcAddress().
The next release of CsGL might drop extension support and advise you to use this library instead, which is why I'm asking for your feedback: does it look OK?
cheers,
Lloyd
- Version 1.4.1 released!
- A new forum! Again! This one should have a longer life and is quite nice ;-). Please try the mailing list too...
- The forum: Forum
- Basically it's a maintenance release with lots of debugging.
- The OpenGLContext creation has been rewritten/enhanced to be more easily customizable.
- The GL generator.exe generated incorrect code; it has been fixed (and the OpenGL_Extension file regenerated).
- A rarely seen issue with an incorrect calling convention on the DllImport attribute was fixed.
- OpenGLContexts can now share their display lists with other contexts; customize your context to take advantage of this feature and save video memory! Of course, the CsGL.OpenGL.ContextLocal class takes advantage of this feature.
- GDITextureFont now works very well and simply, and there is a basic example: <csgl>/examples/CS/gdifont.cs
- Mouse & keyboard event management now works very well and there is an associated example: <csgl>/examples/CS/event.cs
- Maintenance/compatibility release of CsGLExample & GLViewer.
- I found a new C#-OpenGL related project with some interesting stuff.
with some interesting stuff.=20
- Version 1.4.0 released!
- OpenGL 1.1, 1.2, 1.3, 1.4 & 50 additional extensions supported.
- If you have the latest extension header for SomeNiftyNewExtensionFromCompanyX you can now automatically generate the CsGL extension file with glgenerator.exe in the <CsGL>extras\generator directory. It comes both precompiled and with source. Check the README file first if you want to hack it.
- Mouse (including a Cursor utility) and Keyboard classes added to CsGL.Util.
- New related project: a skin file reader and cloth simulator... very nice demo!
- The NeHe tutorials have been taken out of the base distribution and will be found in a separate package, where a new developer (thanks Randy) is doing a really wonderful job and LOTS of tutorials.
- One last thing: I dropped Managed C++. This release is cleaner and is comprised only of C# and C. This will make it easier to port to other platforms... (provided you have System.Windows.Forms, of course)
- The regular CsGL forum is broken, and until it's fixed we're back to using the forum on SourceForge.
- BUG: There is a small pointer corruption somewhere which I have not been able to track down; I know of a very few cases where glGetString() and OpenFileDialog (?!) don't work as intended.
Hi folks,
you can find a beta of the coming 1.4 at
(browse to the bottom of the page; you should see some csgl.1.4.0.xxx.zip files)
It won't change much between now and release, except for this:
- a bit of change in CsGL.Util.Mouse
- I will add the Ridge & Dude NeHe tutorials (Ridge, could you send me the latest compatible version, please)
- some improvement still to do in the new CsGL.OpenGL.OpenGL_Extension class
This is a class of 10915 lines of code (I program fast, guys, don't I?!), generated by my new (still in beta) tool glgenerator (look in <csgl>/extra/generator), which creates C# extensions from header files. It contains 1.2, 1.3 and 1.4 calls along with a lot of extensions (thanks to Lev Povalahev, levp@...).
- version 1.3.2 released
- some minor bug fixes
- a simple doc shipped
- extension N_Vertex added
- GLViewer () now reads .3ds files and has a nice light-setting tool
- project Lilburn () started, a 3D engine library...
Hi guys, there is an update of CsGL just now (yeah, again). So, what's new?
- the OpenGLFont and OpenGLTexture2D are now ContextLocal: allocate once, use on any context easily...
- the GDITextureFont has a nicer output
- about 40 new safe functions
- a new related project, GLViewer
The new project (which I wrote too) is (in my opinion) a very nice generic 3D model viewer. This means you could write a SimpleGLModel:

public class MyModel : SimpleGLModel
{
    public override void Draw()
    {
        glSomething();
    }
}

and that's it, the viewer can display it, dynamically loading your dll... I think it could help...
You could also write an IGLModelReader interface implementation, put the dll in bin/filters, and the viewer will suddenly become able to open your file format.
That's it, enjoy and give me your feedback...
- version 1.1 released
- what you all waited for without daring to ask: a document-view update of CsGL. It committed me to a lot of changes. The new architecture lies mainly on 2 new objects: GLView, an OpenGLControl, displaying the new OpenGLModel object. It also has built-in navigation features.
- A document (that is, an OpenGLModel) can be seen in many different views at the same time. This led me to create the ContextLocal object, which transparently manages per-OpenGLContext instances of any object, and one of its most valuable subclasses: DisplayList.
- a new example (./examples/OO), which is quite unfinished now but provides a nice skeleton of a model-viewing application. It helps you get very rapidly into OpenGL code testing without bothering with all the init/GUI/etc. stuff.
- the binary distribution now contains lib and libinstall directories (built with the make and make install commands, respectively). The first one is an anonymous assembly (which cannot be installed and should lie in your application directory); the other one has a strong name and can be installed with the install.bat provided.
- Paul Michael Saunders (nus@...) takes the SDL project under his responsibility; it can now be found at
- a sample, unfinished implementation of OpenGL extensions
- Sad news: Jan Nockeman is too busy and leaves the project. Thank you and see you later, Jan.
- I removed OpenGLException.AssertGL() and broke it into 2 parts:
  - OpenGLException.Assert(), which tests only OpenGL errors
  - SystemError.Assert(), which tests win32 errors
  This for maintenance reasons.
- and of course the classical debugging & website updates
Hi folks.
I updated CSGL just now.
This is a minor update in code quantity but a big one in code quality.
Here are the changes:
1. CSGL is only shipped for OpenGL now.
2. CSGL is now a signed assembly, provided with an "install.bat" file which
can install/uninstall it
3. CSGL is shipped with its C# doc (though it is an awful doc system, in my
humble opinion)
4. lots of debugging, particularly with the Dispose() method
5. OpenGLControl is easier to create with a personalized PIXELFORMAT; just
override the CreateContext() method
6. ScreenForm works well now, no flickering, and cooperates with others of its
kind! and simpler code
7. of course the traditional site update
For all these reasons I call this release "1.0", as it is pretty complete.
The next lacking steps are (and contributors welcome, as I don't really work
full-time on it now...)
1. adding OpenGL extensions
2. providing VS.NET and NAnt build files
Hope you will enjoy.
Cheers,
Lloyd
Hello CsGL users,
CsGL 0.9.1 is out. It is the same as 0.9, but works now
with the final release of the .NET SDK.
Also please check out our site, which has been
completely redesigned.
[ ]
Greets,
Jan Nockemann
a new version.
few new features but a big structural change.
The SDL and OpenGL wrappers are now completely independent.
There are also more OpenGL web demos. Download the last csgl-native
version again, as it has changed.
And a project page with 2 projects.
BTW I am pretty happy with the library. I just plan to add OpenGL
extensions and Ben Houston's work to the library before closing
version 1.0, but this will take time as I will be doing a big trip. BTW all
your questions should better be directed to the list, as I am not even sure
to have a computer...
now version 0.2.2 is out.
what's new in so little time?
- a minor memory-corruption bug fix in OpenGL, and an unresolved but
commented bug with a multithreading issue for OpenGLContext
- SDL_mixer working, at last tested with playwave.cs in examples\SDL
I have reached what I think to be the minimum foundation API;
now I just plan bug fixes for it
and new APIs (& assemblies) for 3DS file loading and such stuff...
BTW is there any user feedback on GDITextureFont?
(just a note: I made a mistake while updating yesterday, correcting it now,
but the incorrect file could still be on the server for a few minutes...)
2 new interesting improvements for OpenGL:
a new OpenGL font class
a new OpenGLContext class with an interesting ToImage() method
some more methods with safe versions.
a new NeHe lesson (17) showing these 2 features in action..
coming: SDL sound (but I am having some trouble..)
after that...
CsGL (CSharp Graphic Library) will become
CsGL (CSharp Gaming Library)...
I will post a call for help soon, but I can already inform you that help
would be needed to design a widget API on top of OpenGL (using SDL for
events and the like...), to help with a demo game, and other such ideas (AI
? 3D engine ?...)
A new version of CSGL is out, and CVS is again up to date (I must admit it was outdated for a while...)
you can download release 0.2.0 at:
here is a summary of changes, as stated on the changelog page...
a.. web demo on the CsGL page ()
b.. namespace change: OpenGL becomes CsGL.OpenGL, SDL becomes CsGL.SDL
c.. the base class of OpenGL changes from GL to OpenGL, from which inherits GLU, from which inherits a new class, "GL", which provides
safe versions of OpenGL calls, with arrays, enumerations, etc...
d.. new objects in OpenGL: Point3D, Transform3D, Quaternion, ..
e.. new active contributor: Ben Houston
f.. new SDL classes (sound & CD-ROM)
g.. new high-level classes with cleaner code.
h.. NeHe tutorials: example 11 has been improved, and it could be transformed into a web demo with very few changes now.
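For readers wondering what the new Quaternion object is for: composing 3D rotations without gimbal lock. A minimal sketch of Hamilton-product composition in Python (not CsGL's actual API; quaternions are illustrated here as plain (w, x, y, z) tuples):

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples.
    Composing two rotations = multiplying their quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def axis_angle_z(theta):
    """Quaternion for a rotation of theta radians about the z axis."""
    return (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))
```

Two 90-degree turns about z composing into one 180-degree turn is the usual sanity check for the sign conventions.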
Ok, the problem was, for those who want to know: like in Java, you
cannot mix hosts with a "C# Applet", and I tried to download it from brinkster
in a page hosted by sourceforge...
ok, ok, I had to write a little ASP page on brinkster for the coming
examples (and I already have a volunteer, as I do not personally know
ASP).
all of this to say:
go to the web demo page now!
or
good week end..
woups, ... excuse the premature announcement,...
the web demo works perfectly locally (though the DLL is on the web) but
doesn't work well on the web....
I have to investigate more, it seems....
Anyway the code is better and I strongly advise you to check CVS.
I just added a "web demo" page. For now there is just one demo, and the
prerequisites could be awkward; does anyone have ideas? want to improve the
process?
But this should be nice!
check it now:
there is a lot of cleaning in the library, still not available as a ".tgz"
tarball, but you should check CVS; obviously a (minor) release is
planned soon (1 or 2 weeks)
particularly, the SDL video code is beginning to become nicer and I am
thinking of implementing the SDL sound code.
Hi Patricia
On 06/06/2016 03:27 PM, Patricia Garcia Cañadilla wrote:
Dear Robert,
I have a 2D body of hyperelastic material which contracts, and I would like to compute the total force developed by the body from the Cauchy stress. I am trying to follow some of your indications I found in this group, but I still couldn't make it work. Could you please help me fix the problem? I am getting the following error:

    key = (region.name, integral.order, integration)
    AttributeError: 'dict' object has no attribute 'name'
I am trying to do the following, inside the stress_strain post-processing function:

    def stress_strain(out, problem, state, extend=False):
        from sfepy.base.base import Struct
        from sfepy.mechanics.tensors import StressTransform
        from sfepy.mechanics.tensors import transform_data
        from sfepy.discrete.common.mappings import get_normals

        ev = problem.evaluate
        field = problem.fields['displacement']
        region = problem.domain.regions['Gamma']
        integral = problem.integrals['i2']
        n = get_normals(field, integral, regions)
Here you probably meant using 'region' instead of 'regions', right? regions is the dict in the module namespace of the problem description file, so the error is pretty easily overlooked :)
If that does not help, try sending a complete file that reproduces the problem.
r.
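For reference, the quantity being computed here is the surface integral of the traction t = σ·n over the boundary. A library-independent sketch in plain Python (2D, with stresses, outward normals, and quadrature weights already evaluated at the surface points — obtaining those values is the sfepy-specific part and is not shown):

```python
def total_force(stresses, normals, weights):
    """Net force on a 2D body: sum of w * (sigma . n) over the
    boundary quadrature points.

    stresses: list of 2x2 Cauchy stress matrices [[sxx, sxy], [syx, syy]]
    normals:  list of outward unit normals (nx, ny)
    weights:  list of quadrature weights (including surface jacobians)
    """
    fx = fy = 0.0
    for s, n, w in zip(stresses, normals, weights):
        tx = s[0][0] * n[0] + s[0][1] * n[1]  # traction x-component
        ty = s[1][0] * n[0] + s[1][1] * n[1]  # traction y-component
        fx += w * tx
        fy += w * ty
    return (fx, fy)
```

A uniform pressure acting all around a closed boundary must yield zero net force, which makes a handy check for the orientation of the normals and the weights.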
In response to a number of requests for good RNG's in
C, and mindful of the desirability of having a variety
of methods readily available, I offered several. They
were implemented as in-line functions using the #define
feature of C.
Numerous responses have led to improvements; the result
is the listing below, with comments describing the
generators.
I thank all the experts who contributed suggestions, either
directly to me or as part of the numerous threads.
It seems necessary to use a (circular) table in order
to get extremely long periods for some RNG's. Each new
number is some combination of the previous r numbers, kept
in the circular table. The circular table has to keep
at least the last r, but possibly more than r, numbers.
For speed, an 8-bit index seems best for accessing
members of the table---at least for Fortran, where an
8-bit integer is readily available via integer*1, and
arithmetic on the index is automatically mod 256
(least-absolute-residue).
Having little experience with C, I got out my little
(but BIG) Kernighan and Ritchie book to see if there
were an 8-bit integer type. I found none, but I did
find char and unsigned char: one byte. Furthermore, K&R
said arithmetic on characters was ok. That, and a study
of the #define examples, led me to propose #define's
for in-line generators LFIB4 and SWB, with monster
periods. But it turned out that char arithmetic jumps
"out of character", other than for simple cases such as
c++ or c+=1. So, for safety, the index arithmetic
below is kept in character by the UC definition.
Another improvement on the original version takes
advantage of the comma operator, which, to my chagrin,
I had not seen in K&R. It is there, but only with an
example of (expression,expression). From the advice of
contributors, I found that the comma operator allows
(expression,...,expression,expression) with the
last expression determining the value. That makes it
much easier to create in-line functions via #define
(see SHR3, LFIB4, SWB and FIB below).
The improved #define's are listed below, with a
function to initialize the table and a main program
that calls each of the in-line functions one million
times and then compares the result to what I got with
a DOS version of gcc. That main program can serve
as a test to see if your system produces the same
results as mine.
_________________________________________
|If you run the program below, your output|
| should be seven lines, each a 0 (zero).|
-----------------------------------------
Some readers of the threads are not much interested
in the philosophical aspects of computer languages,
but want to know: what is the use of this stuff?
Here are simple examples of the use of the in-line
functions: Include the #define's in your program, with
the accompanying static variable declarations, and a
procedure, such as the example, for initializing
the static variable (seeds) and the table.
Then any one of those in-line functions, inserted
in a C expression, will provide a random 32-bit
integer, or a random float if UNI or VNI is used.
For example, KISS&255; would provide a random byte,
while 5.+2.*UNI; would provide a random real (float)
from 5 to 7. Or 1+MWC%10; would provide the
proverbial "take a number from 1 to 10",
(but with not quite, but virtually, equal
probabilities).
More generally, something such as 1+KISS%n; would
provide a practical uniform random choice from 1 to n,
if n is not too big.
A key point is: a wide variety of very fast, high-
quality, easy-to-use RNG's are available by means of
the nine in-line functions below, used individually or
in combination.
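The construction described above can be sketched outside C as well. Here is a Python rendition of the CONG/SHR3/MWC/KISS combination, with explicit masking to emulate 32-bit unsigned arithmetic; the multiplier, shift, and seed constants follow one widely circulated version of this post, so treat them as illustrative rather than authoritative:

```python
M32 = 0xFFFFFFFF  # emulate 32-bit unsigned wraparound

def make_kiss(z=362436069, w=521288629, jsr=123456789, jcong=380116160):
    """Return a KISS-style generator: two multiply-with-carry streams,
    a 3-shift register, and a congruential stream, combined with
    xor and addition."""
    state = {"z": z, "w": w, "jsr": jsr, "jcong": jcong}

    def mwc():
        state["z"] = (36969 * (state["z"] & 65535) + (state["z"] >> 16)) & M32
        state["w"] = (18000 * (state["w"] & 65535) + (state["w"] >> 16)) & M32
        return ((state["z"] << 16) + state["w"]) & M32

    def shr3():
        j = state["jsr"]
        j ^= (j << 17) & M32
        j ^= j >> 13
        j ^= (j << 5) & M32
        state["jsr"] = j
        return j

    def cong():
        state["jcong"] = (69069 * state["jcong"] + 1234567) & M32
        return state["jcong"]

    def kiss():
        return ((mwc() ^ cong()) + shr3()) & M32

    return kiss
```

As in the post, `1 + kiss() % 10` then gives the proverbial "number from 1 to 10", and `kiss() * 2.328306e-10` a uniform float in (0, 1).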
The comments after the main test program describe the
generators. These descriptions are much as in the first
post, for those who missed them. Some of the
generators (KISS, MWC, LFIB4) seem to pass all tests of
randomness, particularly the DIEHARD battery of tests,
and combining virtually any two or more of them should
provide fast, reliable, long period generators. (CONG
or FIB alone and CONG+FIB are suspect, but quite useful
in combinations.)
Serious users of random numbers may want to
run their simulations with several different
generators, to see if they get consistent results.
These #define's may make it easy to do.
Bonne chance,
George Marsaglia
The C code follows---------------------------------:
#include <stdio.h>
/* Use random seeds to reset z,w,jsr,jcong,a,b, and the table t[256]*/
static UL;
}
/* This is a test main program. It should compile and print 7 0's. */
}
/*-----------------------------------------------------
Write your own calling program and try one or more of
the above, singly or in combination, when you run a
simulation. You may want to change the simple 1-letter
names, to avoid conflict with your own choices. */
/* All that follows is comment, mostly from the initial
post. You may want to remove it */
/* Any one of KISS, MWC, FIB, LFIB4, SWB, SHR3, or CONG
can be used in an expression to provide a random 32-bit
integer.
The KISS generator, (Keep It Simple Stupid), is
designed to combine the two multiply-with-carry
generators in MWC with the 3-shift register SHR3 and
the congruential generator CONG, using addition and
exclusive-or. Period about 2^123.
It is one of my favorite generators..
LFIB4 is an extension of what I C expression..
The classical Fibonacci sequence mod 2^32 from FIB
fails several tests. It is not suitable for use by
itself, but is quite suitable for combining with
other generators..
Finally, because many simulations call for uniform
random variables in 0<x<1 or -1<x<1, I use #define
statements that permit inclusion of such variates
directly in expressions: using UNI will provide a
uniform random real (float) in (0,1), while VNI will
provide one in (-1,1).
All of these: MWC, SHR3, CONG, KISS, LFIB4, SWB, FIB
UNI and VNI, permit direct insertion of the desired
random quantity into an expression, avoiding the
time and space costs of a function call. I call
these in-line-define functions. To use them, static
variables z,w,jsr,jcong,a and b should be assigned
seed values other than their initial values. If
LFIB4 or SWB are used, the static table t[256] must
be initialized.
A note on timing: It is difficult to provide exact
time costs for inclusion of one of these in-line-
define functions in an expression. Times may differ
widely for different compilers, as the C operations
may be deeply nested and tricky. I suggest these
rough comparisons, based on averaging ten runs of a
routine that is essentially a long loop:
for(i=1;i<10000000;i++) L=KISS; then with KISS
replaced with SHR3, CONG,... or KISS+SWB, etc. The
times on my home PC, a Pentium 300MHz, in nanoseconds:
FIB 49;LFIB4 77;SWB 80;CONG 80;SHR3 84;MWC 93;KISS 157;
VNI 417;UNI 450;
*/
I saw your original post; I wasn't aware that there was quite a thread
after it. If I wanted to nitpick about language issues, I'd note that as a
FORTRAN programmer, I am liable to use variables with names like "x" and
"y", and thus I would probably modify your preprocessor functions
accordingly.
2) more elaborate variations on the principle, using the basic
MacLaren-Marsaglia generator as a building block, are possible.
On the other hand, it may be just as well that there is a dichotomy in the
methods used for stream ciphers and those used for random number
generation in the numerical solution of scientific problems, as this has
no doubt contributed to the fact that random number generators are not
affected by export control problems.
John Savard
>[...]
There were two formal articles, the first of which was a complete,
fully-exposed attack on a real encryption system using
MacLaren-Marsaglia:
1. Retter, C. 1984. Cryptanalysis of a MacLaren-Marsaglia System.
Cryptologia. 8(2): 97-108.
The next article addressed the question of the extent to which M-M
combining provides strength to the combined generators:
2. Retter, C. 1985. A Key-Search Attack on MacLaren-Marsaglia
Systems. Cryptologia. 9(2): 114-130.
There were also comments in letters:
3. Letters to the Editor. 1984. Cryptologia. 8(4): 374-378.
>2) more elaborate variations on the principle, using the basic
>MacLaren-Marsaglia generator as a building block, are possible.
Anything is possible. But, by itself, MacLaren-Marsaglia is simply
not a mechanism with significant cryptographic strength.
---
Terry Ritter rit...@io.com
Crypto Glossary
|
Binds the application only. For as long as the application is bound to the current context,
flask.current_app points to that application. An application context is automatically created when a request context is pushed, if necessary.
Example usage:
with app.app_context(): ...
Changelog
A list of functions that should be called at the beginning of the first request to this instance. To register a function here, use the
before_first_request() decorator.
Creates the Jinja2 environment based on
jinja_optionsand
select_jinja_autoescape(). Since 0.7 this also adds the Jinja2 globals and filters after initialization. Override this function to customize the behavior. 0.9: This can now also be called without a request object when the URL adapter is created for the application context.
New in version 0.6.

default_config = {'APPLICATION_ROOT': None, 'DEBUG': False, 'EXPLAIN_TEMPLATE_LOADING': False, 'JSONIFY_MIMETYPE': 'application/json', 'JSONIFY_PRETTYPRINT_REGULAR': True, 'JSON_AS_ASCII': True, 'JSON_SORT_KEYS': True, 'LOGGER_HANDLER_POLICY': 'always', 'LOGGER_NAME': None, 'MAX_CONTENT_LENGTH': None, 'PERMANENT_SESSION_LIFETIME': datetime.timedelta(days=31), 'PREFERRED_URL_SCHEME': 'http', 'PRESERVE_CONTEXT_ON_EXCEPTION': None, 'PROPAGATE_EXCEPTIONS': None, 'SECRET_KEY': None, 'SEND_FILE_MAX_AGE_DEFAULT': datetime.timedelta(seconds=43200), 'SERVER_NAME': None, 'SESSION_COOKIE_DOMAIN': None, 'SESSION_COOKIE_HTTPONLY': True, 'SESSION_COOKIE_NAME': 'session', 'SESSION_COOKIE_PATH': None, 'SESSION_COOKIE_SECURE': False, 'SESSION_REFRESH_EACH_REQUEST': True, 'TEMPLATES_AUTO_RELOAD': None, 'TESTING': False, 'TRAP_BAD_REQUEST_ERRORS': False, …}

Called when an application context is popped. This works pretty much the same as
do_teardown_request() but for the application context.
Changelog
Changed in version 0.9: Added the exc argument. Previously this was always using the current exception information.
endpoint(endpoint)¶
A decorator to register a function as an endpoint. Example:
@app.endpoint('example.endpoint') def example(): return "example"
- Parameters
endpoint – the name of the endpoint
To register a function as an error handler, use the
errorhandler() decorator.
errorhandler(code_or_exception)¶
Changelog
New in version 0.3.
handle_http_exception(e)¶
Handles an HTTP exception. By default this will invoke the registered error handlers and fall back to returning the exception as response.
Changelog
New in version 0.7.
- property
has_static_folder¶
This is
Trueif the package bound object’s container has a folder for static files.
Changelog
New in version 0.5.
init_jinja_globals()¶
Deprecated. Used to initialize the Jinja2 globals.
Changelog
Changed in version 0.7: This method is deprecated with 0.7. Override
create_jinja_environment()instead.
New in version 0.5.

jinja_options = {'extensions': ['jinja2.ext.autoescape', 'jinja2.ext.with_']}¶
Options that are passed directly to the Jinja2 environment..
Changelog
New in version 0.3.
logger_name¶
The name of the logger to use. By default the logger name is the package name passed to the constructor.
Changelog
New in version 0
Converts the return value from a view function to a real response object that is an instance of
response_class.
The following types are allowed for rv:
- Parameters
rv – the return value from the view actual request dispatching and will call each
before_request()decorated function, passing no arguments. If any of these functions returns a value, it’s handled as if it was the return value from the view and further request handling is stopped.
This also triggers the
url_value_preprocessor()functions before the actual
before_request()functions are called.
-
Registers a blueprint on the application.()
Changelog
Changed in version 0.3: Added support for non-with statement usage and
withstatement is now passed the ctx object.
- Parameters
environ – a WSGI environment.
Changelog
Changed in version 0.10: The default port is now picked from the
SERVER_NAMEvariable.
- Parameters
host – the hostname to listen on. Set this to
'0.0.0.0'to have the server available externally as well. Defaults to
'127.0.0.1'.
port – the port of the webserver. Defaults to
5000or the port defined in the
SERVER_NAMEconfig variable if present.
debug – if given, enable or disable debug mode. See
debug.
options – the options to be forwarded to the underlying Werkzeug server. See
werkzeug.serving.run_simple()for more information. exception it will be passed an error object._rule_class¶
alias of
werkzeug.routing class. So: The behavior of the before and after request callbacks was changed under error conditions and a new callback was added that will always execute at the end of the request, independent on if an error occurred or not.)¶
Represents a blueprint. A blueprint is an object that records functions that will be called with the
BlueprintSetupStatelater to register functions or other things on the main application. See Modular Applications with Blueprints for more information.
Changelog
New in version 0.7..
Changelog
New in version 0.11...
- property.
- Parameters
force – if set to
Truethe mimetype is ignored.
silent – if set to
Truethis method will fail silently and return
None.
cache – if set to
Truethe parsed JSON data is remembered on the request.
- property
is_json¶
Indicates if this request is JSON or not. By default a request is considered to include JSON data if the mimetype is application/json or application/*+json.
Changelog
New in version 0.11.
- property
json¶
If the mimetype is application/json this will contain the parsed JSON data. Otherwise this will be
None.
The
get_json()method should be used instead.
- property.
Changelog.
Changelog
New in version 0.6.
- class the
Flask.secret_key set you can use sessions in Flask
applications. A session basically makes it possible to remember
information from one request to another. The way Flask does this is by
using a signed cookie. So the user can look at the session contents, but.
Helpful helper method that returns the cookie domain that should be used for the session cookie if session cookies are used...sessions.
-:
Markupobjects
-
-
-.for.
Example:

    import gevent
    from flask import copy_current_request_context

    @app.route('/')
    def index():
        @copy_current_request_context
        def do_some_work():
            # do some work here, it can access flask.request
- Parameters
filename_or_fp – the filename of the file to send in latin-1. filter called
|tojson in Jinja2. Note that inside
script
tags no escaping must take place, so make sure to disable escaping
with
|safe if you intend to use it inside
script tags unless
you are using Flask 0.10 which implies that:
it was not requested with
X-Requested-With: XMLHttpRequestto simplify debugging unless the
JSONIFY_PRETTYPRINT_REGULARconfig parameter is set to false. Compressed (not pretty) formatting currently means no indents and no spaces after separators.
Changelog, default=None)¶
The default Flask JSON encoder. This one extends the default simplejson encoder by also supporting
datetimeobjects,
UUIDas well as
Markupobjects which are serialized as RFC 822 datetime strings
root_path – path to which files are read relative from. When the config object is created by the application, this is the application’s
root_path.
defaults – an optional dictionary of default values.
Changelog
New in version 0.8. is used to implement all the context local objects used in Flask. This is a documented instance and can be used by extensions and application code but the use is discouraged in general.
Works similar to the request context but only binds the application. This is mainly there for extensions
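The context-local idea can be illustrated with a simplified, hypothetical stack of per-thread contexts — this is not Flask's implementation, just a sketch of the mechanism the docs describe (an app context pushed on entry, popped on exit, with the top of the stack being what a current_app-style proxy would resolve to):

```python
import threading

class LocalStack:
    """A per-thread stack of context objects."""
    def __init__(self):
        self._local = threading.local()

    def _stack(self):
        if not hasattr(self._local, "stack"):
            self._local.stack = []
        return self._local.stack

    def push(self, obj):
        self._stack().append(obj)

    def pop(self):
        stack = self._stack()
        return stack.pop() if stack else None

    @property
    def top(self):
        stack = self._stack()
        return stack[-1] if stack else None

class AppContext:
    """Binds an app for the duration of a with-block."""
    def __init__(self, app, stack):
        self.app = app
        self._stack = stack

    def __enter__(self):
        self._stack.push(self)
        return self

    def __exit__(self, *exc_info):
        self._stack.pop()
```

Because each thread sees its own stack, two threads can each have a different "current" application bound at the same time.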
Like a regular class-based view but that dispatches requests to particular methods. For instance if you implement a method called
get()it means it will respond to
'GET'requests and the
dispatch_request()implementation will automatically forward your request to that. Also
options is set for you automatically.
add_version_option – adds the
--versionoption.
create_app – an optional callback that is passed the script info and returns the loaded app.)¶
Help= <Command run= <Command shell>¶
Runs an interactive Python shell in the context of a given Flask application. The application will populate the default namespace of this shell according to its configuration.
This is useful for executing small snippets of management code without having to manually configure the application.
|
Concur UI Lib is a brand new client side Web UI framework that explores an entirely new paradigm. It does not follow FRP (think Reflex or Reactive Banana), or Elm architecture, but aims to combine the best parts of both. This repo contains the Concur implementation for Purescript, using the React backend.
Documentation
Work in progress tutorials are published in the Concur Documentation site
API documentation is published on Pursuit.
Performance
Purescript-Concur is reasonably light. The entire uncompressed JS bundle, including react and all libraries, for the entire example application in this repo clocks in at 180KB. You can build this bundle yourself with the command
npm run prod (currently broken due to the move to spago).
This leads to pretty fast initial load times. Running the Chrome audit on produces -
Ports to other languages
Concur's model translates well to other platforms.
- Concur for Haskell - The original version of Concur written in Haskell.
- Concur for Javascript - An official but experimental port to Javascript.
- Concur for Python - An unofficial and experimental port to Python. Uses ImgUI for graphics. Created and Maintained by potocpav.
Installation
You can quickly get a production setup going (using Spago and Parcel) by cloning the Purescript Concur Starter.
Else, if you use Spago -
spago install purescript-concur-react
Or if you use Bower -
bower install purescript-concur-react
Building examples from source
git clone cd purescript-concur-react npm install # Build library sources npm run build # Build examples npm run examples # Start a local server npm run start # Check examples open localhost:1234 in the browser
External React Components
It's easy to add external React components to Concur. Usually all you would require to wrap an external component is to import it as a
ReactClass, and then wrapping it with one of the
el functions.
For example, let's say you want to wrap the
Button component provided by the material-ui library.
Step 1: First write an FFI module that exposes the
ReactClass component -
// Button.js exports.classButton = require('@material-ui/core/Button').default
And import it into your purescript program
-- Button.purs foreign import classButton :: forall a. ReactClass a
If you are using the Purescript React MUI bindings, then you can simply import the class component from the library without defining the FFI module -
import MaterialUI.Button (classButton)
Step 2: Then wrap up the imported
ReactClass into a widget to make it usable within Concur -
import Concur.React.DOM (El, el') import React (unsafeCreateElement) import React.DOM.Props (unsafeFromPropsArray) button :: El button = el' (unsafeCreateElement classButton <<< unsafeFromPropsArray)
Step 3: Now you can use
button normally within Concur. For example -
import Concur.React.DOM as D import Concur.React.Props as P helloButton = button [P.onClick] [D.text "Hello World!"]
Note that you can mix in the default widgets and props with the MUI ones.
Examples
Demo and Source for composing all the examples in one page.
Individual example sources -
- Hello World! Shows simple effectful widgets with state using StateT. Source.
- A simple counter widget without using StateT. Source.
- Focus counter demonstrates a stateful widget, with multiple event handlers, and no action types needed! Source.
- Virtual Keyboard An onscreen virtual keyboard. Demonstrates FFI as well as handling document level events inside nested widgets. Source.
- A login widget. Source.
- Concur has Signals! Sample counting widget implemented with Signals! Source.
- A Full-featured TodoMVC implementation with LocalStorage Persistence built with Signals. Source.
- A Fully editable tree in ~30 lines of code (with Signals). Source.
- A Postfix calculator. Source.
- Using AJAX and handling JSON responses. Source.
- A small widget to Visualise CSS color codes. Source.
- Asynchronous timers which can be cancelled. Source.
- A Routed widget which demonstrates routing. Source.
- The Elm Architecture example demonstrates how Concur subsumes "The Elm Architecture". Source.
- Performance test - A huge list of 50 thousand parallel buttons. This has two variants, fast (uses slightly lower level interface) and slow (idiomatic concur code). Source.
- Tail Recursion demo - Since Concur is purely functional in nature, its primary mode of iteration is via recursion. Purescript in general is NOT stack safe with tail recursion; it uses tricks like tailRec and tailRecM. However, Concur performs trampolining to make monadic recursion completely stack safe. This example demonstrates that by making a huge number of tail recursive calls in a short span of time. Source.
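The trampolining mentioned in the last example can be sketched in any language: instead of recursing directly, each step returns a thunk, and a driver loop runs thunks until a final value appears, so the call stack never grows. A Python sketch of the idea (function names here are illustrative, not Concur's internals):

```python
def trampoline(step, *args):
    """Drive a step function that returns either a final value or a
    zero-argument thunk computing the next step; the call stack
    stays flat no matter how many steps there are."""
    result = step(*args)
    while callable(result):
        result = result()
    return result

def sum_to(n, acc=0):
    # A tail-recursive sum written in trampolined style: the
    # recursive call is wrapped in a lambda instead of made directly.
    if n == 0:
        return acc
    return lambda: sum_to(n - 1, acc + n)
```

A direct recursive sum over 100,000 steps would blow Python's default recursion limit; the trampolined version runs in constant stack space.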
So for one of my university projects we have been assigned a problem to complete. I have the code working fine for the example output provided; however, I just need some help regarding a few errors that need fixing with different inputs.
I am not asking for you to do this for me as I have done most if not all of the program but I just need some help with errors and general ways to make the code more presentable.
Here is the problem and my code below:
You are to write an indexing program that will record and print out on which lines particular
words appear in a piece of text supplied as input by the user. Hence, the index you generate
will look like a book index, but each index entry will have a word followed by the line
numbers on which the word appears, rather than the page numbers.
Specifically, your program should:
a) read in lines of text one at a time, keeping track of the line numbers, stopping when a
line is read that contains only a single full-stop;
b) remove punctuation (as specified below) and change all text to lowercase;
c) remove stop words (the stop word list is specified below);
d) stem the words (the common endings to look out for are specified below);
e) add the remaining words to the index – a word should appear only once in the index
even though it may appear many times in the text, and the line numbers on which it
appears (removing duplicates) should be recorded with the word;
f) print the index, using exactly the format below, once all lines have been entered.
    import string

    pMarks = ".,:;!?&'"
    sWords = ['a','i','it','am','on','in','of','to','is','so', \
        'too','my','the','and','but','are','very','here','even','from' \
        'them','then','than','this','that','though']
    endings = ['s','es','ed','er','ly','ing']

    def removePunc(text):
        nopunc = ""
        for char in text:
            if char not in pMarks:
                nopunc = nopunc + char
        return nopunc.lower().split()

    def removeStop(text):
        nostop = []
        for word in text:
            if word not in sWords:
                nostop.append(word)
        return nostop

    def stemWords(words):
        for wrd in words:
            for n in range(1,4):
                if wrd[-n:] in endings:
                    index = words.index(wrd)
                    words.remove(wrd)
                    words.insert(index,wrd[:-n])
        return words

    def removeDuplicates(words):
        nodupe = []
        for wrd in words:
            if wrd not in nodupe:
                nodupe.append(wrd)
        return nodupe

    def main():
        lines = []
        textTwo = ""
        text = raw_input("Indexer: type in lines, finish with a . at start of line only \n")
        if text == ".":
            exit()
        lines.append(text)
        while textTwo != ".":
            textTwo = raw_input()
            lines.append(textTwo)
            text = text + " " + textTwo
            if textTwo == ".":
                lines = lines[:len(lines)-1]
        text = removePunc(text)
        text = removeStop(text)
        text = stemWords(text)
        text = removeDuplicates(text)
        print "The Index is:"
        for word in text:
            lineNumbers = []
            for l in lines:
                if word in l:
                    lineNumbers.append(lines.index(l)+1)
            print word, lineNumbers

    main()
What could be done to ensure that the words are stemmed fully and correctly? For example, if I had "annoyingly" or "sings", they contain more than one ending.
Also for the output, my code prints out "wind [1,3,4]" instead of "wind 1, 3, 4".
Also, we are not allowed to use any code that we haven't covered in the course so far, so just the basic operands can be used.
Any help would be great thanks.
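Two hedged sketches of the specific issues asked about (the function names here are my own, not part of the assignment): re-applying the suffix list handles words with more than one ending, and `", ".join` gives the "wind 1, 3, 4" output instead of the printed list.

```python
# Illustrative only: stem() re-applies the suffix list until nothing
# matches, and format_lines() joins line numbers with ", ".
ENDINGS = ['ing', 'es', 'ed', 'er', 'ly', 's']  # longest first

def stem(word):
    # Keep stripping recognised endings: "annoyingly" -> "annoying" -> "annoy".
    # The length guard is a crude heuristic against over-stemming ("sings").
    changed = True
    while changed:
        changed = False
        for end in ENDINGS:
            if word.endswith(end) and len(word) - len(end) >= 2:
                word = word[:-len(end)]
                changed = True
                break
    return word

def format_lines(numbers):
    # [1, 3, 4] -> "1, 3, 4"
    return ", ".join(str(n) for n in numbers)
```

The length guard is not a real stemming rule (a proper stemmer like Porter's has many more conditions); it just stops short words collapsing entirely.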
Creating a PerlQt application - The Perl part
PerlQt is meant to be a very direct mapping of Qt methods to PerlQt. Programming is more than methods, however. This document will show the basic differences you will need to be aware of as you translate programming techniques from C++ Qt to PerlQt.
Including PerlQt is very simple. The top of almost every PerlQt application will be the same.
#!/usr/bin/perl -w
use Qt;
import Qt::app;
This is the #! business which starts Perl on your system. I always recommend using the -w option, because relying on PerlQt to find your errors will leave you stranded. If you intend to distribute your app, make sure it uses /usr/bin/perl as the path for perl.
This causes the PerlQt module to be loaded and the Qt library to be linked to Perl. From that line on, you can call any PerlQt method or access any PerlQt constant.
This line imports the $app variable into your namespace. The $app variable is equivalent to the qApp variable in Qt. In PerlQt, qApp is stored as $Qt::app.
The reason we import Qt::app instead of using $Qt::app directly is that import Qt::app will create $Qt::app if it does not already exist. The reason we import instead of use Qt::app is to allow the creation of modules which subclass Qt::Application and create $Qt::app, without worrying about coding order.
If you want to create your own $Qt::app, this is how you do it.
use Qt;
BEGIN { $Qt::app = Qt::Application->new(\@ARGV) }
import Qt::app;
Now that we have included PerlQt into our program, we can call any Qt method we want. Since most of the methods require an object, it would help if we created one.
$widget = Qt::Widget->new;
Please note that every class in Qt has had its initial Q replaced with Qt:: (QWidget => Qt::Widget).
In Perl, new is a method, and Qt::Widget, the class, can be treated as an object. It's weird, but it works. However, if you're inclined, you are also allowed to use more C++-compatible syntax.
$widget = new Qt::Widget;
I use the first way just to show a good example, since the second can lead to ambiguous function-calls under bizarre circumstances.
At this point, we want to call methods on our $widget. I'm going to resize this widget a few times, demonstrating the old perl motto that There Is More Than One Way To Do It.
$widget->resize(100, 100);
resize $widget 100, 100;    # Yes, this works
$widget->resize(Qt::Size->new(100, 100));
$widget->resize(new Qt::Size(100, 100));
I chose this example in particular because it demonstrates a few important principles.
This is a standard method-call in perl. Parentheses are required on methods called with -> that are called with arguments. On methods without arguments, no parentheses are required. That allows you to access object attributes easily with code like $widget->size instead of $widget->size() if that strikes your fancy.
I admit this is weird, and could start looking like tcl after a while. It's interesting and perfectly valid, though. It would also make for a good graphical IRC client interface.
If you look at the Qt documentation, QWidget::resize has two prototypes.
virtual void resize ( int w, int h )
void resize ( const QSize & )
The QSize class is supported in PerlQt. We created a new Qt::Size object, and passed it to resize. The resize function saw that it was passed one Qt::Size instead of two numbers, and called the resize function that accepts const QSize &.
You can also use a more C++-like syntax. In C++, however, you would never allocate a new QSize object and pass it to resize, since that would leak memory. In PerlQt, that object will be deleted automatically when the function returns. See "Object destruction".
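This dispatch-on-argument-type behaviour can be sketched outside PerlQt. The following toy (in Python purely for illustration; none of these names are PerlQt internals) picks between the two prototypes by inspecting what it was given:

```python
# Toy model of resolving resize(int, int) vs resize(const QSize &)
# by runtime inspection of the arguments. Not PerlQt code.
class Size:
    def __init__(self, w, h):
        self.w, self.h = w, h

def resize(*args):
    # One Size object selects the "const QSize &" prototype;
    # two plain numbers select the "int w, int h" prototype.
    if len(args) == 1 and isinstance(args[0], Size):
        s = args[0]
        return ('QSize overload', s.w, s.h)
    w, h = args
    return ('int overload', w, h)
```

Both calling styles land on the same geometry; only the selected prototype differs.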
In order to add functionality to your program, it is often a good idea to subclass a Qt widget. If you want to create a custom dialog-based program, you would subclass Qt::Dialog. If you wanted to write PerlExcel, you would subclass Qt::TableView. By subclassing, you gain all the functionality of the class you inherit, and can change that functionality by overriding virtual methods from that class.
In Perl, classes are known as packages, because they are declared with the keyword package. In reality, package creates a namespace into which you place your methods, and it is through some perl magic that it gains object-oriented status. I prefer calling them classes nonetheless.
There are two ways to define your class. You can define it in its own file "MyClass.pm" and use it in your application, or you can define it in the main application file.
This is a class defined in MyClass.pm which inherits Qt::Widget:
package MyClass;
use Qt;
@ISA = qw(Qt::Widget);
# your code here
1;
The 1 at the end of the file is required by Perl, and must be at the end of the file. To use that class from your application, add use MyClass to your application after use Qt.
This is a class defined in the main application file which inherits Qt::Dialog:
package MyClass;
@ISA = qw(Qt::Dialog);
# your code here
package main;
The package main does the reverse of package MyClass and puts you into the global namespace with normal variable access again.
Variables declared with my() will be visible from all classes in a file, but not outside a file.
That was it; you now have a class called MyClass into which you can write some code.
In Perl, the constructor is a method called new. There is some tradition for using $self in Perl as the equivalent to this in C++, and that's what I will demonstrate. This is not required, and you are free to call it $this, $that, or $my_special_friend.
sub new {
    my $self = shift->SUPER::new(@_);
    # your constructor code here
    return $self;
}
The first line of the constructor allows PerlQt to create your object. The one I've shown you passes all the arguments to your constructor to the superclass constructor. If you want to hide some of those arguments from your superclass and use them yourself, this is what you need to do:
sub new {
    my $class = shift;
    my $argument = shift;    # repeat for each argument
    my $self = $class->SUPER::new($class, @_);
    # your constructor code here
    return $self;
}
If you don't want to pass the arguments passed to you to your superclass, omit @_ and replace it with your own arguments. Every constructor must return $self in order to work. The variable $self and the function &new have no special significance to Perl, so there is no reason for Perl to do that for you. Your constructor must return the object it created.
If you want to create a destructor, create a method called DESTROY, and put your destruction code in it. You usually don't need a destructor, because Perl and Qt automatically deallocate everything associated with your object when it's destroyed.
When you call &SUPER::new, you are calling your superclass constructor. If you use multiple-inheritance, it will call $ISA[0]->new. Your superclass constructor will create the C++ version of itself, and return a blessed perl hash reference. You cannot inherit two Qt classes properly, since $self can only represent one Qt object at a time.
All objects in Perl are blessed references. Blessed means it represents a package and methods can be called on it. References are Perl pointers, and can refer to a variable or function. In this case, it refers to a perl associated list, or hash. This allows us to have named member-variables.
sub new {
    my $self = shift->SUPER::new(@_);
    $self->{button} = Qt::PushButton->new("OK", $self);
    return $self;
}
We have saved a pushbutton in $self->{button}.
A large part of the Qt API involves overriding virtual methods. When a widget is resized or hidden, Qt calls a virtual method on your object. If you've reimplemented that virtual method, you get the opportunity to respond to that event.
PerlQt allows you to override Qt virtual functions. The resizeEvent method is called whenever a widget is resized.
sub resizeEvent {
    my $self = shift;
    my $e = shift;
    printf "Old size: %s, %s\n", $e->oldSize->x, $e->oldSize->y;
    printf "Current size: %s, %s\n", $self->size->x, $self->size->y;
    $self->SUPER::resizeEvent($e);
}
That reimplementation of resizeEvent prints out the old widget size and the current widget size. It also calls the superclass resizeEvent since our implementation didn't actually do anything useful.
The Qt callback mechanism is called signals and slots. It is a way for objects to communicate with each other. When a button is pushed, it emits a signal called clicked(). If you want to know when the button is pushed, you create a slot and connect the clicked() signal to it. To see how connect() works in PerlQt, read "Signals and slots".
Slots are just normal methods which have been registered with Qt. Signals are methods created by PerlQt and given the name you specify. When you emit (call) that signal, every slot connected to it gets called.
When you create a class, you will need to create slots to pick up signals such as clicked() from Qt::PushButton, and returnPressed() from Qt::LineEdit. You can also choose to emit signals, so other classes can be told what you are doing.
use Qt::slots 'buttonWasClicked()', 'returnWasPressed()';
use Qt::signals 'somethingHappened()';
That declares two slots, which you can implement in your class, and a signal which can be connected to slots in C++ or Perl.
sub buttonWasClicked {
    my $self = shift;
    # you can use $self
    # lets emit a signal
    emit $self->somethingHappened;
}

sub returnWasPressed {
    my $self = shift;
    # you can use $self
    # your code here
    emit $self->somethingHappened;
}
The emit keyword does nothing; it's syntactic sugar to make it clear you are emitting a signal.
Signals and slots can have arguments. PerlQt limits signals and slots created by Perl to three arguments. See "Signals and slots".
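Conceptually, a signal is just a named list of connected slots that all get called on emit. This minimal sketch (in Python rather than Perl, purely to illustrate the dispatch model; none of these names come from PerlQt) models that behaviour:

```python
# Not PerlQt code: a signal holds connected slots, and emit() calls each
# connected slot in order with the emitted arguments.
class Signal:
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

clicked = Signal()
log = []
clicked.connect(lambda: log.append("buttonWasClicked"))
clicked.connect(lambda: log.append("somethingHappened"))
clicked.emit()  # both connected slots run, in connection order
```

The real mechanism adds type-checked signatures and cross-language dispatch, but the shape of it is this observer pattern.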
Right off the bat, Perl eliminates pointers and references. That makes for a great deal of simplification. Also, Perl has garbage collection which means it will automatically free memory for a variable when it's no-longer needed. That eliminates the need for destructors usually.
Global constants in C++ Qt, like AlignLeft, are accessed as Qt::AlignLeft in PerlQt. They are constant functions, not variables. Class constants like QScrollBar::Vertical are predictably accessed as Qt::ScrollBar::Vertical in PerlQt.
Please note that if you inherit a class that has a constant, you must still use its fully-qualified name. Perl does not allow function inheritance like that. Of course, you could always try calling it as a method and use method inheritance ($scrollbar->Vertical).
PerlQt supports two-way NULL pointer conversion to undef. If you pass undef to a function accepting a pointer argument, that function receives a NULL pointer. If a function returns a NULL pointer, Perl receives undef.
There are several types of objects in PerlQt when it comes to object destruction.
There are the normal objects, which need to be destroyed when their Perl variable is destroyed.
Widgets are destroyed by their parent. I call this behavior suicidal, because these objects will delete themselves even if I lose track of them. The perl variable for these objects can be destroyed, but the C++ object will still be there. It is important that these objects have been given a parent or have been added to an object which will destroy them from C++.
Virtual objects are from classes which have been implemented in PerlQt with virtual functions. These objects keep a reference to the Perl object, so that even if you, the programmer, have destroyed every reference to the object you can find, PerlQt still keeps one which you cannot get rid of. A virtual object must either delete itself, or be suicidal in order for it to be destroyed.
Every object in PerlQt has three methods to handle object destruction.
This calls the C++ delete operator on the object pointer, and causes immediate object destruction.
An object that would normally be deleted at the end of a method can be told to stick around with continue. If you pass an object created in Perl to a method which deletes that variable, you must call continue to stop PerlQt from deleting it.
If you create a suicidal object like a dialog box, and don't give it a parent which will destroy it, you can make it get deleted when Perl destroys its object with break.
PerlQt allows programmers to create signals and slots in Perl which are usable from Perl and C++. If your subclass inherits Qt::Object, PerlQt will automatically do the Q_OBJECT voodoo which Qt does, and you don't have to worry about any of that.
The SIGNAL() and SLOT() macros in C++ are replaced with simple quotes in PerlQt.
In C++:
QObject::connect(sender, SIGNAL(didSomething(int)), receiver, SLOT(sawSomething(int)));
In PerlQt:
$receiver->connect($sender, 'didSomething(int)', 'sawSomething(int)');
Now, there are a multitude of argument-types which can be passed through a signal, and here is a short translation:
Sorry, you cannot connect to a signal or slot that sends an object by value.
'member(Object)'
'member(const Object)'
'member(\Object)'
'member(const \Object)'
'member(int, long, double)'
'member(const string)' OR 'member(cstring)'
'member(const \QString)'
'member(const QString)'
'member($)'
Yes, you can pass perl scalar variables as signal/slot arguments.
These are the functions you use for signals and slots.
Declare all of the signals in LIST. Each signal in LIST will cause a function to be imported into your namespace by that name which calls all the slots connected to it. Each signal name can be listed only once; multiple prototypes are not supported.
Declare all of the slots in LIST. Each slot is a normal method defined in Perl. When it is declared as a slot, you can connect() signals from Qt or Perl to it. Each slot name can be listed only once; multiple prototypes are not supported.
The connect() method from Qt works the same way in Perl. Just replace the SIGNAL() and SLOT() macros from Qt with quotes.
In Qt terminology, $sender emits $signal, $receiver has $slot.
The emit keyword is imported into your namespace whenever you use signals. It is provided for syntactic sugar to identify that you aren't calling a normal function, but are emitting a signal.
Qt has a massive amount of documentation which I will not attempt to duplicate. Read it.
Perl also has some great manual pages describing object-oriented practices which are particularly relevant.
perlobj   Perl objects
perltoot  Perl OO tutorial
Included with PerlQt is a lot of sample code. Look in the tutorials and examples directories of your PerlQt directory.
Much of PerlQt is untested. Signals and slots are limited to 3 arguments. A lot of datatypes are still not supported.
Ashley Winters <jql@accessone.com>
Alex Angelopoulos (aka at mvps dot org)
There are several ways to exploit the .NET redistributable for scripting use. The two simplest are writing a console application and creating a COM-callable component using the .NET CLR.
Based on prior experience with a few programming languages and a good background with scripting, I think if you are contemplating using .NET for scripting support, you need to ask yourself a few questions first.
One of the values of scripting is the ability to reuse and redeploy solutions. At this point, every commercially installed Microsoft operating system released since 1995 supports WSH and COM objects compiled in Visual Basic 5/6 and Visual C++ 5/6. More importantly, the necessary runtime libraries are installed almost everywhere.
Using .NET requires installation of the 20+ MiB redistributable, and it will NOT install on Windows 95 or on the Terminal Services version of NT4. If those aren't problems for you, then this isn't an issue. If there are concerns about install capability, I would recommend sticking with slightly older tools.
If .NET is an option, the next thing to consider is what language you intend to use; it does indeed make a difference.
If you are a C/C++ user, many people seem to suggest that the leap to C# is not only fairly easy but fun: the tools are much easier to exploit. Getting COM interoperability to work is not incredibly straightforward, but it can be done even with just the redistributable.
By the same token, if you use Jscript for scripting, Jscript.NET can be fun and easy to pick up.
VB.NET is a different story for various reasons. As a heavy user of Visual Basic 5/6 over the last year or so, I have come to appreciate some of the ability to delve into the guts of Windows that .NET can give you. Unfortunately, VB 5/6 code which performs external manipulation does not port to VB.NET easily; anything that uses API calls, for example, will require significant rewriting. After that job is done, you will be in a worse position for COM interoperability!
For the near future, I would recommend sticking with VB5/6 if you're a VB aficionado. In my opinion, it is fun if you're a hard-core programmer and are focused on non-scripting issues, but for scripting support you are talking about learning a less-universal framework, porting working code to it, familiarizing yourself with a more stringent language, and then having less support for scriptability than when you started.
One minor annoyance is that if you are a C++ user, Microsoft does not make a C++ compiler available for you. (Before .NET we had no free compilers from Microsoft at all, so I'm not going to complain about this...<g>)
It won't give you a free managed C++, but you CAN use Borland's C++ compiler as a free C++ tool. Also, there is a version of LCC which runs with the .NET environment as a back end.
There are quite a few descriptions of the .NET framework available from various resource sites on the Internet, not to mention popular books and articles.
Few approach it from the perspective of a network administrator, who often finds it convenient to begin looking at tools as particular files of particular types installed at a particular location.
The Framework installs itself within the system's Windows directory; the core of it is a set of DLLs along with important associated executables in a version-named subfolder. In my particular case, this path is C:\WINDOWS\Microsoft.NET\Framework\v1.0.3705.
The exciting thing about .NET is this: ALL of the compilers are installed with it. When you install the .NET redistributable, you also get (depending on your version) the following executables:
One of the oldest tricks around for high-level applications on Unix/Linux boxes has been to allow users to extend them by creating compiled applications on the fly.
Since the .NET redistributable includes compilers for C# and VB.NET, this trick can be exploited by WSH users as well now. All you need is the .NET redistributable (NOT the SDK or development framework). Below is a simple wrapper class that puts code you give it into a VB.NET console application, compiles it, runs it and captures the output, then deletes the source file and exe created.
VB.NET source code - this is already in the script below in escaped form. It just demos calling the GetTickCount API to give an idea of what can be done with this technique. It DOES make API calls pretty easy...
Module Main
    Public Declare Function GetTickCount Lib "kernel32" () As Long
    Sub Main()
        System.Console.WriteLine(GetTickCount)
    End Sub
End Module
Below is a VBScript which generates a .NET console application on the fly.
' VBNetWrap.vbs
' you can get the code a variety of ways, including reading a file
' this is just to wrap this all into a package

Public Function Exec
    ' writing, execution, and deletion
    ' all occur in the same call to ensure
    ' that directory changes don't occur.
    WriteFile m_base & ".vb", m_code
    Exec = Cmd(m_cmd)
    fso.DeleteFile m_base & ".vb", true
    fso.DeleteFile m_base & ".exe", true
End Function

Private Sub WriteFile(FilePath, sData)
    'writes sData to FilePath
    With fso.OpenTextFile(FilePath, 2, True)
        .Write sData
        .Close
    End With
End Sub

private function RandomName8
    ' Returns unique 8-character name
    ' Following sequence allows 5,352,009,260,481 unique items
    chrList = _
        "abcdefghijklmnopqrstuvwxyz0123456789~-_"
    uLimit = Len(chrList)
    Randomize
    For i = 1 To 8
        sTmp = sTmp & Mid(chrList, ((uLimit) * Rnd + 1), 1)
    Next
    RandomName8 = sTmp
End Function
End Class
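The interesting pattern in that wrapper is the write/run/capture/delete cycle. Here is the same cycle sketched in Python for illustration, with the Python interpreter standing in for the VB.NET compiler; the function and variable names are my own, not part of the VBScript class:

```python
import os
import subprocess
import sys
import tempfile

def run_generated(source):
    # Mirror the wrapper's Exec method: write the generated source to a
    # temporary file, run it, capture its stdout, then clean up the file.
    fd, path = tempfile.mkstemp(suffix=".py")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(source)
        out = subprocess.check_output([sys.executable, path])
        return out.decode().strip()
    finally:
        os.remove(path)  # delete the generated source, as the wrapper does
```

With a real compiler in the middle you would add a compile step between writing the source and running the result, and delete the produced binary as well.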
This is one of the cleaner methods for creating COM-accessible components written in VB.NET, C#, or Jscript.NET.
The process involves compilation of a simple class file, then registering its
codebase. This means there is minimal name protection, so be certain to use
a long, unique class name! The lack of strong naming means this is best
targeted at on-the-fly creation of "anonymous" COM classes which will be removed
after use; in fact, as I try to generalize the technique, it will be best to
include strong naming as a feature.
In any case, below is a generic batch file for creating the DLLs, followed by brief demos in each of the 3 compilers mentioned.
There are two simple steps: compile to a library, and then register the
codebase. The batch file shown below will do this given a source file name as
an argument; it determines which compiler to call based on the source file's
extension. The compiled DLL will have the same base name as the source file.
@echo off
:: generic .NET library compiler script
:: get the extension into %ext%
set srcfile=%1
set ext=%srcfile:~-2%
:: get the basename into %basename%
CALL SET basename=%srcfile:~0,-3%%
echo basename is %basename%
:: set the correct compiler
set exc=%ext%c
%exc% /nologo /t:library %basename%.%ext%
regasm /nologo /codebase %basename%.dll
If the class is named "thisismyclass" and it exposes a public function "testfunction", you can run it from VBScript like this:
set cls = CreateObject("thisismyclass")
rtn = cls.testfunction
Demo classfile sources and VBScript demos are included below for the Jscript.NET, C#, and VB.NET.
class testcompiledjscriptnet {
    public function testreturn() {
        // return a test string
        return("this is a string returned from a Jscript.Net library.");
    }
}

set js = CreateObject("testcompiledjscriptnet")
wscript.echo js.testreturn

public class testcompiledcsharpnet {
    public string testreturn() {
        return("this is a string returned from a C# library.");
    }
}

set cs = CreateObject("testcompiledcsharpnet")
wscript.echo cs.testreturn

' Test
public class testcompiledvbnet
    public function testreturn() as string
        return("this is a string returned from a VB.Net library.")
    end function
end class

set vb = CreateObject("testcompiledvbnet")
wscript.echo vb.testreturn
It fails to properly build when using NSS for the SSL support (see attached build log; there's a warning about it); when using GnuTLS instead, it fails to link to libgcrypt (yes, it works transitively, but it's better to link to it directly since it uses it directly!). Finally, you *have* to rebuild pycurl whenever you switch between openssl/gnutls/nss/nossl in curl itself...
Have fun.
Created attachment 240227 [details]
Build log
Since this bug is named "multiple problems" and not "multiple ssl problems", here is another one:
For me pycurl did not compile with net-misc/curl[-static-libs]. So this dependency needs to be fixed.
(In reply to comment #2)
It's unrelated and already fixed.
Created attachment 302299 [details]
added gcrypt linking dependency
net-libs/gnutls-2.12.16, when the nettle USE flag is used, is not linked against libgcrypt.so, so pycurl needs to link it directly at link time.
Hi,
I modified pycurl-7.19.0-linking.patch to add -lgcrypt to compile commands of pycurl.
I got the problem when running virt-install:
goofy # virt-install
Traceback (most recent call last):
File "/usr/bin/virt-install-2.7", line 31, in <module>
import urlgrabber.progress as progress
File "/usr/lib/python2.7/site-packages/urlgrabber/__init__.py", line 54, in <module>
from grabber import urlgrab, urlopen, urlread
File "/usr/lib/python2.7/site-packages/urlgrabber/grabber.py", line 427, in <module>
import pycurl
ImportError: /usr/lib/python2.7/site-packages/pycurl.so: undefined symbol: gcry_control
I'm not sure if this should be changed in gnutls or here in pycurl.
But for now this patch solves the pycurl undefined symbol.
Regards,
Kfir
modified pycurl-7.19.0-linking.patch seems to fix the bug in an appropriate way.
If possible this patch should be pushed to portage ASAP, it is causing bug #407073 and #410885 and also the related forum thread
I can confirm this bug is still present and attachment 302299 [details] fixes it.
+ 18 Apr 2012; Jesus Rivero <neurogeek@gentoo.org>
+ files/pycurl-7.19.0-linking.patch:
+ Modified patch to account for curl USE flag mixture: ssl nss gnutls. Closes
+ bug #329987
+
I've modified the existing patch to account for some things. Added -lgcrypt when curl is compiled with gnutls support; also, added an extra check for -lssl3, as dev-libs/nss provides libssl3 and not libssl, which was making pycurl not recognize the library used when curl was compiled with the nss USE flag.
I think this closes this bug. Just REOPEN if problem persists.
Problem is persisting for me.
I have adapted the patch to permit the compilation of printing stuff.
Then we really need the gcrypt library within pycurl.
Please Re open it.
Reopened. Patch was changed without revision bump :-/ see bug #415251. (In reply to comment #10)
> Problem is persisting for me.
> I have adapted the patch to permit the compilation of printing stuff.
> Then we really need the gcrypt library within pycurl.
> Please Re open it.
And request for re open additionally.
curl-7.19.0-r1 should improve matters, can you please test and reopen if it does not?
*** Bug 408821 has been marked as a duplicate of this bug. ***
Red Hat Bugzilla – Bug 207737
An ISS scan killed NFS servers
Last modified: 2010-03-17 08:24:59 EDT
Description of problem: Today we requested our
IT Security Office to perform a security ISS scan
on some of our servers running RHEL 4, Update 4,
and it killed several machines with the error
messages attached at the end of this message.
Version-Release number of selected component (if applicable):
How reproducible: Not sure.
Steps to Reproduce:
1.
2.
3.
Actual results:
Machines crashed
Expected results:
Do not crash. :-)
Additional info:
Sep 22 13:38:17 newman kernel: lockd: cannot monitor 129.79.246.27
Sep 22 13:38:17 newman kernel: lockd: failed to monitor 129.79.246.27
Sep 22 13:38:56 newman kernel: lockd: server 129.79.247.1 OK
Sep 22 13:40:28 newman kernel: ------------[ cut here ]------------
Sep 22 13:40:28 newman kernel: kernel BUG at fs/locks.c:1798!
Sep 22 13:40:28 newman kernel: invalid operand: 0000 [#1]
Sep 22 13:40:28 newman kernel: SMP
Sep 22 13:40:28 newman kernel: Modules linked in: ipmi_devintf ipmi_si
ipmi_msghandler mptctl mptbase nfs nfsd exportfs lockd nfs_acl md5 ipv6
parport_pc lp parport autofs4 i2c_dev i2c_core sunrpc iptable_filter ip_tables
dm_mirror dm_mod button battery ac uhci_hcd ehci_hcd hw_random e1000 floppy sg
ext3 jbd megaraid_mbox megaraid_mm sd_mod scsi_mod
Sep 22 13:40:28 newman kernel: CPU: 0
Sep 22 13:40:28 newman kernel: EIP: 0060:[<c016e904>] Not tainted VLI
Sep 22 13:40:28 newman kernel: EFLAGS: 00010246 (2.6.9-42.0.2.ELsmp)
Sep 22 13:40:28 newman kernel: EIP is at locks_remove_flock+0xa1/0xe1
Sep 22 13:40:28 newman kernel: eax: f6434eac ebx: e74af73c ecx: 00000000
edx: 00000001
Sep 22 13:40:28 newman kernel: esi: 00000000 edi: e74af694 ebp: f4c6b480
esp: cfaf8f2c
Sep 22 13:40:28 newman kernel: ds: 007b es: 007b ss: 0068
Sep 22 13:40:28 newman kernel: Process bogofilter (pid: 29111,
threadinfo=cfaf8000 task=f5d10eb0)
Sep 22 13:40:28 newman kernel: Stack: f4c6b480 f920543a cfaf8f44 f9205e2a
f9277fb7 c016e85c 00000000 00000000
Sep 22 13:40:28 newman kernel: 00000000 0b9cc0ed 00000000 f6179380
000071b7 45141e72 00000000 45141e72
Sep 22 13:40:28 newman kernel: 00000000 f4c6b480 00000201 00000000
00000000 00000246 00000000 f4c6b480
Sep 22 13:40:28 newman kernel: Call Trace:
Sep 22 13:40:28 newman kernel: [<f920543a>] nlm_put_lockowner+0x11/0x49 [lockd]
Sep 22 13:40:28 newman kernel: [<f9205e2a>]
nlmclnt_locks_release_private+0xb/0x14 [lockd]
Sep 22 13:40:28 newman kernel: [<f9277fb7>] nfs_lock+0x0/0xc7 [nfs]
Sep 22 13:40:28 newman kernel: [<c016e85c>] locks_remove_posix+0x130/0x137
Sep 22 13:40:28 newman kernel: [<c015bbc2>] __fput+0x41/0x100
Sep 22 13:40:28 newman kernel: [<c015a7f5>] filp_close+0x59/0x5f
Sep 22 13:40:28 newman kernel: [<c02d47bf>] syscall_call+0x7/0xb
Sep 22 13:40:28 newman kernel: <0>Fatal exception: panic in 5 seconds
So what exactly is a security ISS scan and where can we get one...
Steve,
ISS is a commercial security scan package our IT Security Office
uses to scan all machines on campus at Indiana University. I do
not know how you can get one but I can try to find out.
Thanks,
Bruce
Please do... since it would be good to know if we have the
same problem with our RHEL5 produce line....
tia....
I have asked our ITSO staff for more information. The alternative
is to give me the beta version of RHEL5 so that I can install it
here and ask them to scan the machine as they do now. --Bruce
Here is more information at the time when this happened:
1) the machine that crashed, newman, is our mail server
and it NFS mounted user's home directory using autofs.
(This is so that sendmail can access users' homedirectory
for, say, .forward file.)
2) The machine as users' home directory server, frog, had its
locked/statd died due to the ISS scan mentioned above. (we
had to do 'service nfs stop; sevice nfslock restart; service
nfs start' later after we found out what was going on. At
this point, newman already crashed.
My guess is that probably newman could not locked users'
.forward due to 2).
Bruce
FYI, we just encountered exactly the same crash on one of our (CS department at
Cornell University) main compute servers. At the time of the crash, it probably
had a couple of dozens of users logged on. The load was not particularly high
(< 4 on a dual Xeon Dell PowerEdge 2650), according to our Bigbro monitor. We
use amd to auto-mount user home directories so at the time probably about half a
dozen NFS shares were mounted from four file servers. We are running RHEL 4 U4
kernel 2.6.9-42.0.2.ELsmp.
If you need any more info, please let me know. We've been running RHEL 4 on
this particular server for more than a year. I believe this is the first time
we encountered this crash.
---
Sep 27 16:04:34 lion kernel: ------------[ cut here ]------------
Sep 27 16:04:34 lion kernel: kernel BUG at fs/locks.c:1798!
Sep 27 16:04:34 lion kernel: invalid operand: 0000 [#1]
Sep 27 16:04:34 lion kernel: SMP
Sep 27 16:04:34 lion kernel: Modules linked in: loop nfsd exportfs md5 ipv6
parport_pc lp parport autofs4 i2c_dev i2c_core nfs lockd nfs_acl sunrpc joydev
button battery ac ohci_hcd tg3 floppy sg dm_snapshot dm_zero dm_mirror ext3 jbd
dm_mod qla6312 qla2xxx scsi_transport_fc aic7xxx sd_mod scsi_mod
Sep 27 16:04:34 lion kernel: CPU: 0
Sep 27 16:04:34 lion kernel: EIP: 0060:[<c016e904>] Not tainted VLI
Sep 27 16:04:34 lion kernel: EFLAGS: 00010246 (2.6.9-42.0.2.ELsmp)
Sep 27 16:04:34 lion kernel: EIP is at locks_remove_flock+0xa1/0xe1
Sep 27 16:04:34 lion kernel: eax: f515424c ebx: d84694a4 ecx: 00000000
edx: 00000001
Sep 27 16:04:34 lion kernel: esi: 00000000 edi: d84693fc ebp: f498b180
esp: e6d94f2c
Sep 27 16:04:34 lion kernel: ds: 007b es: 007b ss: 0068
Sep 27 16:04:34 lion kernel: Process lt-sqlite3 (pid: 1350, threadinfo=e6d94000
task=d117b130)
Sep 27 16:04:34 lion kernel: Stack: f498b180 f8c3843a e6d94f44 f8c38e2a f8cb1fb7
c016e85c 00000000 00000000
Sep 27 16:04:34 lion kernel: 00000000 00ebc056 00000000 f2545a80 00000546
451ad939 00000000 451ad939
Sep 27 16:04:34 lion kernel: 00000000 f498b180 00000201 00000000 00000000
00000246 00000000 f498b180
Sep 27 16:04:34 lion kernel: Call Trace:
Sep 27 16:04:34 lion kernel: [<f8c3843a>] nlm_put_lockowner+0x11/0x49 [lockd]
Sep 27 16:04:34 lion kernel: [<f8c38e2a>]
nlmclnt_locks_release_private+0xb/0x14 [lockd]
Sep 27 16:04:34 lion kernel: [<f8cb1fb7>] nfs_lock+0x0/0xc7 [nfs]
Sep 27 16:04:34 lion kernel: [<c016e85c>] locks_remove_posix+0x130/0x137
Sep 27 16:04:34 lion kernel: [<c015bbc2>] __fput+0x41/0x100
Sep 27 16:04:34 lion kernel: [<c015a7f5>] filp_close+0x59/0x5f
Sep 27 16:04:34 lion kernel: [<c02d47bf>] syscall_call+0x7/0xb
Sep 27 16:04:34 lion kernel: <0>Fatal exception: panic in 5 seconds
---
Sorry, I forgot to mention: this compute server, in addition to mounting NFS
shares, also serves an NFS share (used as short-term storage space) to other
compute servers.
I just got another report from a user who has been able to consistently crash
the servers :-) Hopefully this info can help you guys debug:
Here is an interesting point: Basically, he accessed the database file on 2
different NFS servers (web8 and panda, where his home directory resides). He
was able to crash multiple NFS clients (all of which are running RHEL 4 Update
4) by accessing the file on web8, but not the same file in his home directory on
panda. web8 is running RHEL 4 Update 4 whereas panda is still running RHEL 4
Update 3.
It seems to me something is bad with nfs/lockd in nfs-utils-1.0.6-70 in Update
4.....
---
But I don't know why.
I just crashed cfs03 again just now. Here's exactly what I did.
login (via ssh)
cd misc/rsrch/mpqa/fa04 # one of my research directories
sqlite3 ~/misc/web8/cs474/hmm.db
# this a soft-link to /cucs/web/w8/ebreck/cs474/hmm.db
# I run the command-line interface to SQLite, a serverless SQL engine,
# on a database stored on web8 - it's a database of student submissions
# for a homework assignment; I'm TAing for Prof Cardie's CS474 class.
# now within sqlite
> .tables
# this command just lists the tables in the database
# it runs for a long time, so I hit Ctrl-C, and sqlite
# responds with
Error: database is locked
# now I attempt to exit sqlite
.exit
# sqlite is still hung somehow, so I hit Ctrl-C again
# and down goes the machine.
I think it must have something to do with trying to write to the filesystem on
web8, because running sqlite3 on the identical file copied to my home directory
works fine.
So is anybody looking at this problem now? It's pretty
severe for us -- we want to ask our ITSO (IT Security
Office) to perform a security scan on all of our servers
regularly, but we don't want our servers to quit because
of the security scans. Currently we have to disable the
security scans because of this bug. Please escalate this
call if you can. Much appreciated.
Thanks,
Bruce Shei
Yes, I am looking at this issue. I am looking at it in conjunction
with another bugzilla, 211092. This other bugzilla shows some
similarities in the stack traces, although the scenario to recreate
the situation is very different.
It would help to have a testcase which can reproduce the problem.
Perhaps it would be possible to simulate the ISS scan by watching
the network traffic which causes the problem and then writing a
program which generates the same sort of traffic?
Thanks, Peter, for the quick response. I will try to contact our
ITSO staff to see how much they can help. Unfortunately, it's
kind of out of our control. But it looks like Steven at Cornell
has described a reliable way to reproduce it? At least that is what
he indicated in his posting. Thanks and have a great day.
--Bruce
Any information that you can provide would be appreciated.
Steven, what is the sqlite3 command, and would it be possible to get
a copy of hmm.db or some other database which can be used to reproduce
the hang?
I'll contact the user to see if he is willing to release his code for testing
purposes.
Created attachment 139148 [details]
Procedures to reproduce the crash
Created attachment 139149 [details]
database to reproduce the crash (See the attached procedure)
OK guys, I attached the procedure as well as the database that caused the crash
for us. sqlite3 is from
Please note, this database is not exactly the same one as the one that caused
the crash. But according to the user, should be similar enough. We'd rather
not crash our compute server to prove it though....
I hope this helps your debugging.
Thanx! I'll take a peek using this stuff and see what I can find.
Any news on this ? I have the same problem (RHEL4U4) :
Dec 4 18:22:19 storm kernel: [<e0b3c43a>] nlm_put_lockowner+0x11/0x49 [lockd]
Dec 4 18:22:19 storm kernel: [<e0b3ce2a>]
nlmclnt_locks_release_private+0xb/0x14 [lockd]
Dec 4 18:22:19 storm kernel: [<e0bb5fb7>] nfs_lock+0x0/0xc7 [nfs]
Dec 4 18:22:19 storm kernel: [<c016e85c>] locks_remove_posix+0x130/0x137
Dec 4 18:22:19 storm kernel: [<c015bbc2>] __fput+0x41/0x100
Dec 4 18:22:19 storm kernel: [<c015a7f5>] filp_close+0x59/0x5f
Dec 4 18:22:19 storm kernel: [<c02d4703>] syscall_call+0x7/0xb
Test case is simply running a gcov instrumented binary on NFS (tcp, hard)
so NFS locks to write gcov data at program exit ...
No news yet.
If you distill down the gcov behavior to a simple testcase, it would
be appreciated.
me, too. crash with EIP at exactly the same instruction
have opened service req. 1119828 (going on 3 months old)
in my circumstance, i believe the crash occurred when the client was holding
(or maybe trying to get rid of) locks on a server that was down (or was maybe
in the process of coming back up, i'm not sure)
Refering to
Test case :
#include <unistd.h>
#include <fcntl.h>
#include <string.h>    /* memset */
#include <sys/stat.h>  /* fchmod */
int main()
{
    int fd;
    struct flock lck;
    fd = open("file_on_nfs", O_RDWR | O_CREAT, 0644);
    memset(&lck, 0, sizeof(lck));
    lck.l_type = F_WRLCK;
    fcntl(fd, F_SETLK, &lck);
    fchmod(fd, 02644);
    close(fd);
    return 0;
}
Yeah, I saw that Peter did respond on lkml. I don't care
to have a quick & dirty patch, but I do care about the kernel crash!
Thx.
@laurent.deniel:
when i run the reproducer in your comment#21, i get a different backtrace:
kernel BUG at fs/locks.c:1798!
invalid operand: 0000 [#1]
EFLAGS: 00010246 (2.6.9-42.0.10.ELsmp)
EIP is at locks_remove_flock+0xa1/0xe1
Call Trace:
[<f8d31fb7>] nfs_lock+0x0/0xc7 [nfs]
[<c016e787>] locks_remove_posix+0x8f/0x137
[<c015bb0a>] __fput+0x41/0x100
[<c015a73d>] filp_close+0x59/0x5f
[<c02d4903>] syscall_call+0x7/0xb
i.e., no NLM client stuff. i think comment#21 is germane to bz#218777
for the record, speaking of NLM, the only way i could get the reproducer to
crash the system was to restart rpc.statd (by condrestart-ing nfslock) first.
after a fresh boot, the reproducer wouldn't work; i'd only get
lockd: cannot monitor 172.31.206.130
lockd: failed to monitor 172.31.206.130
, where that IP address is the address of the local machine, and the file
would be created and chown-ed without causing a crash. i guess i should be
thankful that statd usually doesn't work
Right the problem in #21 is more related to bz#218777.
Created attachment 152672 [details]
Test case
README included
Any news since the last test case ?
Shall I open a service request to speed things up?
Sorry, I haven't had a chance to look at this yet. I will try to take
a peek at it and see what I can find.
Has this test program been tried with the proposed patch included in bz211092?
Sorry but I don't have access to 211092 ...
Just had this same problem and RHEL4U5. Any updates?
Actually, I knew which version of RHEL it was from the version
field, so that change wasn't really needed.
But no, no updates yet. I have been working on a data corruption issue
and a different system crash in the Sun RPC code.
The patch referenced in the above discussion went into RHEL4.5. Comment #30 mentions seeing this issue in a RHEL4.5 kernel. I suspect however that that is a different problem that simply manifested itself in the same way. We've had a number of different fixes go in for problems that look similar to this one but are different.
I'm going to close this bug as a duplicate of the one that added the patch under discussion above. If anyone is able to reproduce this on more recent kernels, please try to reproduce it on the kernels here:
...and reopen this bug if you are able to do so.
*** This bug has been marked as a duplicate of bug 218777 ***
https://bugzilla.redhat.com/show_bug.cgi?id=207737
How To Brand Your Office 365 Sign In Page
Sep 24, 2016.
In this article, you will learn how to brand your Office 365 Sign in Page..
Create SSL Website With Self-Signed Certificate
Jul 18, 2016.
In this article, you will learn how to create SSL website with self-signed certificate.
Manage Sign-In Status For Users In Office 365
Jul 13, 2016.
In this article, you will learn how to manage sign-in status for users in Office 365..
Workaround For Missing "Sign In As A Different User" Custom Action In SharePoint 2016
Jan 09, 2016.
In this article you will learn about a workaround for a missing "sign in as a different user" custom action in SharePoint 2016..
Sep 24, 2015.
This article explains how we can allow a user to sign up using LinkedIn and save data of user signing up after successfully sign up.
How to Sign a Certificate For Use in PHA Application
May 15, 2015.
This article shows how to sign a certificate for use in a PHA application.
Asynchronous Data Binding Using IsAsync and Delay in WPF
Mar 10, 2015.
In this article we will learn about asynchronous data binding using IsAsync and Delay in WPF.
Manage the Site Collection Life Cycle
Nov 13, 2014.
This article explores the management of a Site Collection life cycle.
SharePoint Server 2013: "Sign in as Different User" Menu Option is Missing
Oct 14, 2014.
This article describes how to activate the Sign in as Different User option in SharePoint 2013 and if you want to activate it permanently how you can do that.
Binding Delay in WPF 4.5
Aug 17, 2014.
The Binding object in WPF is responsible for data binding with user interface elements and a data source.
Creating Self-Signed Certificate For Development Purposes
Mar 23, 2014.
This article shows how to create and use a self-signed certificate for development purposes.
Delayed Transaction Durability in SQL Server 2014 (CTP2)
Feb 27, 2014.
SQL server 2014 (CTP2) introduced Delayed Durability. It helps reduce the IO contention for writing to the transaction log.
Use of ProgressBar in WPF
Feb 21, 2014.
This article describes the use of ProgressBar. What it is, how it works, and why we use this ProgressBar when developing software applications.
Creating Login Or Signin Page Using .NET Framework
Feb 08, 2014.
Here we learn how to create a Login or Signin page using the .NET Framework.
Timers in JQuery: Delay Method
Jan 31, 2014.
This article illustrates the use of a delay method with custom queues and duration.
Time and Distance Delay While Reordering Elements in List Using jQuery
Jan 08, 2014.
This article will explain time and distance delay while reordering elements in a List using jQuery..
Precompiled and Pre-generated Views in the Entity Framework
Feb 13, 2013.
While working with the Entity Framework, we can reduce the delay of the initial view by creating the view at compile time instead of runtime
How to Write Access 2013 Custom Web App on Office 365
Dec 19, 2012.
Sign into Office 365 enterprise and get a free version of Office as well as Sharepoint. I installed Access 2013 on my local machine and used SharePoint from the Office 365 enterprise version.
ThreadPool Delay Timer in Windows Store Apps
Nov 05, 2012.
In this article we learn ThreadPool concepts with a delay timer in Windows Store Apps..
Code Analysis Using Telerik JustCode
Jan 13, 2012.
JustCode does on-the-fly analysis of the code. It does code analysis when you type the code and reports to you an error or warning without any further delay.
How to convert unsigned integer arrays to signed arrays and vice versa
Mar 18, 2011.
Here's a simple technique for converting between signed and unsigned integer arrays.
SQL Server: WAITFOR Statement
Mar 07, 2011.
WAITFOR statement used to delay execution of T-SQL command for a specified period of time. This can be used to block the execution of batch statement, stored procedure and T SQL commands for a specified time..
Mar 04, 2010.
In this article I will explain you about Signing an Assembly in C#.
When to Delay Sign Assemblies
Aug 03, 2006.
Delay signing plays a vital role in development when you are building assemblies. In this article, I talk about the significance and process of delay signing the assemblies.
Enhancements in Assemblies and Versioning in Visual Studio 2005
May 01, 2006.
The article discusses a couple of features introduced for assembly and versioning in Visual Studio 2005 such as referencing assemblies, registering assemblies to GAC, digital signing and friend assemblies.
Delay Signing an Assembly
Jan 18, 2006.
In this article we will elaborate the terminology Delay Signing as well as what it means. How it works and the approach to achieve it..
File Encryption
May 05, 2002.
The classes in the .Net Framework cryptography namespace manage many details of cryptography for you.
http://www.c-sharpcorner.com/tags/delayed-signing
Creating a simple
generic type is straightforward. First, declare your type variables
by enclosing a comma-separated list of their names within angle
brackets after the name of the class or interface. You can use those
type variables anywhere a type is required in any instance fields or
methods of the class. Remember, though, that type variables exist
only at compile time, so you can't use a type
variable with the runtime operators
instanceof and
new.
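As a quick illustration of this restriction, here is a small sketch (the Box class and its members are hypothetical, not from the original text). The commented-out lines would not compile, and the common workaround is to have the caller pass an explicit Class object as a runtime type token:

```java
// Hypothetical Box<V> showing why instanceof and new cannot use V:
// V is erased at runtime, so a Class<V> token carries the type instead.
public class Box<V> {
    private final Class<V> type;   // runtime type token, supplied by the caller
    private V value;

    public Box(Class<V> type) { this.type = type; }

    public void set(Object v) {
        // if (v instanceof V) ...   // illegal: V does not exist at runtime
        // value = new V();          // illegal: cannot instantiate a type variable
        value = type.cast(v);        // legal: the Class token performs the check
    }

    public V get() { return value; }

    public static void main(String[] args) {
        Box<String> b = new Box<String>(String.class);
        b.set("hello");
        System.out.println(b.get());
    }
}
```

If set() is passed an object of the wrong class, type.cast() throws a ClassCastException at the point of insertion rather than at some later use.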
We begin this section with a simple generic type, which we will
subsequently refine. This code defines a
Tree data
structure that uses the type variable
V to
represent the type of the value held in each node of the tree:
import java.util.*;

/**
 * A tree is a data structure that holds values of type V.
 * Each tree has a single value of type V and can have any number of
 * branches, each of which is itself a Tree.
 */
public class Tree<V> {
    // The value of the tree is of type V.
    V value;

    // A Tree<V> can have branches, each of which is also a Tree<V>
    List<Tree<V>> branches = new ArrayList<Tree<V>>();

    // Here's the constructor. Note the use of the type variable V.
    public Tree(V value) { this.value = value; }

    // These are instance methods for manipulating the node value and branches.
    // Note the use of the type variable V in the arguments or return types.
    V getValue() { return value; }
    void setValue(V value) { this.value = value; }
    int getNumBranches() { return branches.size(); }
    Tree<V> getBranch(int n) { return branches.get(n); }
    void addBranch(Tree<V> branch) { branches.add(branch); }
}
As you've probably noticed, the
naming convention for type
variables is to use a single capital letter. The use of a single
letter distinguishes these variables from the names of actual types
since real-world types always have longer, more descriptive names.
The use of a capital letter is consistent with type naming
conventions and distinguishes type variables from local variables,
method parameters, and fields, which are sometimes written with a
single lowercase letter. Collection classes like those in
java.util often use the type variable
E for "Element
type." When a type variable can represent absolutely
anything,
T (for Type) and
S
are used as the most generic type variable names possible (like using
i and
j as loop variables).
Notice that the type variables declared by a generic type can be used only by the instance fields and methods (and nested types) of the type and not by static fields and methods. The reason, of course, is that it is instances of generic types that are parameterized. Static members are shared by all instances and parameterizations of the class, so static members do not have type parameters associated with them. Methods, including static methods, can declare and use their own type parameters, however, and each invocation of such a method can be parameterized differently. We'll cover this later in the chapter.
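The distinction can be sketched with a small hypothetical Holder class (not from the original text): the class-level variable T is available only to instance members, while the static factory method declares its own independent type variable S.

```java
// Hypothetical Holder<T>: instance members may use T; static members may not,
// but a static method can declare its own type variable.
public class Holder<T> {
    private T value;                  // OK: instance field may use T

    // static T defaultValue;        // illegal: static members cannot use T

    public Holder(T value) { this.value = value; }
    public T get() { return value; }

    // A static generic method with its own type variable S; each
    // invocation of of() can be parameterized differently.
    public static <S> Holder<S> of(S value) {
        return new Holder<S>(value);
    }

    public static void main(String[] args) {
        Holder<Integer> h = Holder.of(42);   // S inferred as Integer
        System.out.println(h.get());
    }
}
```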
The type variable
V in
the declaration above of the
Tree<V> class
is unconstrained:
Tree can be parameterized with
absolutely any type. Often we want to place some constraints on the
type that can be used: we might want to enforce that a type parameter
implements one or more interfaces, or that it is a subclass of a
specified class. This can be done by specifying a
bound for the type variable.
We've already seen upper bounds for wildcards, and
upper bounds can also be specified for type variables using a similar
syntax. The following code is the
Tree example
rewritten to make
Tree objects
Serializable and
Comparable. In
order to do this, the example uses a type variable bound to ensure
that its value type is also
Serializable and
Comparable. Note how the addition of the
Comparable bound on
V enables
us to write the
us to write the
compareTo() method of
Tree by guaranteeing the existence of a
compareTo() method on
V.
import java.io.Serializable;
import java.util.*;

public class Tree<V extends Serializable & Comparable<V>>
    implements Serializable, Comparable<Tree<V>>
{
    V value;
    List<Tree<V>> branches = new ArrayList<Tree<V>>();

    public Tree(V value) { this.value = value; }

    // Instance methods
    V getValue() { return value; }
    void setValue(V value) { this.value = value; }
    int getNumBranches() { return branches.size(); }
    Tree<V> getBranch(int n) { return branches.get(n); }
    void addBranch(Tree<V> branch) { branches.add(branch); }

    // This method is a nonrecursive implementation of Comparable<Tree<V>>.
    // It only compares the value of this node and ignores branches.
    public int compareTo(Tree<V> that) {
        if (this.value == null && that.value == null) return 0;
        if (this.value == null) return -1;
        if (that.value == null) return 1;
        return this.value.compareTo(that.value);
    }

    // javac -Xlint warns us if we omit this field in a Serializable class
    private static final long serialVersionUID = 833546143621133467L;
}
The bounds of a type variable are expressed by following the name of
the variable with the word
extends and a list of
types (which may themselves be parameterized, as
Comparable is). Note that with more than one
bound, as in this case, the bound types are separated with an
ampersand rather than a comma. Commas are used to separate type
variables and would be ambiguous if used to separate type variable
bounds as well. A type variable can have any number of bounds,
including any number of interfaces and at most one class.
Earlier in the chapter we saw examples using
wildcards and bounded wildcards in methods that
manipulated parameterized types. They are also useful in generic
types. Our current design of the
Tree class
requires the value object of every node to have exactly the same
type,
V. Perhaps this is too strict, and we should
allow branches of a tree to have values that are a subtype of
V instead of requiring
V
itself. This version of the
Tree class (minus the
Comparable and
Serializable
implementation) is more flexible:
public class Tree<V> {
    // These fields hold the value and the branches
    V value;
    List<Tree<? extends V>> branches = new ArrayList<Tree<? extends V>>();

    // Here's a constructor
    public Tree(V value) { this.value = value; }

    // These are instance methods for manipulating value and branches
    V getValue() { return value; }
    void setValue(V value) { this.value = value; }
    int getNumBranches() { return branches.size(); }
    Tree<? extends V> getBranch(int n) { return branches.get(n); }
    void addBranch(Tree<? extends V> branch) { branches.add(branch); }
}
The use of bounded wildcards for the branch type allows us to add a
Tree<Integer>, for example, as a branch of a
Tree<Number>:
Tree<Number> t = new Tree<Number>(0);  // Note autoboxing
t.addBranch(new Tree<Integer>(1));     // int 1 autoboxed to Integer
If we query the branch with the
getBranch()
method, the value type of the returned branch is unknown, and we must
use a wildcard to express this. The next two lines are legal, but the
third is not:
Tree<? extends Number> b = t.getBranch(0);
Tree<?> b2 = t.getBranch(0);
Tree<Number> b3 = t.getBranch(0);  // compilation error
When we query a branch like this, we don't know the precise type of the value, but we do still have an upper bound on the value type, so we can do this:
Tree<? extends Number> b = t.getBranch(0);
Number value = b.getValue();
What we cannot do, however, is set the value of the branch, or add a
new branch to that branch. As explained earlier in the chapter, the
existence of the upper bound does not change the fact that the value
type is unknown. The compiler does not have enough information to
allow us to safely pass a value to
setValue() or a
new branch (which includes a value type) to
addBranch(). Both of these lines of code are
illegal:
b.setValue(3.0);  // Illegal, value type is unknown
b.addBranch(new Tree<Double>(Math.PI));
This example has illustrated a typical trade-off in the design of a
generic type: using a bounded wildcard made the data structure more
flexible but reduced our ability to safely use some of its methods.
Whether or not this was a good design is probably a matter of
context. In general, generic types are more difficult to design well.
Fortunately, most of us will use the preexisting generic types in the
java.util package much more frequently than we
will have to create our own.
As noted earlier, the type variables of a generic type can be used only in the instance members of the type, not in the static members. Like instance methods, however, static methods can use wildcards. And although static methods cannot use the type variables of their containing class, they can declare their own type variables. When a method declares its own type variable, it is called a generic method.
Here is a static method that could be added to the
Tree class. It is not a generic method but uses a
bounded wildcard much like the
sumList() method we saw earlier in the chapter:
/** Recursively compute the sum of the values of all nodes on the tree */
public static double sum(Tree<? extends Number> t) {
    double total = t.value.doubleValue();
    for(Tree<? extends Number> b : t.branches)
        total += sum(b);
    return total;
}
This method could also be rewritten as a generic method by declaring a type variable to express the upper bound imposed by the wildcard:
public static <N extends Number> double sum(Tree<N> t) {
    N value = t.value;
    double total = value.doubleValue();
    for(Tree<? extends N> b : t.branches)
        total += sum(b);
    return total;
}
The generic version of
sum() is no simpler than
the wildcard version and the declaration of the type variable does
not gain us anything. In a case like this, the wildcard solution is
typically preferred over the generic solution. Generic methods are
required where a single type variable is used to express a
relationship between two parameters or between a parameter and a
return value. The following method is an example:
// This method returns the largest of two trees, where tree size
// is computed by the sum() method. The type variable ensures that
// both trees have the same value type and that both can be passed to sum().
public static <N extends Number> Tree<N> max(Tree<N> t, Tree<N> u) {
    double ts = sum(t);
    double us = sum(u);
    if (ts > us) return t;
    else return u;
}
This method uses the type variable
N to express
the constraint that both arguments and the return value have the same
type parameter and that that type parameter is
Number or a subclass.
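To see the constraint in action, here is a self-contained sketch that condenses the section's Tree, sum(), and max() into one class (the nested Tree is abbreviated from the versions above):

```java
import java.util.*;

// Condensed sketch of the section's Tree, sum(), and max() so the
// shared-type-parameter constraint on max() can be exercised directly.
public class TreeMaxDemo {
    static class Tree<V> {
        V value;
        List<Tree<V>> branches = new ArrayList<Tree<V>>();
        Tree(V value) { this.value = value; }
        void addBranch(Tree<V> b) { branches.add(b); }
    }

    // Recursively sum the node values, as in the wildcard version above.
    public static double sum(Tree<? extends Number> t) {
        double total = t.value.doubleValue();
        for (Tree<? extends Number> b : t.branches) total += sum(b);
        return total;
    }

    // Both arguments and the return value share the same type parameter N.
    public static <N extends Number> Tree<N> max(Tree<N> t, Tree<N> u) {
        return sum(t) > sum(u) ? t : u;
    }

    public static void main(String[] args) {
        Tree<Integer> a = new Tree<Integer>(10);
        a.addBranch(new Tree<Integer>(5));
        Tree<Integer> b = new Tree<Integer>(7);
        System.out.println(max(a, b).value);   // N inferred as Integer
        // max(a, new Tree<Double>(1.0));      // error: no single N fits both
    }
}
```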
It could be argued that constraining both arguments to have the same
value type is too restrictive and that we should be allowed to call
the
max() method on a
Tree<Integer> and a
Tree<Double>. One way to express this is to
use two unrelated type variables to represent the two unrelated value
types. Note, however, that we cannot use either variable in the
return type of the method and must use a wildcard there:
public static <N extends Number, M extends Number>
Tree<? extends Number> max(Tree<N> t, Tree<M> u) {...}
Since the two type variables
N and
M have no relation to each other, and since each
is used in only a single place in the signature, they offer no
advantage over bounded wildcards. The method is better written this
way:
public static Tree<? extends Number> max(Tree<? extends Number> t,
                                         Tree<? extends Number> u) {...}
All the examples of generic methods shown here have been
static methods. This is not a requirement:
instance methods can declare their own type variables as well.
When you use a
generic type, you must specify the
actual type parameters to be substituted for
its type variables. The same is not generally true for generic
methods: the compiler can almost always figure out the correct
parameterization of a generic method based on the arguments you pass
to the method. Consider the
max() method defined
above, for instance:
public static <N extends Number> Tree<N> max(Tree<N> t, Tree<N> u) {...}
You need not specify
N when you invoke this method
because
N is implicitly specified in the values of
the method arguments t and
u. In the following code, for example, the
compiler determines that
N is
Integer:
Tree<Integer> x = new Tree<Integer>(1);
Tree<Integer> y = new Tree<Integer>(2);
Tree<Integer> z = Tree.max(x, y);
The process the compiler uses to determine the type parameters for a generic method is called type inference. Type inference is relatively intuitive to understand, but the actual algorithm the compiler must use is surprisingly complex and is well beyond the scope of this book. Complete details are in Chapter 15 of The Java Language Specification, Third Edition.
Let's look at a slightly more complex version of type inference. Consider this method:
public class Util {
    /** Set all elements of a to the value v; return a. */
    public static <T> T[] fill(T[] a, T v) {
        for(int i = 0; i < a.length; i++)
            a[i] = v;
        return a;
    }
}
Here are two invocations of the method:
Boolean[] booleans = Util.fill(new Boolean[100], Boolean.TRUE);
Object o = Util.fill(new Number[5], new Integer(42));
In the first invocation, the compiler can easily determine that
T is
Boolean. In the second
invocation, the compiler determines that
T is
Number.
In very rare circumstances you may need to explicitly specify the
type parameters for a generic method. This is sometimes necessary,
for example, when a generic method expects no arguments. Consider the
java.util.Collections.emptySet() method: it
returns a set with no elements, but unlike the
Collections.singleton() method (you can look
these up in the reference section), it takes no arguments that would
specify the type parameter for the returned set. You can specify the
type parameter explicitly by placing it in angle brackets
before the method name:
Set<String> empty = Collections.<String>emptySet();
Type parameters cannot be used with an unqualified method name: they
must follow a dot or come after the keyword
new or
before the keyword
this or
super used in a constructor.
It turns out that if you assign the return value of
Collections.emptySet() to a variable, as we did
above, the type inference mechanism is able to infer the type
parameter based on the variable type. Although the explicit type
parameter specification in the code above can be a helpful
clarification, it is not necessary, and the line could be rewritten
as:
Set<String> empty = Collections.emptySet();
An explicit type parameter is necessary when you use the return value
of the
emptySet() method within a method
invocation expression. For example, suppose you want to call a method
named
printWords() that expects a single argument
of type
Set<String>. If you want to pass an
empty set to this method, you could use this code:
printWords(Collections.<String>emptySet());
In this case, the explicit specification of the type parameter
String is required.
Earlier
in the chapter we saw that the compiler does not allow you to create
an array whose type is parameterized. This is not, however, a
restriction on all uses of arrays with generics. Consider the
Util.fill() method defined above, for example. Its
first argument and its return value are both of type
T[]. The body of the method does not have to
create an array whose element type is
T, so the
method is perfectly legal.
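The illegal case, and the common workaround of casting from Object[], can be sketched as follows (the ArrayUtil class and its repeat() method are hypothetical, not from the original text):

```java
// Sketch of the distinction: declaring and returning T[] is legal, but
// creating a new T[] is not, because T is erased at runtime. The usual
// workaround is an unchecked cast from Object[].
public class ArrayUtil {
    @SuppressWarnings("unchecked")
    public static <T> T[] repeat(T value, int n) {
        // T[] a = new T[n];          // illegal: cannot create an array of T
        Object[] a = new Object[n];   // legal, then cast (unchecked warning)
        for (int i = 0; i < n; i++) a[i] = value;
        return (T[]) a;               // safe only if callers treat it as Object[]
    }

    public static void main(String[] args) {
        // Explicit <Object> keeps the erased and inferred types the same,
        // avoiding a runtime ClassCastException on the returned array.
        Object[] fives = ArrayUtil.<Object>repeat("five", 3);
        System.out.println(fives.length + " " + fives[0]);
    }
}
```

Note the caveat in the last comment: because the array really is an Object[], assigning the result to, say, a String[] variable would fail at runtime even though it compiles.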
If you write a method that uses varargs (see Section 2.6.4 in Chapter 2) and a type variable, remember that invoking a varargs method performs an implicit array creation. Consider this method:
/** Return the largest of the specified values or null if there are none */
public static <T extends Comparable<T>> T max(T... values) { ... }
You can invoke this method with parameters of type
Integer because the compiler can insert the
necessary array creation code for you when you call it. But you
cannot call the method if you've cast the same
arguments to be type
Comparable<Integer>
because it is not legal to create an array of type
Comparable<Integer>[].
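A minimal sketch of the legal call (the VarargsDemo class here is a condensed stand-in for the max() method above, with a simple body filled in for illustration):

```java
// Each varargs invocation implicitly creates an array whose element type
// is the inferred T, so T must be a type the compiler can make an array of.
public class VarargsDemo {
    public static <T extends Comparable<T>> T max(T... values) {
        T best = values[0];
        for (T v : values)
            if (v.compareTo(best) > 0) best = v;
        return best;
    }

    public static void main(String[] args) {
        Integer m = max(1, 4, 2);   // compiler creates an Integer[] -- fine
        System.out.println(m);
        // Casting the arguments to Comparable<Integer> would require the
        // compiler to create a Comparable<Integer>[], which is not allowed.
    }
}
```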
Exceptions are thrown and caught at
runtime, and there is no way for the compiler to perform type
checking to ensure that an exception of unknown origin matches type
parameters specified in a
catch clause. For this
reason,
catch clauses may not include type
variables or wildcards. Since it is not possible to catch an
exception at runtime with compile-time type parameters intact, you
are not allowed to make any subclass of
Throwable
generic. Parameterized exceptions are simply not allowed.
You can, however, use a type variable in the
throws clause of a method signature. Consider this
code, for example:
public interface Command<X extends Exception> {
    public void doit(String arg) throws X;
}
This interface represents a
"command": a block of code with a
single string argument and no return value. The code may throw an
exception represented by the type parameter
X.
Here is an example that uses a parameterization of this interface:
Command<IOException> save = new Command<IOException>() {
    public void doit(String filename) throws IOException {
        PrintWriter out = new PrintWriter(new FileWriter(filename));
        out.println("hello world");
        out.close();
    }
};

try { save.doit("/tmp/foo"); }
catch(IOException e) { System.out.println(e); }
The new generics features in Java 5.0 are
used in the Java 5.0 APIs, most notably in
java.util but also in
java.lang,
java.lang.reflect,
and
java.util.concurrent. These APIs were
carefully created or reviewed by the inventors of generic types, and
we can learn a lot about the good design of generic types and methods
through the study of these APIs.
The generic types of
java.util are relatively
easy: for the most part they are collections classes, and type
variables are used to represent the element type of the collection.
Several important generic types in
java.lang are
more difficult. They are not collections, and it is not immediately
apparent why they have been made generic. Studying these difficult
generic types gives us a deeper understanding of how generics work
and introduces some concepts that we have not yet covered in this
chapter. Specifically, we'll examine the
Comparable interface and the
Enum class (the supertype of enumerated types,
described later in this chapter) and will learn about an important
but infrequently used feature of generics known as lower-bounded
wildcards.
In Java 5.0, the
Comparable interface has been made generic, with
a type variable that specifies what a class is comparable to. Most
classes that implement
Comparable implement it on
themselves. Consider
Integer:
public final class Integer extends Number implements Comparable<Integer>
The raw
Comparable interface is problematic from a
type-safety standpoint. It is possible to have two
Comparable objects that cannot be meaningfully
compared to each other. Prior to Java 5.0, the nongeneric
Comparable interface was useful but not fully
satisfactory. The generic version of this interface, however,
captures exactly the information we want: it tells us that a type is
comparable and tells us what we can compare it to.
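A tiny sketch of that gain in type safety (my own example, not from the book): with Comparable<Integer>, a nonsensical comparison is rejected at compile time instead of failing at runtime, as it would with the raw interface.

```java
public class ComparableDemo {
    public static void main(String[] args) {
        Integer a = 1;
        // Comparable<Integer> means compareTo() only accepts an Integer:
        System.out.println(a.compareTo(2) < 0);   // true
        // a.compareTo("two");  // rejected by the compiler; with the raw
        //                      // Comparable it would compile and then fail
        //                      // at runtime with a ClassCastException
    }
}
```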
Now consider subclasses of comparable classes.
Integer is
final and cannot be
subclassed, so let's look at
java.math.BigInteger instead:
public class BigInteger extends Number implements Comparable<BigInteger>
If we implement a
BiggerInteger subclass of
BigInteger, it inherits the
Comparable interface from its superclass. But note
that it inherits
Comparable<BigInteger> and
not
Comparable<BiggerInteger>. This means
that
BigInteger and
BiggerInteger objects are mutually comparable,
which is usually a good thing.
BiggerInteger can
override the
compareTo( ) method of its
superclass, but it is not allowed to implement a different
parameterization of
Comparable. That is,
BiggerInteger cannot both extend
BigInteger and implement
Comparable<BiggerInteger>. (In general, a
class is not allowed to implement two different parameterizations of
the same interface: we cannot define a type that implements both
Comparable<Integer> and
Comparable<String>, for example.)
When you're working with comparable objects (as you
do when writing sorting algorithms, for example), remember two
things. First, it is not sufficient to use
Comparable as a raw type: for type safety, you
must also specify what it is comparable to. Second, types are not
always comparable to themselves: sometimes they're
comparable to one of their ancestors. To make this concrete, consider
the
java.util.Collections.max() method:
public static <T extends Comparable<? super T>> T max(Collection<? extends T> c)
This is a long, complex generic method signature. Let's walk through it:
The method has a type variable
T with complicated
bounds that we'll return to later.
The method returns a value of type T.
The name of the method is
max( ).
The method's argument is a Collection. The element
type of the collection is specified with a bounded wildcard. We
don't know the exact type of the
collection's elements, but we know that they have an
upper bound of
T. That is, we know that the
elements of the collection are type
T or a
subclass of
T. Any element of the collection could
therefore be used as the return value of the method.
That much is relatively straightforward. We've seen
upper-bounded wildcards elsewhere in this section. Now
let's look again at the type variable declaration
used by the
max( ) method:
<T extends Comparable<? super T>>
This says first that the type
T must implement
Comparable. (Generics syntax uses the keyword
extends for all type variable bounds, whether
classes or interfaces.) This is expected since the purpose of the
method is to find the "maximum"
object in a collection. But look at the parameterization of the
Comparable interface. This is a wildcard, but it
is bounded with the keyword
super instead of the
keyword
extends. This is a lower-bounded wildcard.
? extends T is the familiar upper bound: it means
T or a subclass.
? super T is
less commonly used: it means
T or a superclass.
To summarize, then, the type variable declaration states
"
T is a type that is comparable
to itself or to some superclass of itself." The
Collections.min() and
Collections.binarySearch( ) methods have similar
signatures.
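To make the super bound concrete, here is a small runnable sketch (Event and Meeting are invented names, not from the book): Meeting inherits Comparable<Event> rather than Comparable<Meeting>, yet Collections.max() still accepts a List<Meeting> because the bound only requires comparability to a supertype.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Event is comparable to itself...
class Event implements Comparable<Event> {
    final int start;
    Event(int start) { this.start = start; }
    public int compareTo(Event other) { return Integer.compare(start, other.start); }
}

// ...and Meeting inherits Comparable<Event>, not Comparable<Meeting>.
class Meeting extends Event {
    Meeting(int start) { super(start); }
}

public class MaxDemo {
    public static void main(String[] args) {
        List<Meeting> meetings =
            Arrays.asList(new Meeting(9), new Meeting(14), new Meeting(11));
        // T is inferred as Meeting; Meeting satisfies Comparable<? super Meeting>
        // because it implements Comparable<Event> and Event is a supertype.
        Meeting latest = Collections.max(meetings);
        System.out.println(latest.start);   // 14
    }
}
```

With the stricter bound T extends Comparable<T>, this call would not compile, which is exactly the restriction the footnote below warns about.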
For other examples of lower-bounded wildcards (that have nothing to
do with
Comparable), consider the
addAll(),
copy( ), and
fill() methods of
Collections.
Here is the signature for
addAll():
public static <T> boolean addAll(Collection<? super T> c, T... a)
This is a varargs method that accepts any number of arguments of type
T and passes them as a
T[ ]
named a. It adds all the elements of
a to the collection
c. The element type of the collection is
unknown but has a lower bound: the elements are all of type
T or a superclass of
T.
Whatever the type is, we are assured that the elements of the array
are instances of that type, and so it is always legal to add those
array elements to the collection.
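A short hedged sketch of this in action (variable names are mine): since the collection's element type only needs to be a supertype of the arguments' type, the same Integer values can be added to a List<Number> or a List<Object>.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class AddAllDemo {
    public static void main(String[] args) {
        // The collection's element type (Number) is a supertype of the
        // argument type (Integer), so T is inferred as Integer and the
        // lower bound Collection<? super Integer> is satisfied.
        List<Number> numbers = new ArrayList<>();
        Collections.addAll(numbers, 1, 2, 3);

        List<Object> anything = new ArrayList<>();
        Collections.addAll(anything, 4, 5);   // Object is also a supertype

        System.out.println(numbers.size() + anything.size());   // 5
    }
}
```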
Recall from our earlier discussion of upper-bounded
wildcards that if you have a
collection whose element type is an upper-bounded wildcard, it is
effectively read-only. Consider
List<? extends
Serializable>. We know that all elements are
Serializable, so methods like
get() return a value of type
Serializable. The compiler won't
let us call methods like
add() because the actual
element type of the list is unknown. You can't add
arbitrary serializable objects to the list because their implementing
class may not be of the correct type.
Since upper-bounded wildcards result in
read-only collections, you might expect
lower-bounded wildcards to result in
write-only collections. This isn't actually the
case, however. Suppose we have a
List<? super
Integer>. The actual element type is unknown, but the
only possibilities are
Integer or its ancestors
Number and
Object. Whatever the
actual type is, it is safe to add
Integer objects
(but not
Number or
Object
objects) to the list. And, whatever the actual element type is, all
elements of the list are instances of
Object, so
List methods like
get( ) return
Object in this case.
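Here is a minimal sketch of that behavior (the method name is invented): a List<? super Integer> parameter accepts writes of Integer but only promises Object on reads.

```java
import java.util.ArrayList;
import java.util.List;

public class LowerBoundDemo {
    // The caller may pass a List<Integer>, List<Number>, or List<Object>;
    // whichever it is, adding an Integer is always type-safe.
    static void addOneThroughThree(List<? super Integer> list) {
        for (int i = 1; i <= 3; i++) list.add(i);
        // list.add(3.14);            // would not compile: Double is not an Integer
        Object first = list.get(0);   // get() can only promise Object
        System.out.println(first);
    }

    public static void main(String[] args) {
        List<Number> numbers = new ArrayList<>();
        addOneThroughThree(numbers);   // prints 1
    }
}
```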
Finally, let's turn our attention to the
java.lang.Enum
class.
Enum serves as the
supertype of all enumerated types (described later). It implements
the
Comparable interface but has a confusing
generic signature:
public class Enum<E extends Enum<E>> implements Comparable<E>, Serializable
At first glance, the declaration of the type variable
E appears circular. Take a closer look though:
what this signature really says is that
Enum must
be parameterized by a type that is itself an
Enum.
The reason for this seemingly circular type variable declaration
becomes apparent if we look at the
implements
clause of the signature. As we've seen,
Comparable classes are usually defined to be
comparable to themselves. And subclasses of those classes are
comparable to their superclass instead.
Enum, on
the other hand, implements the
Comparable
interface not for itself but for a subclass
E of itself!
[4] The bound shown here requires that the value type
V is comparable to itself; in other words, that it implements the
Comparable interface directly. This rules out the use of types that inherit the
Comparable interface from a superclass. We'll consider the
Comparable interface in much more detail at the end of this section and present an alternative there.
View catalog information for Java in a Nutshell, 5th Edition
http://www.onjava.com/excerpt/javaian5_chap04/index1.html
I have the following Play Singleton:
package p1
@Singleton
class MySingleton @Inject() (system: ActorSystem, properties: Properties) {
def someMethod = {
// ........
}
}
How can I call someMethod() from another class?

First of all, in order to access the methods of a class you need an instance of that class. As you are using dependency injection, you need to inject the singleton into the class where you want to use the method. So first declare a class, let's say
Foo, and using the Guice annotation @Inject, inject the class
MySingleton; once you have the reference (instance) of the class, you can invoke someMethod using the dot (.) operator.
If you want to access the method in a class, say
Foo, you need to inject the class
MySingleton:
import javax.inject.Inject
import p1.MySingleton

class Foo @Inject() (mySingleton: MySingleton) {
  // This line could be anywhere inside the class.
  mySingleton.someMethod
}
Another way is Guice field injection:

import javax.inject.Inject
import p1.MySingleton

class Foo {
  // Guice injects the field after the instance is constructed, so the
  // field must be a var with a placeholder initial value.
  @Inject private var mySingleton: MySingleton = _

  def useSingleton(): Unit = mySingleton.someMethod
}
https://codedump.io/share/7aKfQups6fE8/1/accessing-play-singleton-methods
Every day we are seeing a bigger push towards adding automated tests to our apps, whether these are unit tests, integration tests or e2e tests.
This will be a series of articles based on writing unit tests for Angular and some of its core concepts: Components, Services, Pipes and Guards.
These articles are not intended to be comprehensive, but rather a soft introduction to unit testing. For more detailed component testing documentation, Angular has a great docs page in its official testing guide.
It's worth noting that some of my opinionated approaches to testing will come through in this article. Testing is a very opinionated topic already. My advice is to look through all the testing strategies that are out there and decide what you think is the best approach.
In this article, we will explore testing components, ranging from simple to more complex components and we will cover the following:
- What is a unit test? 💡
- Why write unit tests? 🤔
- Ok, now how do we write unit tests? 😄
We will be using the standard Jasmine and Karma testing setup that Angular provides out of the box on apps generated with the Angular CLI.
💡 What is a unit test?
A unit test is a type of software testing that verifies the correctness of an isolated section (unit) of code.
Lets say you have a simple addition function:
function sum(...args) {
  return args.reduce((total, value) => total + value, 0);
}
This full function can be considered a unit, and therefore your test would verify that this unit is correct. A quick test for this unit could be:
it('should sum a range of numbers correctly', () => {
  // Arrange
  const expectedValue = 55;
  const numsToTest = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
  // Act
  const total = sum(...numsToTest);
  // Assert
  expect(total).toBe(expectedValue);
});
We're introducing a few concepts here.
The
it(...args) is the function that will set up our unit test. It's pretty common testing terminology across Test Runners.
We also introduce the AAA Test Pattern. It's a pattern that breaks your test into 3 sections.
The first section is Arrange: Here you perform any set up required for your test.
The second section is Act: Here you will get your code to perform the action that you are looking to test.
The third and final section is Assert: Here you will verify that the unit performed as expected.
In our test above we set what we are expecting the value to be if the function performs correctly and we are setting the data we will use to test the function.
We then call the
sum() function on our previously arranged test data and store the result in a
total variable.
Finally, we check that the
total is the same as the value we are expecting.
If it is, the test will pass, thanks to us using the
expect() method.
Note:
.toBe() is a matcher function. A matcher function performs a check that the value passed into the
expect() function matches the desired outcome. Jasmine comes with a lot of matcher functions which can be viewed here: Jasmine Matchers
🤔 But Why?
Easy! Confidence in changes.
As a developer, you are consistently making changes to your codebase. But without tests, how do you know you haven't made a change that has broken functionality in a different area within your app?
You can try to manually test every possible area and scenario in your application. But that eats into your development time and ultimately your productivity.
It's much more efficient if you can simply run a command that checks all areas of your app for you to make sure everything is still functioning as expected. Right?
That's exactly what automated unit testing aims to achieve, and although you spend a little bit more time developing features or fixing bugs when you're also writing tests, you will gain that time back in the future if you ever have to change functionality, or refactor your code.
Another bonus is that any developer coming along behind you can use the test suites you write as documentation for the code you write. If they don't understand how to use a class or a method in the code, the tests will show them how!
It should be noted, these benefits come from well written tests. We'll explore the difference between a good and bad test later.
😄 Let's write an Angular Component Test
We'll break this down into a series of steps that will cover the following testing scenarios:
- A simple component with only inputs and outputs
- A complex component with DI Providers
Let's start with a simple component that only has inputs and outputs. A purely presentational component.
🖼️ Presentational Component Testing
We'll start with a pretty straight forward component
user-speak.component.ts that has one input and one output. It'll display the user's name and have two buttons to allow the user to talk back:
import { Component, Input, Output, EventEmitter } from '@angular/core';

@Component({
  selector: 'app-user-speak',
  template: `
    <div>Hello {{ name }}</div>
    <div>
      <button (click)="sayHello()">Say Hello</button>
      <button (click)="sayGoodbye()">Say Goodbye</button>
    </div>
  `
})
export class UserSpeakComponent {
  @Input() name: string;
  @Output() readonly speak = new EventEmitter<string>();

  constructor() {}

  sayHello() {
    this.speak.emit('Hello');
  }

  sayGoodbye() {
    this.speak.emit('Goodbye');
  }
}
If you used the Angular CLI (highly recommended!) to generate your component you will get a test file out of the box. If not, create one
user-speak.component.spec.ts.
Note: the
.spec.ts is important. This is how the test runner knows how to find your tests!
Then inside, make sure it looks like this initially (the standard skeleton the Angular CLI generates):

import { async, ComponentFixture, TestBed } from '@angular/core/testing';
import { UserSpeakComponent } from './user-speak.component';

describe('UserSpeakComponent', () => {
  let component: UserSpeakComponent;
  let fixture: ComponentFixture<UserSpeakComponent>;

  beforeEach(async(() => {
    TestBed.configureTestingModule({
      declarations: [UserSpeakComponent]
    }).compileComponents();
  }));

  beforeEach(() => {
    fixture = TestBed.createComponent(UserSpeakComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it('should create', () => {
    expect(component).toBeTruthy();
  });
});
Let's explain a little of what is going on here.
The
describe('UserSpeakComponent', () => ...) call is setting up a Test Suite for our User Speak Component. It will contain all the tests we wish to perform for our Component.
The
beforeEach() calls specify code that should be executed before every test runs. With Angular, we have to tell the compiler how to interpret and compile our component correctly. That's where the
TestBed.configureTestingModule comes in. We will not go into too much detail on that for this particular component test; however, later in the article we will describe how to change it to work when we have DI Providers in our component.
For more info on this, check out the Angular Testing Docs
Each
it() call creates a new test for the test runner to perform.
In our example above we currently only have one test. This test is checking that our component is created successfully. It's almost like a sanity check to ensure we've set up
TestBed correctly for our Component.
Now, we know our Component class has a
constructor and two methods,
sayHello and
sayGoodbye. As the constructor is empty, we do not need to test this. However, the other two methods do contain logic.
We can consider each of these methods to be units that need to be tested. Therefore we will write two unit tests for them.
It should be kept in mind that when we do write our unit tests, we want them to be isolated. Essentially this means that it should be completely self contained. If we look closely at our methods, you can see they are calling the
emit method on the
speak EventEmitter in our Component.
Our unit tests are not interested in whether the
emit functionality is working correctly, rather, we just want to make sure that our methods call the
emit method appropriately:
it('should say hello', () => {
  // Arrange
  const sayHelloSpy = spyOn(component.speak, 'emit');
  // Act
  component.sayHello();
  // Assert
  expect(sayHelloSpy).toHaveBeenCalled();
  expect(sayHelloSpy).toHaveBeenCalledWith('Hello');
});

it('should say goodbye', () => {
  // Arrange
  const sayGoodbyeSpy = spyOn(component.speak, 'emit');
  // Act
  component.sayGoodbye();
  // Assert
  expect(sayGoodbyeSpy).toHaveBeenCalled();
  expect(sayGoodbyeSpy).toHaveBeenCalledWith('Goodbye');
});
Here we meet the
spyOn function which allows us to mock out the actual implementation of the
emit call, and create a Jasmine Spy which we can then use to check if the
emit call was made and what arguments were passed to it, thus allowing us to check in isolation that our unit performs correctly.
If we run
ng test from the command line, we will see that the tests pass correctly. Wonderful.
🔧 REFACTOR
Hold up! Having two methods that essentially do the same thing is duplicating a lot of code. Let's refactor our code to make it a bit more DRY:
import { Component, Input, Output, EventEmitter } from '@angular/core';

@Component({
  selector: 'app-user-speak',
  template: `
    <div>Hello {{ name }}</div>
    <div>
      <button (click)="saySomething('Hello')">Say Hello</button>
      <button (click)="saySomething('Goodbye')">Say Goodbye</button>
    </div>
  `
})
export class UserSpeakComponent {
  @Input() name: string;
  @Output() readonly speak = new EventEmitter<string>();

  constructor() {}

  saySomething(words: string) {
    this.speak.emit(words);
  }
}
Awesome, that's much nicer. Let's run the tests again:
ng test.
Uh Oh! 😱
Tests are failing!
Our unit tests were able to catch correctly that we changed functionality, and potentially broke some previously working functionality. 💪
Let's update our tests to make sure they continue to work for our new logic:

it('should say something', () => {
  // Arrange
  const saySomethingSpy = spyOn(component.speak, 'emit');
  // Act
  component.saySomething('something');
  // Assert
  expect(saySomethingSpy).toHaveBeenCalled();
  expect(saySomethingSpy).toHaveBeenCalledWith('something');
});
We've removed the two previous tests and updated it with a new test. This test ensures that any string that is passed to the
saySomething method will get passed on to the
emit call, allowing us to test both the Say Hello button and the Say Goodbye.
Awesome! 🚀
Note: There is an argument around testing JSDOM in unit tests. I'm against this approach personally, as I feel it is more of an integration test than a unit test and should be kept separate from your unit test suites.
Let's move on:
🤯 Complex Component Testing
Now we have seen how to test a purely presentational component, let's take a look at testing a Component that has a DI Provider injected into it.
There are a few approaches to this, so I'll show the approach I tend to take.
Let's create a
UserComponent that has a
UserService injected into it:
import { Component, OnInit } from '@angular/core';
import { UserService } from '../user.service';

@Component({
  selector: 'app-user',
  template: `
    <app-user-speak
      [name]="user?.name"
      (speak)="onSpeak($event)"
    ></app-user-speak>
  `
})
export class UserComponent implements OnInit {
  user: User;

  constructor(public userService: UserService) {}

  ngOnInit(): void {
    this.user = this.userService.getUser();
  }

  onSpeak(words: string) {
    console.log(words);
  }
}
Fairly straightforward except we have injected the
UserService Injectable into our Component.
Again, let's set up our initial test file:

describe('UserComponent', () => {
  let component: UserComponent;
  let fixture: ComponentFixture<UserComponent>;

  beforeEach(async(() => {
    TestBed.configureTestingModule({
      declarations: [UserComponent]
    }).compileComponents();
  }));

  beforeEach(() => {
    fixture = TestBed.createComponent(UserComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it('should create', () => {
    expect(component).toBeTruthy();
  });
});
If we were to run
ng test now, it would fail as we are missing the Provider for the
UserService therefore
TestBed cannot inject it correctly to create the component successfully.
So we have to edit the
TestBed set up to allow us to create the component correctly. Bear in mind, we are writing unit tests and therefore only want to run these tests in isolation and do not care if the
UserService methods are working correctly.
The
TestBed also doesn't understand the
app-user-speak component in our HTML. This is because we haven't added it to the module's declarations. However, time for a bit of controversy: my view is that our tests do not need to know the make-up of this component; we are only testing the TypeScript within our Component, not the HTML. Therefore we will use a technique called Shallow Rendering, which tells the Angular compiler to ignore the issues within the HTML.
To do this we have to edit our
TestBed.configureTestingModule to look like this:
TestBed.configureTestingModule({
  declarations: [UserComponent],
  schemas: [NO_ERRORS_SCHEMA]
}).compileComponents();
That will fix our
app-user-speak not declared issue. But we still have to fix our missing provider for
UserService error. We are going to employ a technique in Unit Testing known as Mocking, to create a Mock Object, that will be injected to the component instead of the Real UserService.
There are a number of ways of creating Mock / Spy Objects. Jasmine has a few built in options you can read about here.
We are going to take a slightly different approach:
TestBed.configureTestingModule({
  declarations: [UserComponent],
  providers: [
    {
      provide: UserService,
      useValue: {
        getUser: () => ({ name: 'Test' })
      }
    }
  ],
  schemas: [NO_ERRORS_SCHEMA]
}).compileComponents();
The part we are interested in now is our
providers array. Here we are telling the compiler to provide the value defined here as the UserService. We set up a new object and define the method we want to mock out, in this case
getUser, and we tell it to return a specific object, rather than allowing the real UserService to run logic that fetches the user from the DB or something similar.
My thoughts on this are that every public API you interact with should be tested separately, and therefore your unit test doesn't need to ensure that API is working correctly; however, you do want to make sure your code works correctly with what is returned from the API.
Now let's write our test to check that we are fetching the user in our
ngOnInit method.
it('should fetch the user', () => {
  // Arrange
  const fetchUserSpy = spyOn(
    component.userService,
    'getUser'
  ).and.returnValue({ name: 'Test' });
  // Act
  component.ngOnInit();
  // Assert
  expect(fetchUserSpy).toHaveBeenCalled();
});
Here we simply create a spy to ensure that the
getUser call is made in the
ngOnInit method. Perfect.
We also leverage the
.and.returnValue() syntax to tell Jasmine what it should return to the
ngOnInit() method when that API is called. This can allow us to check for edge cases and error cases by forcing the return of an error or an incomplete object.
Let's modify our
ngOnInit() method to the following, to allow it to handle errors:
ngOnInit(): void {
  try {
    this.user = this.userService.getUser();
  } catch (error) {
    this.user = null;
  }
}
Now let's write a new test telling Jasmine to throw an error, allowing us to check if our code handles the error case correctly:
it('should handle error when fetching user', () => {
  // Arrange
  const fetchUserSpy = spyOn(component.userService, 'getUser').and.throwError(
    'Error'
  );
  // Act
  component.ngOnInit();
  // Assert
  expect(fetchUserSpy).toHaveBeenCalled();
  expect(fetchUserSpy).toThrowError();
  expect(component.user).toBe(null);
});
Perfect! 🔥🔥 We are now also able to ensure our code is going to handle the Error case properly!
This is a short brief non-comprehensive introduction into Unit Testing Components with Angular with Jasmine and Karma. I will be publishing more articles on Unit Testing Angular which will cover testing Services, Data Services, Pipes and Guards.
If you have any questions, feel free to ask below or reach out to me on Twitter: @FerryColum.
Discussion
Could you write the Angular documentation here on out? This is excellent!
The people writing the Angular Docs are a lot smarter than me and they do a great job I feel!
The docs say compileComponents is not necessary if running the test via the cli. I tend to remove it, but is there a compelling reason to leave it in there?
https://dev.to/coly010/unit-testing-angular-component-testing-2g47
A number of free functions in the C++ standard library are intended to be both generic and customizable. Since we want to take advantage of two distinct features of overload resolution, depending on what the actual type of T turns out to be, we need to do the infamous "std two-step". This is most well known with the swap function.
T
swap
You may have been taught the idiom and follow it without knowing why it is needed.
template <typename T> void foo (T& a)
{
    T b;
    ⋮
    using std::swap;  // step 1
    swap (a, b);      // step 2
}
This is necessary not just for swap but for a whole list of free functions, including begin, end, and size.
begin
end
size
Meanwhile, we are not supposed to leak declarations into global (or namespace) scope, especially in headers. So, the using needs to go inside a block of some kind. But, there are cases where we want to use one of these library functions outside of any suitable scoping block. Some of these cases can be worked around by adding a details namespace or the like. Other cases are not so easily fixed!
using
details
template <typename R>
auto myfunction1 (const R& input) -> decltype(std::begin(input)) { ⋯ }
template <typename R>
C<R>::C (const R& val) : it{ std::begin(val) } { ⋯ }
The signature of a function can have a computed return type and noexcept specification, plus default argument values. These are all outside of the body, so where can you put the using std::begin?
noexcept
using std::begin
Constructors have the base-member initializer list. Again, this is outside of the function’s block.
Consider if a type T is a class that has its own swap (or begin, size, etc.) defined for it. Those functions will be in the same namespace as the class, so you must call them without any qualifier in order to use argument-dependent lookup (ADL).
Or, suppose T is a built-in primitive type, or even a class that is perfectly happy with the default implementation. You would need to find the version in the std namespace, but ADL is not going to look there.
std
The two-step idiom allows for both cases. The using declaration brings the std version into scope, and that will be combined with the results of ADL when selecting a function.
Current wisdom is to prefer free functions. You don’t want to do things one way or a different way depending on the context. Also, it is generally good to program as if you are writing a template, to some extent, even when you are not. It is common during maintenance and enhancement for the types of things to change. If your code was resilient to such things, you will have a lot less ripple to deal with.
It is possible to write a wrapper function that calls the desired function. The wrapper has the two-step coded in its body. There are a number of nuances: make sure the arguments use “perfect forwarding”, make sure you match the noexcept status of the wrapped function, and make sure you provide SFINAE that matches the availability of any matching function!
I will have wrappers with the same names but for a capital letter: Swap calls swap, etc.
Swap
Now, how to make sure your wrapper is called, rather than some other function of the same name? Eric Niebler makes an argument that these customization points should have been function-call objects, not functions.
I get the same benefits by doing this with my wrappers. Specifically, if Swap is a variable name, not a function name, then none of the overload resolution stuff applies, at all! The compiler looks outward in lexical scopes and finds the name Swap. It is not a function name, so the compiler is done looking. In particular, it will not look for other versions of the function in all the namespaces associated with the argument types.
We want a namespace that has only the wrappers in it. The user can do using namespace twostep; and get these functions in the current scope. Specifically, it must not also pull in the std:: versions of the functions! But, we have the problem explained at the beginning so can’t just put the using std::swap inside the wrapper function — the return type metacalculation and the noexcept determination both need to use the underlying function as well. The using std:: (step 1) must be in a scope outside of the wrapper functions.
using namespace twostep;
std::
using std::swap
using std::
The solution is to make a sandwich of namespaces.
namespace detail_twostep_wrapper {
    using std::swap;
    namespace twostep_inner {
        inline auto Swap = ⋯
    }
}
namespace twostep = detail_twostep_wrapper::twostep_inner;
At this point, the code sees namespace twostep which contains only the wrapper functions. The stuff inside namespace detail_twostep_wrapper will not bother anyone, since no types are defined in it.
namespace twostep
namespace detail_twostep_wrapper
We know that the body of the wrapper function calls the function being wrapped, and the std:: version has already been brought into scope, as set up in the previous listing. We just need to use perfect forwarding:
Let’s use begin as our strawman, since it only has one argument. Also, I’ll start with ordinary functions (not function-call objects) for simplicity.
template <typename T> // first attempt
auto Begin (T&& r)
{
    return begin(std::forward<T>(r));
}
The call to begin(r) shows the perfect forwarding idiom.
begin(r)
Unfortunately, this first attempt is not good enough. If there is no begin to forward to, you get a compiler error telling you arcane details of the template instantiation. Instead, we want Begin to disappear when begin disappears. That means adding some SFINAE stuff.
Begin
The easiest way to do that is with the return value. Repeating the body as part of the signature makes it subject to SFINAE.
template <typename T> // second attempt
auto Begin (T&& r) -> decltype(begin(std::forward<T>(r)))
{
    return begin(std::forward<T>(r));
}
Now, we still have the issue that our wrapper is not marked noexcept. Again, we want to give the same status as the wrapped function.
template <typename T> // third attempt
auto Begin (T&& r)
    noexcept(noexcept(begin(std::forward<T>(r))))
    -> decltype(begin(std::forward<T>(r)))
{
    return begin(std::forward<T>(r));
}
The perfect wrapper is called the “you have to type it three times” idiom.
If you read Eric’s blog post, you’ll see that his listing is rather long and quite cryptic.
Finally, we define a std::begin object of type std::__detail::__begin_fn in a round-about sort of way, the details of which are not too relevant.
std::begin
std::__detail::__begin_fn
He had to battle with two major hurdles: first, before C++17 a namespace-scope object declared in a header needed a separate definition in exactly one source file; and second, an object (unlike a function) never drops out of overload resolution, so ill-formed calls must still produce sensible errors.
I’m writing with (nearly!) C++17. A new feature is specifically designed to address problem 1. Now, you can write inline variables. This means you can write the initialization value in the header and not need any secondary definition placed in exactly one cpp file. So, goodbye to “round-about sort of way” obfuscation.
With an object having a polymorphic function call operator, you will never have the variable itself disappear due to SFINAE or concept checks. I found that if I use SFINAE on the lambda, I get a rather short detailed error message that’s not hard to figure out. Four lines of detail, with the first being “no matching overloaded function found” and the 4th being the argument type.
Without the SFINAE on the lambda, I get a much longer error scroll going into details about the template and its caller.
The long details might be better, since they let you find the location of the caller as well as the problem location inside the generic lambda. I think this is a general problem with using function-call objects: the error is not a failure to find something with the right name. This is simply a need to improve the compiler's error messages; hopefully that will happen if this idiom gains popularity and is used in major libraries. I may add a simple macro to disable the SFINAE checks if that is helpful in tracking down errors.
Here is the final version of the wrapper function as an object:
inline constexpr auto Begin = [](auto&& r)
noexcept(noexcept(begin(std::forward<decltype(r)>(r)))) /* ask if the code is noexcept */
-> decltype( begin(std::forward<decltype(r)>(r))) /* using return type to do SFINAE */
{ return begin(std::forward<decltype(r)>(r)); /* the real body to execute */ };
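As a usage sketch (not from the article): the same one-liner object can wrap std::begin directly, so the call site never needs the std:: qualifier. This stand-alone variant is an assumption for illustration; the article's twostep.h wraps an unqualified begin found via argument-dependent lookup.

```cpp
#include <cassert>
#include <iterator>
#include <utility>
#include <vector>

// Hypothetical stand-alone wrapper: same "type it three times" body,
// but qualifying std::begin explicitly so this sketch is self-contained.
inline constexpr auto Begin = [](auto&& r)
    noexcept(noexcept(std::begin(std::forward<decltype(r)>(r))))
    -> decltype(std::begin(std::forward<decltype(r)>(r)))
{ return std::begin(std::forward<decltype(r)>(r)); };
```

Calling Begin(v) on a std::vector forwards to std::begin(v), with the SFINAE-friendly return type and propagated noexcept status described above.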
The header file twostep.h contains all the functions that need the two-step. It is stand-alone, not needing anything other than the standard headers containing the functions being wrapped. You can easily copy it into your project without dealing with the rest of the library.
The current version can always be found.
Boon - ObjectMapper
- Creating an ObjectMapper
- Parsing JSON Into Objects
- Parsing JSON into Maps
- Parsing JSON From Other Sources
- Generating JSON From Objects
- Generating JSON From Maps
- Date Formats in JSON
- Annotations
- Parsing Primitives
Once you have installed Boon you can start parsing JSON into objects using the Boon ObjectMapper class.

The Boon ObjectMapper class is designed to have an interface similar to both GSON's and Jackson's ObjectMapper classes. This makes it easier to switch to Boon from GSON or Jackson.
Creating an ObjectMapper
Before you can use the Boon ObjectMapper you must first create an instance of it. You do so via the JsonFactory's static create() method. Here is an example of how you create a Boon ObjectMapper in Java code:
ObjectMapper mapper = JsonFactory.create();
Parsing JSON Into Objects
The Boon ObjectMapper can parse JSON into object graphs. Here is an example of parsing JSON into an object graph with the Boon ObjectMapper:

String fleetStr = "{" +
    " \"cars\" : [" +
    " { \"brand\" : \"Audi\", \"doors\" : 4 }" +
    " ,{ \"brand\" : \"Mercedes\", \"doors\" : 3}" +
    " ,{ \"brand\" : \"BMW\", \"doors\" : 2 }" +
    " ]" +
    "}";

ObjectMapper mapper = JsonFactory.create();
Fleet fleet = mapper.readValue(fleetStr, Fleet.class);
The Fleet and Car classes look like this:

public class Fleet {
    public Car[] cars = null;
}

public class Car {
    public String brand = null;
    public int doors = 0;

    public Car() {}

    public Car(String brand, int doors) {
        this.brand = brand;
        this.doors = doors;
    }
}
Notice that the example called the readValue() method of the ObjectMapper class. This method's name and parameters are similar to those of Jackson's ObjectMapper.

Boon also has a fromJson() method which does the same as the readValue() method, but looks more like the GSON interface. Here is the same example using the fromJson() method:

String fleetStr = "{" +
    " \"cars\" : [" +
    " { \"brand\" : \"Audi\", \"doors\" : 4 }" +
    " ,{ \"brand\" : \"Mercedes\", \"doors\" : 3}" +
    " ,{ \"brand\" : \"BMW\", \"doors\" : 2 }" +
    " ]" +
    "}";

ObjectMapper mapper = JsonFactory.create();
Fleet fleet = mapper.fromJson(fleetStr, Fleet.class);
If you come from using Jackson or GSON, switching to Boon will thus be easier, as Boon contains methods that work similarly to the methods you were used to from GSON or Jackson. Just remember, they are similar, but not exactly the same. There are slight differences in exceptions thrown, parameters etc. You will quickly overcome those minor differences though.
Parsing JSON into Maps
Sometimes it is easier to parse a JSON object into a Map than to create a custom class to hold the parsed JSON. Boon can do this very easily. All you have to do is to pass Map.class as the second parameter to the readValue() (or fromJson()) method. Here is an example of parsing JSON into a Map with Boon:
String fleetStr = "{" +
    " \"cars\" : [" +
    " { \"brand\" : \"Audi\", \"doors\" : 4 }" +
    " ,{ \"brand\" : \"Mercedes\", \"doors\" : 3}" +
    " ,{ \"brand\" : \"BMW\", \"doors\" : 2 }" +
    " ]" +
    "}";

ObjectMapper mapper = JsonFactory.create();
Map fleetMap = mapper.readValue(fleetStr, Map.class);

List<Map> carList = (List<Map>) fleetMap.get("cars");
for (Map carMap : carList) {
    String brand = (String) carMap.get("brand");
    int doors = (Integer) carMap.get("doors");
    System.out.println("brand: " + brand);
    System.out.println("doors: " + doors);
}
Parsing JSON From Other Sources
Boon can parse JSON from other sources than strings. You can also parse JSON from a:
byte array
char array
File
Reader
InputStream
String
Here is an example of parsing JSON from an InputStream:
Fleet fleet = mapper.readValue( new FileInputStream("data/fleet.json"), Fleet.class);
As you can see, the InputStream is passed as the first parameter to the readValue() method of the ObjectMapper, instead of the JSON string used in the examples earlier in this tutorial. If you want to parse JSON from another source, pass that source as the first parameter to the readValue() method instead.
Generating JSON From Objects
Boon can also generate JSON from objects using the Boon ObjectMapper. You generate JSON from an object by calling the writeValue() or writeValueAsString() methods of the Boon ObjectMapper.

The writeValueAsString() method takes an object as parameter and returns a string containing the JSON generated from the object. Here is an example of generating JSON using the writeValueAsString() method:
Fleet fleet = new Fleet();
fleet.cars = new Car[1];
fleet.cars[0] = new Car("Mercedes", 5);

ObjectMapper mapper = JsonFactory.create();
String json = mapper.writeValueAsString(fleet);
System.out.println(json);
This example first creates a valid Fleet object with a Car object inside. Then it creates an ObjectMapper and calls the writeValueAsString() method. Finally the generated JSON is printed to System.out. The JSON generated from this example looks like this:
{"cars":[{"brand":"Mercedes","doors":5}]}
You can also write the generated JSON directly to a File, Writer or OutputStream. To do so you need to call the writeValue() method. Here is an example of how to write the generated JSON directly to an OutputStream:
mapper.writeValue( new FileOutputStream("data/output.json"), fleet);
To write the generated JSON to a File or a Writer, just pass a File or Writer as first parameter to the writeValue() method.
Generating JSON From Maps
The Boon ObjectMapper can also generate JSON from a Map. Just pass a Map instance as the object to serialize to the writeValue() or writeValueAsString() method. Here is an example of generating a JSON string from a Map using the writeValueAsString() method:

Map car = new HashMap();
car.put("brand", "BMW");
car.put("doors", 4);

List cars = new ArrayList();
cars.add(car);

Map fleet = new HashMap();
fleet.put("cars", cars);

ObjectMapper mapper = JsonFactory.create();
String json = mapper.writeValueAsString(fleet);
System.out.println(json);
First the example creates a Map which represents a fleet object, and adds a list of cars (also represented by Map instances) to it. Second, the example creates an ObjectMapper and calls the writeValueAsString() method, passing the fleet Map as parameter. Finally the generated JSON String is printed out. The output from this code example would be:
{"cars":[{"doors":4,"brand":"BMW"}]}
You can also write the generated JSON directly to a File, Writer or OutputStream. Here is an example writing the generated JSON to an OutputStream, by calling the writeValue() method:
mapper.writeValue( new FileOutputStream("data/output-2.json"), fleet);
To write the generated JSON to a File or a Writer, just pass a File or Writer as first parameter to the writeValue() method.
Date Formats in JSON
The Boon ObjectMapper can work with different date formats in JSON. It can parse and generate both a long version of a date (the number of milliseconds since January 1, 1970) and the official JavaScript date format.

First, let us create a Java class that contains a Date object:
public class Payment {
    public Date paymentDate = null;
}

This Payment class only contains a single field, paymentDate, which is a java.util.Date instance.
Parsing a long Into a Date
Boon's ObjectMapper can parse a long value in JSON into a date. Here is a code example:

String paymentJson = "{ \"paymentDate\" : 1434016456493 }";

ObjectMapper objectMapper = JsonFactory.create();
Payment payment = objectMapper.readValue(paymentJson, Payment.class);
Notice the JSON string at the beginning of the example. It contains a field named paymentDate whose value is a long representation of a date, in milliseconds. When the ObjectMapper parses this JSON string it converts the long value of the paymentDate into a Date object, because the paymentDate field of the Payment class is a Date.
Parsing a Date String Into a Date
Boon can also parse a readable date string into a Date object. Here is the example from the previous section, with the date expressed as a date string instead of a long millisecond value:

String paymentJson = "{ \"paymentDate\" : \"2015-06-11T12:33:00.014Z\" }";

ObjectMapper objectMapper = JsonFactory.create();
//ObjectMapper objectMapper = JsonFactory.createUseJSONDates();

Payment payment = objectMapper.readValue(paymentJson, Payment.class);
System.out.println("payment.paymentDate = " + payment.paymentDate);
Generating Date Strings in JSON
By default the ObjectMapper will serialize a Date to a long number (milliseconds) when generating JSON from an object. However, you can create a version of the ObjectMapper which creates date strings instead. Here is how:
ObjectMapper objectMapper = JsonFactory.createUseJSONDates();
Once you have created an ObjectMapper using the createUseJSONDates() method instead of the create() method of the JsonFactory class, Date fields will get converted to date strings in the serialized JSON instead of numbers.
Annotations
Boon contains a set of annotations that can be used to adjust the parsing or generation of JSON. These annotations are:
@JsonIgnore
@JsonInclude
These annotations will be explained in the following sections:
@JsonIgnore
The @JsonIgnore annotation can be placed above a field in a class. When Boon detects the @JsonIgnore annotation it will ignore that field. Here is an example class that uses the @JsonIgnore annotation:

import org.boon.json.annotations.JsonIgnore;

public class Car {
    public String brand = null;
    public int doors = 0;

    @JsonIgnore
    public String comment = "blablabla";

    public Car() {}

    public Car(String brand, int doors) {
        this.brand = brand;
        this.doors = doors;
    }
}
When Boon detects the @JsonIgnore annotation attached to the comment field, Boon will not serialize the comment field when serializing a Car object.
@JsonInclude
By default Boon excludes fields from serialization that are empty (null), empty lists, or fields which have default values (e.g. 0 for int and false for boolean etc.).

If you want Boon to include such fields in the generated JSON, you can add a @JsonInclude annotation to the field. Here is an example class that uses the @JsonInclude annotation:
import org.boon.json.annotations.JsonInclude;

public class Car {
    public String brand = null;
    public int doors = 0;

    @JsonInclude
    public String comment = "blablabla";

    public Car() {}

    public Car(String brand, int doors) {
        this.brand = brand;
        this.doors = doors;
    }
}
Parsing Primitives
Boon can also parse fragments of JSON into Java primitives. For instance, parsing a string into an int or a JSON array into an array of Java primitives. This can be handy from time to time.

To use these primitive parsing features you need to access the JsonParserAndMapper returned from the ObjectMapper's parser() method. On the JsonParserAndMapper instance you can call the primitive parsing methods.

Here is an example parsing a string into an int:

ObjectMapper objectMapper = JsonFactory.create();
int intVal = objectMapper.parser().parseInt("123");
The JsonParserAndMapper can also parse a JSON string representing an array of strings or numbers into an array of primitive Java types. Here is an example showing how to parse a JSON array into an array of int:

int[] ints = objectMapper.parser().parseIntArray("[123, 456, 789]");
Similarly the JsonParserAndMapper can parse a JSON string representing an object into a Java Map, like this:

String jsonMap = "{ \"key1\" : \"val1\", \"key2\" : \"val2\" }";
Map<String, Object> map = objectMapper.parser().parseMap(jsonMap);
The JsonParserAndMapper has several more methods that can parse JSON strings and fragments into primitive Java types.
You can type icon names in the QLineEdit search box to search (regular expressions are supported, I guess, because it's a QCompleter).
I also want to drag an icon from my UI to the shelf, or to an existing shelfButton, but don't know how to do that.
PyQt4 is needed.
usage:
import iconChooser as ic
reload(ic)
ic.main()
edit:
ver 0.0.2 updated to support Maya 2014, and sped up by using QLabel
ver 0.0.3 center displayed icons and do not scale them up
ver 0.0.4 fix large icons not scaled down bug; now icons < 32 pixels wide are kept at original size, larger .png images are scaled to 32*32
ver 0.0.5 use API v2, new style signal/slot, cleanup code
v 0.0.6 fall back to API v1, tested on Maya 2015 SP6, fixed bug with name searching.
No update will be posted here from now on, check github repo for latest version.
Please use the Feature Requests to give me ideas.
Please use the Support Forum if you have any questions or problems.
Please rate and review in the Review section.
Details
Description
Context: The mailbox subproject () supports maildir, SQL database (via JPA) and Java Content Repository (JCR) as technology for mail storage. This flexibility is achieved thanks to a API design that abstracts mail storage from the mail protocols.
Task:.
Mentor: eric at apache dot org
Complexity: medium
Activity
I think you can close this one. I have just committed the code base to trunk and it looks ok. I also committed the integration tests, now going for improvements and finishing integration.
I've created
Feel free to further comment this JIRA, MAILBOX-72 (requirement for distributed mailbox) or update the wiki page.
The final goal is to have a detailed enough wiki page with the data model...
@Stack:
Hope this makes it more clear:
messagesMetaData(CF): {
mailboxId/uid:
}
messagesContent(CF): {
mailboxId/uid:
}
Then I have secondary indexes on the messagesMetaData CF to be able to get all messages which belong to mailbox X and have the deleted flag set, etc.
I used RP and used the secondary indexes to "filter" the right messages.
Does that explain it a bit more?
Hi there, and tks to Stack for joining and helping us with this design.
I've added on MAILBOX-72 some food for the brains.
You can see on the interfaces that the HBase store will have to implement.
There's no option there, but the implementation is really free to implement it as it wants.
First the tables:
- If you look at the classes, we could have Mailbox, Subscription and Message tables.
- A row per mailbox, subscription and message
- The unanswered questions are: 1. The structure of the row key? - 2. Header and Property as separate tables or as additional columns on the message row.
Second the queries:
- The implemented SQL queries are on
- Some are simple Get (efficient), some not.
- We will need to use the HBase scanners (existing ones, maybe also specific ones we will have to implement).
- For the IMAP built queries (especially for search), this can lead to a full scan of the table (see following point)
Finally the index to help optimize the search
- Solr can come to the rescue here
- I like the ongoing Lucene-on-HBase work, especially once it is done
- In the meantime, we could rely on custom hbase scanners (inefficient due to full table scan)
Waiting on your feedbacks.
Tks,
- Eric.
@stack:
First of all, welcome
I wrote a few of the other mailbox implementations in JAMES. So maybe I can answer your questions (concerns)
I also wrote a prototype for a mailbox on top of Cassandra which is not too different in terms of "limitations".
So here we go:
I think putting all the mail in one row for a mailbox will not work, as really big mailboxes are quite common these days. This would just limit the distribution a lot (as you already pointed out). So let me try to explain how I did the schema for Cassandra; maybe it also fits for HBase (I have not had the time to dig deeper into it).
- one row for the mailbox meta data (mailboxId, uidvalidity, namespace, username ...).
- one row for the message metadata ( mailboxId, uid, size, headers, flags, messagecontentId...).
- one row per message content, where I split the message content in 1 MB parts and put each "raw" byte[] in a new column. This makes sure we don't get too big a column (not sure if this is also needed for HBase; in Cassandra big columns are a problem)
For queries there are the following:
- retrieve all messages which have the recent flag set
- retrieve all messages which have the sent flag set
- retrieve all messages with uid <=> X
- retrieve all messages with the deleted flag set
- retrieve all mailboxes with name like '%X%'
Then IMAP also allows you to build your own search query, which is really problematic with NoSQL stores or even with SQL stores, as it allows the user to do any kind of filtering, which just sucks when you don't have the indexes set. So we have a Lucene index for that atm. I plan to write one in SOLR too.
Threading is not supported atm but is on my todo list.
Hope this helps, just ask if you need more infos
All mail in a single row in HBase would mean that the mailbox would be changed 'atomically', since row updates in HBase are. But a downside might be that some users would have really big mailboxes and gigabyte-sized rows; this might mess w/ balance and distribution across the cluster (perhaps).
If you did put them all in a single row, in hbase columns are sorted too; if the column qualifier were a reverse order date you could encounter mail in order of newest first. HBase has versioning too so you could stamp mail into hbase and write the mail receipt date as the cell version. Naturally it returns versions in order of newest first.
How would you do threading? Does James support this? What else does James support that you expect the db to provide?
Thank you for the input, I appreciate it and I will look into it, it seems very promising.
My first idea was to store all the user's emails in a single row, but I couldn't figure out how to access the emails in an efficient manner.
I hope I will get my hands on that book soon, but until then I will see what I can get from other sources.
We are currently discussing the requirements and constraints for building a NoSQL storage here. For now, the discussion is targeting HBase, but I think it can be adapted to other NoSQL implementations. We will publish the schema details there.
@Ioan: Going the Gora route will allow you to swap stores. I've not used it, so am not up on the costs that come with the indirection (if any).
You'll need to figure a schema design for your store. I'd suggest you study how James does queries currently and make a list. This will be the key input feeding your schema design. For example, in the coming "HBase: The Definitive Guide", Lars has some discussion of HBase as a mail store. Rows are sorted in HBase so he arrives at a row key schema that looks like this:
<userid><date in reversed chronological order so you see newest mail first><message-id><attachment-id>
You can start up a scan to see all mail from a user and you'll see the latest first. Mail will be grouped by mail id. If attachments ids are their sequence number, then they'll be encountered in order (you'll probably need to zero pad some of the attributes above). This is just an example. You may end up w/ different row key design after you've studied James queries.
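The row-key recipe above can be sketched in a few lines of Java (the class name and field widths here are hypothetical, not from the book or from James): subtracting the timestamp from Long.MAX_VALUE makes lexicographic row order equal reverse chronological order, and zero padding keeps the fixed-width fields sortable.

```java
// Hypothetical sketch of the <userid><reversed date><message-id> row key.
public class MailRowKey {
    public static String build(long userId, long epochMillis, String messageId) {
        // Reverse the timestamp so HBase's sorted rows yield newest mail first.
        long reversed = Long.MAX_VALUE - epochMillis;
        // Zero-pad the numeric parts so string comparison matches numeric order.
        return String.format("%012d%019d%s", userId, reversed, messageId);
    }
}
```

A scan starting at the 12-digit user prefix then walks that user's mail newest-first; attachment ids could be appended to the key the same way.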
Further to [1] and [2], an extra layer (HBase) upon Hadoop will be used
[1]
[2]
Ioan, we will use apache-extras (backed by google) for your source code repository.
You can create one via
It provides:
Mercurial and Subversion code hosting
==> Apache is SVN for now, but I think you better know Mercurial. So choose what you want. Choose Apache 2 Licence.
Download/release hosting
Integrated source code browsing and code review tools
An issue tracker and project wiki
==> Don't use this, use the Apache James JIRA and Apache James Wiki.
For the pom, just inspire yourself from the mailbox-jpa pom (rename it to mailbox-hdfs, update dependencies, ...)
So you build and run james, you've got hadoop setup. Cool!
The next step would be to create a maven project, declare the needed dependencies on james mailbox and hadoop libraries, and make a few attempts:
1. Access the mailbox (create session,...) from a java test case (see [1] and [2] for inspiration, I will try to commit more focused examples tomorrow)
2. Access a hadoop cluster based on Mini(MR)Cluster : these are the classes hadoop uses for testing without having to deploy a real cluster.
Also have a look at gora documentation. This will be useful when we will have to decide on how to access the hdfs files,... and don't forget to subscribe to hadoop and gora mailing lists.
[1]
[2]
I have installed Hadoop on my machine and run the wordcount example. Now all I have to do is figure out how to put all the things together. I guess I will have to get to know a little bit of the Hadoop API.
The mailbox component injection is achieved by the server project with context files you can find in
There are some functional tests for the different mailbox impl in
These are tests for the imap protocol using the mailbox impl.
The mailbox project in it self only contains some basic testing for now.
Having a dependency injection module in the mailbox project (without the need to have a server) is on my todo list.
Btw, which missing plugin exceptions have you received? If needed, you may remove the bad plugins from your local maven repo ($HOME/.m2/repository/...); your next build should download them again from the internet maven repositories.
Hi there,
for the mailbox api part I suggest you have a look at the JPA implementation. This will give you a feeling for what needs to get done. After that have a look at the store module, which contains everything you need to write your implementation. It already has many abstract base classes which just need to be extended. Once you get the idea it's really straightforward
And yes, James uses Spring to load the right classes depending on the .xml files
I installed James 3.0m2 on localhost. Installation was easy; I just had to disable exim so James could bind to port 25. I successfully sent an email and configured Icedove (Mozilla Thunderbird) to get the mail by IMAP.
I also successfully built James (trunk) on my machine, but I had to disable the test building because maven complained about missing plug-ins in the tests.
Last, I had a look at the James mailbox API. Didn't know where to start, but got it: it's big.
My first try was to find the mailbox dependency in James Server but I couldn't find it. Luckily I had just read about Dependency Injection () and Spring. James is using dependency injection, fitting in the right mailbox API at runtime, based on config files. Right?
My favorite tutorial for hadoop setup:
+1
I just read your application on google-melange and it's ok to me.
Good job
I have added your recommendations to my application. Thanks for all the help.
Please keep in mind that many of these are new to me and right now I am a bit overwhelmed by that.
I see a lot of new names and it's a bit discouraging.
I wish to see the project complete so please keep things simple for me until I can manage all this information.
First make it run and then make it run fast.
- Ioan
Robert,
Regarding distributed uid generation, we have defined [gsoc2011] Design and implement Distributed UID generation
I must reread your post and reread the RFCs to have a better idea.
I suppose this doesn't change anything in the scope of Ioan's application. If mails are persisted, and we have a solution for uid, we have a distributed james. But uid is not in scope here. wdyt?
- Eric
A distributed email server is an interesting topic
There are a number of different ways one might reasonably approach the problem. Take a look at the way UIDs are defined in IMAP [1]. The strong uniqueness qualities may only be required within a mailbox, not universally. Though mailboxes can be shared, the requirements for maintaining message sequence numbers limit how well concurrent access to a single mailbox will scale.
This suggests to me that the framers of the IMAP standard considered the possibility that distribution might happen between the protocol and mailbox tiers. In this scenario, the servers handling client connections and those handling mailboxes would operate in separate processes, potentially separated by a network. Each mailbox could then be located close to dedicated storage.
I believe that a consequence of this engineering decision by the standards group may be that a fully distributed UID may not really be necessary. I suspect that using HBase [3] or Cassandra [4] to store UIDVALIDITY+UID keyed by mailbox name (perhaps using Gora [5]) would be good enough.
[1].
[3]
[4]
[5]
IMHO JSON is an interesting option for email storage, and a Mime4J module parsing a MIME mail into JSON would be useful for much more than just Avro
Regarding "Another problem to settle is the format and compression of the HDFS files to store the emails": an option would be Avro (other options would be to use the different native HDFS file types, or to develop a MailHadoopFile).
The nice thing is that you define your format in JSON and you get the persistence of your objects in Hadoop for free (direct + via map/reduce).
Twitter for example uses a similar mechanism to store their tweets (very small objects) in their distributed store.
To be tested/compared with other alternatives...
Would be cool to inject this in your application. Tks,
Ioan, please link to from your google-melange application. tks.
HDFS is good at (relatively) small numbers of large files. For small files, the main limitation was block size. Hadoop moves fast. Need to establish early the current state of the art, and what tuning would be required.
Yep
IMHO there's an art to RFCs. Implementation requires lots of reading and re-reading but you don't need to do that if you just want to use them. Aim to skim read them, so you know where to find information rather than retain any details.
The Structure of a Mail
------------------------------------
Numerous RFCs describe the structure which emails should have. Though variations are encountered in the wild, wild web, it's important to read these standards to start to understand the data structure used by mail.
Take a look at the Mime4J mail parser () and here's a selection of RFC to skim:
(and for historic reasons also:)
... and Ioan, don't forget to delete the apache-extras repo () or to clearly indicate on the project home page that the code is now part of the official Apache James mailbox repo (if you want to keep the hg repo as a reminder)
Thx again
Eric
Docker/OSX Quickstart (not grokking docker yet? start here)
Docker has only been around since 2013, but it seems like it's all over my Twitter feed and RSS reader. I've gone through the "Hello world" example in the past, but never felt like I really understood either the value proposition or exactly how it works. This week, I had some time to sit down and give it more of my attention. What I found was that it was neither as mysterious nor as complicated as I anticipated.
Installing on a Mac
Docker was born on Linux and uses Linux internals like LXC to work its magic. There is a Windows native version in the works (not that anyone cares). But given that software engineering in the Bay Area is dominated by Macs, let’s start by looking at how to get this installed and running on OSX.
First off, don’t try to install it via
brew, or any other package manager. Docker is written in Go, which has the advantage of compiling down to dependency-less binaries. Plus, the project is moving so fast that the versions in package managers are out of date. So, suck it up and install it manually by downloading the binary.
If you can open a terminal and run docker --version, you're good to go. This tutorial is for version 1.5.0.
Boot2Docker
If you try to run a docker image now, you will get a cryptic error like dial unix /var/run/docker.sock: no such file or directory. This is because the Docker daemon process is not running. Actually, it cannot run on a Mac! Instead, you must use boot2docker, which is a tiny virtual machine that runs in VirtualBox and has the Docker daemon. Again, use the binary installer (sorry!).
To get up and running, open a terminal the run the following.
boot2docker init boot2docker up eval "$(boot2docker shellinit)" docker run ubuntu:14.04 /bin/echo 'Hello world'
That’s your hello world example. Let’s breakdown what’s happening here.
boot2docker init creates a new virtual machine in VirtualBox.
The next step, boot2docker up, runs the virtual machine. The eval "$(boot2docker shellinit)" step sets some environment variables that tell Docker what container context you are currently in. If you run just boot2docker shellinit by itself, you can see the raw output:
Writing /Users/chase/.boot2docker/certs/boot2docker-vm/ca.pem Writing /Users/chase/.boot2docker/certs/boot2docker-vm/cert.pem Writing /Users/chase/.boot2docker/certs/boot2docker-vm/key.pem export DOCKER_HOST=tcp://192.168.59.104:2376 export DOCKER_CERT_PATH=/Users/chase/.boot2docker/certs/boot2docker-vm export DOCKER_TLS_VERIFY=1
The first three lines are just informational, only the last three lines are printed to stdout.
The last line, docker run ubuntu:14.04 /bin/echo 'Hello world', actually instantiates a new Docker container (using Ubuntu 14.04) and runs a single command inside it.
A Note about Containers
Containers are little sandboxed Linux instances. Images are the serialized file definition that containers are spun up from. The magic of Docker is that the images are completely portable. This concept escaped me at first. I was under the impression that you needed to build an image on your Mac to run it there, and then build a separate image on Amazon EC2 to run the same thing there.
In fact, you can build an image on your Mac, and then essentially scp that file up to AWS and run it. In reality, you don't even need to copy it manually, that's what Docker Hub is for.
Also, the Linux distribution used inside your Docker container does NOT have to match the distribution of the host operating system. You can run Ubuntu inside a CentOS host, and visa-versa.
Finally, images have a built-in layering mechanism. Essentially, you can have a base image and then any number of small layers of diffs on top of that. This is a powerful optimization and abstraction, which we will talk about later.
Example Python Flask App
This is the canonical tutorial for Python folks getting started with Docker, and yet I could not complete it successfully with any of the documentation I found. Here is my own special snowflake version.
First, create a new directory called flask. Inside, you are going to create three files.

The first file is called app.py, which is just a simple hello world Flask app.
from flask import Flask
import os

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello World!'

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
Then, create a requirements.txt file to list Flask as a dependency:

Flask==0.10.1

Finally, create your Dockerfile:
FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python app.py
Let’s take a moment and breakdown this last file. The
FROM line tells Docker to base this container off of a named image in the public repository called
python, and to use the named tag of that image (kind of like a version) of
2.7.
The
ADD line copies your code from the current directory
. to
/code inside the Docker container Linux instance.
WORKDIR sets the working directory there as well.
RUN can be specified multiple times. It tells Docker to run these commands when building the container for the first time. Run steps are actually cached; changing one of them later will only result in that one being run again. This is possible due to the container layering we talked about earlier.
EXPOSE tells Docker that the container will be serving port 5000 externally. This is the port we will run the flask app on.
Finally, the
CMD line specifies the command that will run inside the container as your main daemon process. If you need multiple daemons, look into docker-compose.
Run it
To run the example, execute the following commands:
open "http://$(boot2docker ip):5000" docker build -t flask-example . docker run -it -p 5000:5000 -v $(pwd):/code:ro flask-example
This should have opened a browser tab before spawning flask. That likely came up with a “This webpage is not available” error page, but if you refresh it now, you should see your “Hello World!” text.
What you have done is create a named image called
flask-example and run it. You can even edit the code on your local file system and it will sync over to Docker (thanks to
-v) and flask will restart.
Running the same container on AWS
Now, let’s look at how to run that same container on AWS. First, go sign up for Docker Hub. It’s free.
Let’s assume your Docker Hub username is
foobar. First, re-build and publish your image:
docker build -t foobar/flask-example . docker login docker push foobar/flask-example
Now, create a new EC2 instance. Make sure to use the “Amazon Linux” base image, which will make installing Docker easier. SSH into your instance and run the docker container:
sudo yum install -y docker; sudo service docker start sudo docker run -it -p 8000:5000 foobar/flask-example
The first line simply installs Docker and starts it. The second line pulls down your image from Docker Hub (note: no need to authenticate!), runs it in an interactive shell, and maps the external port 8000 on the host EC2 instance to port 5000 inside the container.
If you have your security group setup to expose port
8000, you should be able to open this EC2 public host name on port 8000 in a web browser.
More Stuff
When I was getting started with this, I made the mistake of reading about and trying to leverage
docker-compose and
docker-machine right away. These are official plugins, which ease the configuration of multi-service and multi-machine capabilities in Docker, respectively. I suggest NOT starting in with those until you have the above basics buttoned down. I found that they clouded my understanding of what was happening at first.
https://chase-seibert.github.io/blog/2015/04/11/docker-osx-quickstart.html
Difference between revisions of "Talk:SDK Beta"
Revision as of 03:00, 16 June 2008
-- Where do you get SDK Beta? -- Hey Hey. Sorry for being such a newb. Where do you get the SDK Beta codeline? Says nothing about it on this page, nor the hl2coders list. I've never developed long enough to see an update, and finally there is one! LOL.
Do you just find it by updating steam and "create a mod"? Or is that codeline still the old version? Is there somewhere else (a repo of some sort) that the sdk beta code is located in? Thanks --Mflux 17:54, 14 Jun 2008 (PDT)
What should be done to this page?—ts2do
- It will be used for future beta releases, as will the Sdk Beta Bugs page. --JeffLane 18:48, 17 Jan 2006 (PST)
- Given the confusion of at least one user, perhaps it would be prudent to create a template(s) for the beta pages to say "This Beta is now closed" and "This Beta is currently active"? --Giles 12:04, 4 Jun 2006 (PDT)
Heyyo,
I'd like to point out.. since the first time valve officially added basic third party bot support to the source sdk..
Since this update:
BOTS CAN'T BE BUILT EFFICIENTLY FOR THE SOURCE ENGINE. vALVE has taken no actions to try and rectify the situation. So here I am trying again...
- They haven't made it possible to read current bot inventory (weapons/items)
- Can't use any kind of code to detect bot's (health/armor/being damaged by what kind of weapon or hazard like drowning)
- Cannot find out what is current bot ammo counts of both in the gun, and spare ammo.. so bot creators can only "guess" when the bot's out of ammo and randomly make the bot switch weapons.. which brings us to the next problem...
- Cannot switch weapons properly on bots. The way vALVE have it set up, they just spawn new weapon entities.. which is ok for singleplayer campaign where bots don't have to change weapons ever.. but in multiplayer it means the game eventually crashes since you can't just keep creating entities.
- The physics aren't updated right for the bots, so that in HL2DM, if you toss a physics object at a bot or the secondary attack of weapon_ar2 (The Combine Assault Rifle's Energy Ball) sometimes the physics objects will pass right through the bots.
- Not as important.. but there's no code that shows how to wrap the built in navmesh system of the CSS bot which is present in all Source Engine games to be used with third party bots. Most bot creators have invented different navmeshes that can wrap properly to geometry instead of just square boxes that are meant to snap to a grid.. but for some bot coders who wish to try the built in navmesh with their bots, it would probably help them out quite a bit.
- NOTE: some of the above can be hacked into the sourceSDK to read stuff like bot ammo counts... but it would be an unstable hack, and still would mean if the bot wanted to switch weapons he would still have to create those new weapon entities and end up crashing the game/server due to creating too many entities.
--ThE_MarD 07:30, 11 Mar 2008 (PDT)
Code fix
You can't play a game with the current beta code, here are the few things to change :
Index: waterbullet.cpp =================================================================== @@ -28,9 +28,6 @@ LINK_ENTITY_TO_CLASS( waterbullet, CWaterBullet ); -IMPLEMENT_SERVERCLASS_ST( CWaterBullet, DT_WaterBullet ) -END_SEND_TABLE() - //----------------------------------------------------------------------------- // Purpose: //-----------------------------------------------------------------------------
Index: waterbullet.h =================================================================== @@ -25,7 +25,6 @@ void BulletThink(); DECLARE_DATADESC(); - DECLARE_SERVERCLASS(); }; #endif // WEAPON_WATERBULLET_H
Index: sdk_usermessages.cpp =================================================================== @@ -21,6 +21,7 @@ usermessages->Register( "GameTitle", 0 ); // show game title usermessages->Register( "ItemPickup", -1 ); // for item history on screen usermessages->Register( "ShowMenu", -1 ); // show hud menu + usermessages->Register( "Rumble", -1 ); usermessages->Register( "Shake", 13 ); // shake view usermessages->Register( "Fade", 10 ); // fade HUD in/out usermessages->Register( "VGUIMenu", -1 ); // Show VGUI menu
(Tested with the "create mod from scratch" option.)
Asibasth 14:55, 14 Jun 2008 (PDT)
where are the new editors???
Hi! i got the new beta and stuffs, but im not able to use the new editors! plz help! sencerly, lazy genius.
- Hey there, to access the new (Commentary, Particle, and Material) editors, simply add -tools to the advanced launch options of Episode Two, Team Fortress 2, Portal, or Day of Defeat: Source Beta. (It's in the properties of each game on your Steam's Gamelist.) Hope that helps! --JakeB 17:44, 14 Jun 2008 (PDT)
- Hi jake. I tryed that but I still cant acess them! do I need to add -tools to my mods that use the OB engein??? And also, I first tried adding -tools to portal but when I opened the SDK and selected portal for the current game it didn't show the new editors. is it supposed to? --Lazy genius 19:05, 14 Jun 2008 (PDT)
https://developer.valvesoftware.com/w/index.php?title=Talk:SDK_Beta&diff=prev&oldid=78286
OPS435 Python Lab 5
** DO NOT USE - TO BE UPDATED FOR CENTOS 8.0 **
Contents
- 1 LAB OBJECTIVES
- 2 INVESTIGATION 1: Working with Files
- 3 INVESTIGATION 2: Exceptions and Error Handling
- 4 LAB 5 SIGN-OFF (SHOW INSTRUCTOR)
- 5 LAB REVIEW
LAB OBJECTIVES
- So far, you have created Python scripts to prompt a user to input data from the keyboard. When creating Python scripts, you may also need to be able to process large volumes of information, or store processed data for further processing. The first investigation in this lab will focus on file management, opening files, saving data to files, and reading files.
- NOTE: Since many tasks that system administrators perform deal with files, this is a crucial skill to understand.
- It is very important to provide logic in your Python script in case it encounters an error. An example would be an invalid path-name or trying to close a file that is already closed. The second investigation in this lab will look into how the Python interpreter handles errors (commonly referred to as "exception handling") at run time, and teach you how to write Python code that will run gracefully even when problems occur during program execution.
PYTHON REFERENCE
- In previous labs, you have been advised to make notes and use online references. This also relates to working with files and learning about object-oriented programming. You may be "overwhelmed" with the volume of information involved in this lab.
- Below is a table with links to useful online Python reference sites (by category). You may find these references useful when performing assignments, etc.
INVESTIGATION 1: Working with Files
- You will now learn how to write Python scripts in order to open text files, to read the contents within a text file, to process the contents, and finally to write the processed contents back into a file. These operations are very common, and are used extensively in programming. Examples of file operations would include situations such as logging output, logging errors, reading and creating configuration/temporary files, etc.
- Files are accessed through the use of file objects. An object is a storage location which stores data in the form of attributes (variables) and methods (functions). Creating our own objects will be covered later in investigation 3.
PART 1 - Reading Data From Files
- Perform the Following Steps:
- Create a new python file for testing.
- Create a new text file in the lab5 directory:
cd ~/ops435/lab5 vim ~/ops435/lab5/data.txt
- Place the following content inside the new text file and save it:
Hello World This is the second line Third line Last line
In order to read data from a text file, we need to create an object that will be used to access the data in a file. In some programming languages (like C) this is called a file descriptor, or a file pointer. In Python, it's an object.
- Now let's write some Python code to open the created file for reading. We will define an object called "f" to help retrieve content from our text file. Issue the following:
f = open('data.txt', 'r')
The open() function takes two string arguments: a path to a file, and a mode option (for reading, writing, appending, etc.). The open() function returns a file object, which will allow us to read the lines inside the file.
- Here are the most useful functions for text file manipulation:
f.read() # read all lines and stores in a string f.readlines() # read all lines and stores in a list f.readline() # read first line, if run a second time it will read the second line, then third f.close() # close the opened file
- Next, read data from the buffer of the opened file and store the contents into a variable called
read_data, and then confirm the contents of the variable
read_data:
read_data = f.read() print(read_data)
After you have completed accessing data within a file, you should close the file in order to free up the computer resources. It is sometimes useful to first confirm that the file is still open prior to closing it. But really you should know - it's your code that would have opened it:
f.close() # This method will close the file
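To confirm whether a file is still open, check the file object's closed attribute. A minimal sketch (the scratch file name is illustrative):

```python
import os
import tempfile

# create a scratch file so the sketch is self-contained
path = os.path.join(tempfile.gettempdir(), 'ops435_closed_demo.txt')
f = open(path, 'w')
print(f.closed)   # False: the file is currently open
f.close()
print(f.closed)   # True: the file has been closed
os.remove(path)   # clean up the scratch file
```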
Let's take a moment to revisit the file read sequence. The following code sequence will open a file, store the contents of a file into a variable, close the file and provide confirmation that the file has been closed:
f = open('data.txt', 'r') # Open file read_data = f.read() # Read from file f.close() # Close file
- read_data in this case contains the data from the file in a single long string. The end of each line in the file will show the special character '\n' which represents the newline character in a file used to separate lines (or records in a traditional "flat database file"). It would be convenient to split the line on the new-line characters, so each line can be stored as an item in a list.
- Store the contents of our file into a list called list_of_lines:
read_data.split('\n') # Returns a list list_of_lines = read_data.split('\n') # Saves returned list in variable print(list_of_lines)
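One caveat: because the file's data ends with a newline, split('\n') leaves a trailing empty string in the list. The splitlines() string method avoids this:

```python
read_data = 'Hello World\nThis is the second line\nThird line\nLast line\n'

# split('\n') keeps an empty item after the final newline
print(read_data.split('\n'))    # last item is ''

# splitlines() drops the trailing empty entry
print(read_data.splitlines())   # last item is 'Last line'
```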
Although the above sequence works, there are functions and methods that we can use with our object (called "f") to place lines from our file into a list. This helps reduce code and is the more common way to store multiple lines or records within a list.
- Try these two different means to store data into a list more efficiently:
# METHOD 1: f = open('data.txt', 'r') method1 = list(f) f.close() print(method1) # METHOD 2: f = open('data.txt', 'r') method2 = f.readlines() f.close() print(method2)
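A third common pattern, not used in this lab but worth knowing, is to iterate over the file object itself inside a with statement, which closes the file automatically (the sketch recreates data.txt so it stands alone):

```python
# recreate the sample file so the sketch is self-contained
with open('data.txt', 'w') as f:
    f.write('Hello World\nThis is the second line\nThird line\nLast line\n')

# iterate line by line; memory-friendly for large files
lines = []
with open('data.txt', 'r') as f:
    for line in f:
        lines.append(line.rstrip('\n'))
print(lines)
```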
Create a Python Script Demonstrating Reading Files
- Create the ~/ops435/lab5/lab5a.py script.
- Use the following as a template:
#!/usr/bin/env python3 def read_file_string(file_name): # Takes a filename string, returns a string of all lines in the file def read_file_list(file_name): # Takes a filename string, returns a list of lines without new-line characters if __name__ == '__main__': file_name = 'data.txt' print(read_file_string(file_name)) print(read_file_list(file_name))
- This Python script will read the same file (data.txt) that you previously created
- The read_file_string() function should return a string
- The read_file_list() function should return a list
- The read_file_list() function must remove the new-line characters from each line in the list
- Both functions must accept one argument which is a string
- The script should show the exact output as the samples
- The script should contain no errors
- Sample Run 1:
python3 lab5a.py Hello World This is the second line Third line Last line ['Hello World', 'This is the second line', 'Third line', 'Last line']
- Sample Run 2 (with import):
import lab5a file_name = 'data.txt' print(lab5a.read_file_string(file_name)) # Will print 'Hello World\nThis is the second line\nThird line\nLast line\n' print(lab5a.read_file_list(file_name)) # Will print ['Hello World', 'This is the second line', 'Third line', 'Last line']
- 3. Download the checking script and check your work. Enter the following commands from the bash shell.
cd ~/ops435/lab5/ pwd #confirm that you are in the right directory ls CheckLab5.py || wget python3 ./CheckLab5.py -f -v lab5a
- 4. Before proceeding, make certain that you identify all errors in lab5a.py. When the checking script tells you everything is OK - proceed to the next step.
PART 2 - Writing To Files
- Up to this point, you have learned how to access text from a file. In this section, you will learn how to write text to a file. Writing data to a file is useful for creating new content in a file or updating (modifying) existing data contained within a file.
- When opening a file for writing, the 'w' option is specified with the open() function. When the 'w' option is specified - previous (existing) content inside the file is deleted. This deletion takes place the moment the open() function is executed, not when writing to the file. If the file that is being written to doesn't exist, the file will be created upon the file opening process.
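A quick sketch verifying that truncation happens at open() time, before any write (the file name is illustrative):

```python
import os

# put some content in a file first
with open('truncate_demo.txt', 'w') as f:
    f.write('old content\n')

f = open('truncate_demo.txt', 'w')           # truncation happens right here
size = os.path.getsize('truncate_demo.txt')  # nothing has been written yet
print(size)                                  # 0: the old content is already gone
f.close()
os.remove('truncate_demo.txt')               # clean up the demo file
```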
- Create a temporary Python file and open a non-existent data file (called file1.txt) for writing:
f = open('file1.txt', 'w')
- To confirm that the new file now exists and is empty, issue the following shell command:
ls -l file1.txt
- To add lines of text to the file, you can use the write() method of the file object. Typically you end every line in a text file with the special character '\n' to represent a "new line". Multiple lines may also be placed inside a single write operation: simply put the special character '\n' wherever a line should end. Try adding multiple lines:
f.write('Line 1\nLine 2 is a little longer\nLine 3 is too\n')
- Once the write() method has completed, the final step is to close() the file. The file MUST be closed properly or else data will not consistently be written to the file. NOTE: Not closing a file can lead to corrupted or missing file contents:
f.close()
- View the contents of the file in the shell to make sure the data was written successfully:
cat file1.txt
- You will now create a new file called file2.txt, but this time run multiple write() methods in sequence. You will often write to a file multiple times inside a loop:
f = open('file2.txt', 'w') f.write('Line 1\nLine 2 is a little longer\nLine 3 is as well\n') f.write('This is the 4th line\n') f.write('Last line in file\n') f.close()
- Issue the following shell command to confirm that the contents were written to file2.txt:
cat file2.txt
- Issue the following shell commands to backup both of your newly-created files and confirm backup:
cp file1.txt file1.txt.bk cp file2.txt file2.txt.bk ls -l file*
- Let's demonstrate what can happen if you perform an incorrect write() operation:
f = open('file2.txt', 'w')
cat file2.txt
- You should notice that the previous content in your file2.txt file was destroyed. Why do you think the previous data is no longer there?
- Restore your file from the backup and verify the restoration:
cp file2.txt.bk file2.txt cat file2.txt
- To avoid overwriting the contents of a file, we can append data to the end of the file instead. Use the option 'a' instead of 'w' to perform appending:
f = open('file1.txt', 'a') f.write('This is the 4th line\n') f.write('Last line in file\n') f.close()
cat file1.txt
- The final thing to consider when writing to files is to make certain that the values being written are strings. Before trying to place integers, floats, lists, or dictionaries into a file, first convert the value using the str() function or extract the specific strings from the items in the list.
- In this example we convert a single number and all the numbers in a list to strings before writing them to a file:
my_number = 1000 my_list = [1,2,3,4,5] f = open('file3.txt', 'w') f.write(str(my_number) + '\n') for num in my_list: f.write(str(num) + '\n') f.close()
- Confirm that the write() operation was successful
cat file3.txt
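An alternative to the loop above is to build the whole string first with str.join() and write it once (the file name file4.txt is illustrative):

```python
my_list = [1, 2, 3, 4, 5]

# stringify every number, join with newlines, and add a trailing newline
payload = '\n'.join(str(num) for num in my_list) + '\n'

with open('file4.txt', 'w') as f:
    f.write(payload)
print(payload)
```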
Create a Python Script Demonstrating Writing to Files
- Copy ~/ops435/lab5/lab5a.py script to ~/ops435/lab5/lab5b.py script (We need the previous read functions that you created).
- Add the following functions below the two functions that you already created:
def append_file_string(file_name, string_of_lines): # Takes two strings, appends the string to the end of the file def write_file_list(file_name, list_of_lines): # Takes a string and list, writes all items from list to file where each item is one line def copy_file_add_line_numbers(file_name_read, file_name_write): # Takes two strings, reads data from first file, writes data to new file, adds line number to new file
- Replace the main section of your Python script near the bottom with the following:
if __name__ == '__main__': file1 = 'seneca1.txt' file2 = 'seneca2.txt' file3 = 'seneca3.txt' string1 = 'First Line\nSecond Line\nThird Line\n' list1 = ['Line 1', 'Line 2', 'Line 3'] append_file_string(file1, string1) print(read_file_string(file1)) write_file_list(file2, list1) print(read_file_string(file2)) copy_file_add_line_numbers(file2, file3) print(read_file_string(file3))
- append_file_string():
- Takes two string arguments
- Appends to the file(Argument 1) all data from the string(Argument 2)
- write_file_list():
- Takes two arguments: a string and a list
- Writes to file(Argument 1) all lines of data found in the list(Argument 2)
- copy_file_add_line_numbers():
- Takes two arguments: Both are files path-names (which happen to be strings)
- Reads all data from first file(Argument 1), and writes all lines into second file(Argument 2) adding line numbers
- Line numbers should be added to the beginning of each line with a colon next to them(see sample output below for reference)
- Hint: Use an extra variable for the line number
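For the line-number hint, one idiomatic alternative to a manual counter variable is enumerate(), which pairs each line with a 1-based number:

```python
lines = ['Line 1', 'Line 2', 'Line 3']

# enumerate(..., start=1) yields (1, 'Line 1'), (2, 'Line 2'), ...
numbered = []
for num, line in enumerate(lines, start=1):
    numbered.append('{}:{}'.format(num, line))
print(numbered)   # ['1:Line 1', '2:Line 2', '3:Line 3']
```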
- Sample Run 1:
rm seneca1.txt seneca2.txt seneca3.txt ./lab5b.py First Line Second Line Third Line Line 1 Line 2 Line 3 1:Line 1 2:Line 2 3:Line 3
- Sample Run 2 (run second time):
python3 lab5b.py First Line Second Line Third Line First Line Second Line Third Line Line 1 Line 2 Line 3 1:Line 1 2:Line 2 3:Line 3
- Sample Run 3 (with import):
import lab5b file1 = 'seneca1.txt' file2 = 'seneca2.txt' file3 = 'seneca3.txt' string1 = 'First Line\nSecond Line\nThird Line\n' list1 = ['Line 1', 'Line 2', 'Line 3'] lab5b.append_file_string(file1, string1) lab5b.read_file_string(file1) # Will print 'First Line\nSecond Line\nThird Line\nFirst Line\nSecond Line\nThird Line\n' lab5b.write_file_list(file2, list1) lab5b.read_file_string(file2) # Will print 'Line 1\nLine 2\nLine 3\n' lab5b.copy_file_add_line_numbers(file2, file3) lab5b.read_file_string(file3) # Will print '1:Line 1\n2:Line 2\n3:Line 3\n'
- 3. Download the checking script and check your work. Enter the following commands from the bash shell.
cd ~/ops435/lab5/ pwd #confirm that you are in the right directory ls CheckLab5.py || wget python3 ./CheckLab5.py -f -v lab5b
- 4. Before proceeding, make certain that you identify all errors in lab5b.py. When the checking script tells you everything is OK - proceed to the next step.
INVESTIGATION 2: Exceptions and Error Handling
- Running into errors in programming will be a common occurrence. You should expect that it will happen for any code that you write. In python, when an error occurs, the python runtime raises an exception. This section will teach you to catch these exceptions when they happen and to allow the program to continue running, or to stop program execution with a readable error message.
PART 1 - Handling Errors
There is a massive number of built-in exceptions, so online references can be useful. If you are searching for a common exception, check out the Python Exception Documentation.
In this section, we will provide examples of how to handle a few exceptions when creating Python scripts.
- To start, open the ipython3 shell. Before attempting to handle exception errors, let's create an error, and then see how we can "handle" it:
print('5' + 10)
- You should get an exception error similar to the following:
--------------------------------------------------------------------------- Traceback (most recent call last) File "<stdin>", line 1, in <module> TypeError: Can't convert 'int' object to str implicitly
Question: According to the exception error message, what do you think caused the error?
- Open the Python Exception Documentation (linked above) and scroll or search for TypeError. Take a few moments to determine what a TypeError exception error means.
You should have learned that the TypeError exception error indicates a mismatch of types (i.e. string, int, float, list, etc.). Since Python doesn't know how to combine the mismatched values, we could change the number into a string, change the string into a number, or at least provide a more user-friendly error message.
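For example, either conversion resolves the mismatch:

```python
# convert the int to a string -> string concatenation
print('5' + str(10))    # '510'

# convert the string to an int -> integer addition
print(int('5') + 10)    # 15
```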
If we don't want the user of our program to have to learn how to read Python exceptions (which is a very good idea), we can catch/trap/handle this error when it happens. This is done with a specific block of code called a try clause, where you place the code that might fail between the try: and except: coding blocks. In a general sense, it works like a modified if-else statement: the try block acts as a test, and the except block will or will not handle the exception depending on whether it occurs. That is to say, if no error occurs in the code contained in the try section, the script continues as usual; but if an error does occur there, it can be handled with additional coding (like a user-friendly error message).
Let's demonstrate how to handle our TypeError, first with code that does not contain an error, and then with similar code that DOES generate an error.
- The following code does NOT generate an error:
try: print(5 + 10) except TypeError: print('At least one of the values is NOT an integer') 15
You should notice that since there was NOT an error, the Python script performed the required task.
- The following code handles an exception error to provide user-friendly feedback that at least one of the values is not an integer:
try: print(5 + 'ten') except TypeError: print('At least one of the values is NOT an integer') At least one of the values is NOT an integer
- Let's generate another type of error where we try to open a file that doesn't exist:
f = open('filethatdoesnotexist', 'r')
- Now, catch and handle this exception error:
try: f = open('filethatdoesnotexist', 'r') f.write('hello world\n') f.close() except FileNotFoundError: print('no file found')
Multiple exceptions can also be caught at the same time, such as when the file does not exist, is a directory, or we don't have permission to access it.
- To test out the error handling code (previously issued), try removing permissions from the file, or specify a directory instead of a regular file, and then try to open it:
try: f = open('filethatdoesnotexist', 'r') f.write('hello world\n') f.close() except (FileNotFoundError, PermissionError, IsADirectoryError): print('failed to open file')
- By taking the time to view the Python Exception Hierarchy, you can see how errors get caught in python. FileNotFoundError, PermissionError, and IsADirectoryError all inherit from OSError. This means that while using more specific errors can be useful for better error messages and handling, it's not always possible to anticipate every specific error; catching the parent OSError covers them all.
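You can verify this inheritance directly in the interpreter:

```python
# FileNotFoundError, PermissionError and IsADirectoryError all derive from OSError,
# so a single 'except OSError:' clause would catch any of them
print(issubclass(FileNotFoundError, OSError))   # True
print(issubclass(PermissionError, OSError))     # True
print(issubclass(IsADirectoryError, OSError))   # True
```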
- Another way to catch multiple exceptions is with separate except blocks. When catching multiple exceptions, make certain to catch the lowest ones in the exception hierarchy first. For example, if you put 'Exception' first, both 'OSError' and 'FileNotFoundError' would never get caught.
try: f = open('abc', 'r') f.write('hello world\n') f.close() except (FileNotFoundError, PermissionError): print('file does not exist or wrong permissions') except IsADirectoryError: print('file is a directory') except OSError: print('unable to open file') except: print('unknown error occurred') raise
TIP: In python it's usually best to 'try:' and 'except:' code rather than to attempt to anticipate everything that could go wrong with if statements. For example, instead of checking to see if a file exists and we have read permissions, it can be better to just try and read the file and fail and catch any errors with 'OSError'.
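A minimal sketch of this try-first style; the function name and error string here are illustrative, not part of the lab:

```python
def read_first_line(path):
    """Try to read the first line of a file; report failure without crashing."""
    try:
        with open(path, 'r') as f:
            return f.readline()
    except OSError:   # covers missing files, bad permissions, directories, ...
        return 'error: could not read file'

print(read_first_line('no_such_file_12345.txt'))   # 'error: could not read file'
```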
Create a Python Script Which Handles Errors
- Create the ~/ops435/lab5/lab5c.py script.
- Use the following as a template:
#!/usr/bin/env python3 def add(number1, number2): # Add two numbers together, return the result, if error return string 'error: could not add numbers' def read_file(filename): # Read a file, return a list of all lines, if error return string 'error: could not read file' if __name__ == '__main__': print(add(10,5)) # works print(add('10',5)) # works print(add('abc',5)) # exception print(read_file('seneca2.txt')) # works print(read_file('file10000.txt')) # exception
- Sample Run 1:
python3 lab5c.py 15 15 error: could not add numbers ['Line 1\n', 'Line 2\n', 'Line 3\n'] error: could not read file
- Sample Run 2 (with import):
import lab5c lab5c.add(10,5) 15 lab5c.add('10',5) 15 lab5c.add('10','5') 15 lab5c.add('abc','5') 'error: could not add numbers' lab5c.add('hello','world') 'error: could not add numbers' lab5c.read_file('seneca2.txt') ['Line 1\n', 'Line 2\n', 'Line 3\n'] lab5c.read_file('file10000.txt') 'error: could not read file'
- 3. Exit the ipython3 shell, download the checking script and check your work. Enter the following commands from the bash shell.
cd ~/ops435/lab5/ pwd #confirm that you are in the right directory ls CheckLab5.py || wget python3 ./CheckLab5.py -f -v lab5c
- 4. Before proceeding, make certain that you identify any and all errors in lab5c.py. When the checking script tells you everything is OK - proceed to the next step.
LAB 5 SIGN-OFF (SHOW INSTRUCTOR)
- Have Ready to Show Your Instructor:
- ✓ Output of:
./CheckLab5.py -f -v
- ✓ Output of:
cat lab5a.py lab5b.py lab5c.py
LAB REVIEW
- What is the purpose of a file object?
- Write a Python command to open the text file called customers.txt for read-only operations.
- Write Python code to efficiently store the contents of the file in question #2 as a large string (including new-line characters) called customer_data.
- Write Python code to store the contents of the file in question #2 as a list, removing the new-line characters.
- What is the purpose of closing an open file? Write a Python command to close the file opened in question #2.
- Write the Python command to confirm you successfully closed the customers.txt file in question #5. What is the returned status from that command to indicate that the file has been closed?
- What is the difference between opening a file for writing data as opposed to opening a file for appending data? What can be the consequence if you don't understand the difference between writing and appending data?
- Write a Python command to open the file customer-data.txt for writing data.
- Write a Python command to save the text: customer 1: Guido van Rossum (including a new-line character) to the opened file called customer-data.txt
- Briefly explain the process of writing a list as separate lines to an open file.
- What is the purpose of handling exception errors?
- Write a Python script to prompt a user for the name of the file to open. Use exception error handling to provide an error message that the specific file name (display that exact name) does not exist; otherwise, open the file for reading and display the entire contents of the file (line-by-line).
https://wiki.cdot.senecacollege.ca/wiki/OPS435_Python_Lab_5
|
import java.util.Scanner; public class square { public static void main ( String [] args) {
Scanner keyboard = new Scanner (System.in); int N; N = keyboard.nextInt(); System.out.println(" Enter number between 1 , 10 ?? "); while ( N =< 10 ) System.out.println(N + " = " + N * N); N++; System.out.println(" Do you want to square another number ?? "); System.out.println(" Enter yes or no "); String A = keyboard.next(); ; System.out.println(A.equalsIgnoreCase("yes")); System.out.println(" "); System.out.println("Thank you "); }
}
i should use while to get the square of the number entered by the user ..
import java.util.Scanner; public class square{ public static void main(String [] args){ Scanner keyboard = new Scanner(System.in); System.out.println("Enter number between 1 , 10 ?? "); int N = keyboard.nextInt(); System.out.println(N + " = " + (N*N)); } }
thx =)
but not what i was asking for T_T I want it using while .. I really tried hard to do it but i dont know where is the mistake ='(
I appreciate you'r answer..
Thank you =)
http://www.roseindia.net/answers/viewqa/Java-Beginners/17455-why-cant-i-close-this-.html
|
On 19 Jul 2014, at 10:38 , Robert Samuel Newson <rnewson@apache.org> wrote:
That is what I meant to express, with the caveat that we should be
careful, taking a conservative stance, so we can meet in the middle.
Most apps should continue to work on CouchDB 2.0.
Specifically, the regular document CRUD cycle should work as-is.
Especially moving things around in the JSON usually goes further
than the HTTP/Couch layer of most apps, as it is usually passed
down into the rest of the app, while HTTP specifics are kept on
the outside.
In that scenario, adding properties should be easier to do than
removing them (e.g. _conflicts could be standard, but renaming
_rev to _mvcc would break things more significantly), although
Bob mentioned the replicator compatibility as a major concern,
so we need to make sure this is doable.
My main point here is to start a discussion about how we would
go about evolving this down the road and my suggestion was the
separate API endpoint that we can mess with at will and gradually
introduce until we switch at a later time when we feel confident
that people have migrated, or a solid compatibility API is available.
I see us having three discussions:
1. What do we want to fix/break for 2.0?
2. How do we introduce fixes/breaks that we aren’t comfortable doing for 2.0?
3. What do we want to fix/break for later versions?
From this thread, I’d handwavingly suggest these fall into category 1:
(as per the “most apps should just continue to work”-mantra):
- timeout and heartbeat params for /_db_updates work in a different way
than the same parameters for the changes feed;
- we need to find a way to pass open_revs in the POST body instead of
tweaking the max URL param;
- we have /db/_revs_diff and /db/_revs_missing endpoints which are
doing the same job. Well, the latter is only used by the pre-1.1 CouchDB
replicator.
- /db/doc accepts conflicts, deleted_conflicts and revs params. At the
same time we provide a meta param which includes all of them.
- make the eventsource feed follow the specification format more closely
than it does now
- MVCC for /db/_security and allow atomic changes for admins/members only
- a variant of “Changing the default responses for conflicts to include all
versions (or no version).” where ?conflicts=BOOL defaults to true, so we
get an additional _conflicts: [] member on regular GETs (if there are
conflicts), but not the conflicting versions themselves (see above note
about additional doc members)
- Fix the list API (inside couchjs) so that it's a pure callback like
everything else.
- 'JSONP responses should be sent with a "application/javascript"'
These fall into category 3:
- Change _rev to _mvcc or other.
- Move document metadata elsewhere (sub-object, headers, whatever)
- Changing the default responses for conflicts to include all versions
(or no version).
- more RESTy API (move /_all_docs to /, db info to _info etc), self-defining REST API
- don’t pollute top level namespace (e.g. /database moves to /db/database)
This isn’t exhaustive, and we don’t yet know the answers to some of them.
As a repeat: with our new understanding of SemVer, we are free to ship CouchDB
3.0 a month after 2.0, if we really want to. We are not beholden to marketing
version numbers after 2.0 (strictly, we aren’t for 2.0 either, but it is
rather convenient :).
* * *
The view server protocol change suggested by Samuel is IMHO an internal
change that should not break BC unless people rely on implementation details.
* * *
Most apps should continue to work on CouchDB 2.0.
Jan
http://mail-archives.apache.org/mod_mbox/couchdb-dev/201407.mbox/%3C951221CB-76D0-432D-8123-8C0B23DEEC54@apache.org%3E
lp:python-jujuclient
- Get this branch:
- bzr branch lp:python-jujuclient
Branch merges
Related bugs
Related blueprints
Branch information
- Owner:
- juju-deployers
- Project:
- python-jujuclient
- Status:
- Development
Recent revisions
- 99. By David Britton on 2018-10-17
Merge websocket-client-compat [a=cjwatson] [r=dpb]
Work around SSL option handling bug in some versions of
websocket-client.
- 98. By Tim Van Steenburgh on 2017-04-13
v0.54.0
- 97. By Tim Van Steenburgh on 2016-12-08
Add support for macaroon auth
Makes python-jujuclient usable with a shared controller or model
by automatically attempting macaroon auth using ~/.go-cookies.
Does not contain support for fetching or discharging new
macaroons. In other words, if you don't already have a discharged
macaroon (e.g. the ones created by the juju cli in ~/.go-cookies),
this won't work for you.
- 96. By Tim Van Steenburgh on 2016-09-28
v0.53.3
- 95. By Tim Van Steenburgh on 2016-09-21
[timkuhlman] If an owner is present in the env name parse it out properly
- 94. By Tim Van Steenburgh on 2016-09-14
Import juju1 Enviroment into top namespace for back-compat
- 93. By Tim Van Steenburgh on 2016-08-30
Update parsing to handle new juju switch output format
- 92. By Tim Van Steenburgh on 2016-08-24
Support new `juju show-model` cmd format
- support `juju show-model -m foo` (pre beta 16)
- support `juju show-model foo` (post beta 16)
- 91. By Tim Van Steenburgh on 2016-08-11
show-model no longer accepts -m arg
- 90. By Tim Van Steenburgh on 2016-08-05
v0.53.2
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)
https://code.launchpad.net/~juju-deployers/python-jujuclient/trunk
How to install gcc
:
@Justin-Sowers I will write a quick guide for this and post it here in a .pdf file - keep watching.
@Boken-Lin When I have done the guide, would it be a suitable candidate for a Wiki Tutorial?
@Justin-Sowers @Boken-Lin I have created a new post that contains the guide.
Hope it is useful.
- Ahmed Mkadem
@Kit-Bishop then how can you compile a C program ? can you explain it to me please ?
@Ahmed-Mkadem I compile C/C++ using cross compilation on a Linux system (I use Kubuntu).
Some details can be found at
You may also be interested in the set of C++ libraries and programs I have produced for performing GPIO, I2C and Arduino access on the Omega. The various documentation .pdf files therein give more details.
You might in particular look at the template program that is included in the above referenced package - see and its documentation at
Hope that helps
- Ahmed Mkadem
@Kit-Bishop Thank you !! this is helpful
- anglo marc
@Kit-Bishop hi dear
im new with Linux and want to use the cross compiler
can you please explain how to do these steps:
setting the environment variables, installing the tool chain?
also the omega doesn't have an sftp server, so how can i transfer the executable file?
- anglo marc
@Boken-Lin said in How to install gcc:
hi dear
is there any tutorial explaining that process in actual detail?
- José Luis Cánovas
@anglo-marc you can start from here.
- Przemyslaw Downar
hello i install gcc. I downloaded library i2c from
onion-i2c.c
onion-i2c.h
onion-debug.c
onion-debug.h
wrote a program, copy files to usr/c/ and run gcc
gcc -I usr/c/ -o program usr/c/HTU21D.c
but while compiling gets an error:
/tmp/ccnKdEjA.o: In function `getTemperature()': HTU21D.c:(.text+0x88): undefined reference to `i2c_writeBytes' HTU21D.c:(.text+0x94): undefined reference to `i2c_writeBytes' collect2: error: ld returned 1 exit status
my program:
#include "HTU21D.h"

int main()
{
    printf("Hello world!\n ");
    //printf("%5.2fC\n", getTemperature());
    return 0;
}

double getTemperature()
{
    unsigned char buf[32];
    int status;
    i2c_writeBytes(0, 0x80, 0xF3, 0, 0);
    //status = i2c_read(0, HTU21D_I2C_ADDR, HTU21D_TEMP, buf, 3);
    unsigned int temp = (buf[0] << 8 | buf[1]) & 0xFFFC;
    double tSensorTemp = temp / 65536.0;
    return -46.85 + (175.72 * tSensorTemp);
}
file .h
#ifndef _HTU21D_H_
#define _HTU21D_H_

#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <unistd.h>
#include <onion-i2c.h>

#define HTU21D_I2C_ADDR 0x80
#define HTU21D_TEMP 0xF3
#define HTU21D_HUMID 0xF5

double getTemperature();

#endif
- José Luis Cánovas
- Your file.h should be named HTU21D.h.
- Delete the space after the -I argument.
- Move every unnecessary #include from the .h to the c/cpp file. In this case, all.
- Chris Stratton
None of those suggestions are likely to help. Unless there are missing macro definitions, an "undefined reference" error is an issue with linking rather than compilation.
You need to be passing whatever implements this function to the linker, either as a library or as an additional object file, or by providing it to the compiler as an additional source file to be built.
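Concretely, that means compiling the library sources together with the program. A sketch of the command, assuming the onion files were copied into usr/c/ alongside HTU21D.c as described in the post (adjust the paths to where the files actually live):

```shell
# Compile and link the i2c/debug implementations together with the program,
# so the linker can resolve i2c_writeBytes (note: no space after -I)
gcc -Iusr/c -o program usr/c/HTU21D.c usr/c/onion-i2c.c usr/c/onion-debug.c
```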
http://community.onion.io/topic/9/how-to-install-gcc/55
13 January 2009 05:35 [Source: ICIS news]
By Prema Viswanathan
SINGAPORE (ICIS news)--Indian polymers except for polystyrene (PS) have surged by as much as 12% in the past month largely due to production cuts that limited supply to the market, but the gains may not be sustained in the second quarter, industry sources said on Tuesday.
Low inventories coupled with restricted availability allowed polyethylene (PE) and polypropylene (PP) to gain about $90/tonne (€67.50/tonne) last Friday to $800-950/tonne CFR (cost and freight)
Over the same period, polyvinyl chloride (PVC) also inched up $40/tonne to $670-690/tonne CFR India, while PS fell by up to $80/tonne to $720-880/tonne CFR India, according to global chemical market intelligence service ICIS pricing.
PS prices have been on a downtrend amid adequate availability and weak demand.
“With crude prices falling and the economic outlook worsening by the day in
“It would be unwise for polymer suppliers to be too bullish,” he said.
At 12.07pm
Taking into account the weakness in vital sectors of the economy, some Indian producers were wary about being too bullish on the polymers market.
“Currently, we are seeing good demand for PP, PE and PVC from the agriculture and food packaging segments. But the construction and automotive segments, which have been major drivers of demand in the past, are now at a low ebb. This is a cause for concern,” said a polymer producer.
Tight supply and low inventories among end users have been major factors behind the recent price surge, said a polymer converter.
“Most suppliers of PP, for instance, say they have exhausted their allocations for January and are only willing to offer for February. Converters’ stocks, on the other hand, are quite low, as we have been buying only limited volumes in past months, in anticipation of lower prices,” the converter said.
Supply has been restricted mainly due to plant outages and production cuts in Asia and the
An expected easing of the tight supply could dampen market sentiment, especially for PE and PP, said a second polymer trader.
“There are several new PE and PP plants due to start up in the next few months in the
An end to the nation-wide truckers’ strike, which has resulted in a supply bottleneck for both polymers and plastic goods, would also accelerate deliveries and ease supply, said a second polymer converter.
Major polymer producers in
($1 = €0.75)
http://www.icis.com/Articles/2009/01/13/9183950/most-indian-polymers-rise-12-but-q2-gains-in-doubt.html
Table of Contents
I’ve written a page about polygon map generation with Voronoi[1] but it doesn’t explain how to code it. On this page I’m going to go through the basics of how to make maps like this with Javascript code examples:
Many people want to code Voronoi and Simplex Noise themselves. I usually use libraries. Here’s what I’ll use:
- Simplex Noise: jwagner/simplex-noise[2].
- Voronoi: mapbox/delaunator[3].
If you’re not using Javascript, there are noise libraries available for most languages, and Delaunator has been ported to many languages.
The first step is to load the libraries. You might use npm or yarn, but for this page I'm going to load them with script tags. Delaunator's documentation includes instructions for this but the Simplex Noise library does not, so I'm using unpkg[4] to load them:
<script src=""></script> <script src=""></script>
I also put my own source into a file and a place for it to draw:
<canvas id="map" width="1000" height="1000"></canvas> <script src="voronoi-maps-tutorial.js"></script>
1 Seed points#
With a square tile map we’d loop through to generate a tile for each location:
const GRIDSIZE = 25;

let points = [];
for (let x = 0; x <= GRIDSIZE; x++) {
    for (let y = 0; y <= GRIDSIZE; y++) {
        points.push({x, y});
    }
}
For Voronoi we need to give it locations for its polygons. Although we could use the same points for our Voronoi, one of the main reasons to use Voronoi is to break up the regular grid lines. Let’s add jitter to the locations:
const GRIDSIZE = 25;
const JITTER = 0.5;

let points = [];
for (let x = 0; x <= GRIDSIZE; x++) {
    for (let y = 0; y <= GRIDSIZE; y++) {
        points.push({x: x + JITTER * (Math.random() - Math.random()),
                     y: y + JITTER * (Math.random() - Math.random())});
    }
}
To see if it looks reasonable, let’s draw them:
function drawPoints(canvas, points) {
    let ctx = canvas.getContext('2d');
    ctx.save();
    ctx.scale(canvas.width / GRIDSIZE, canvas.height / GRIDSIZE);
    ctx.fillStyle = "hsl(0, 50%, 50%)";
    for (let {x, y} of points) {
        ctx.beginPath();
        ctx.arc(x, y, 0.1, 0, 2*Math.PI);
        ctx.fill();
    }
    ctx.restore();
}
drawPoints(document.getElementById("diagram-points"), points);
These points could be better, but this set seems good enough for now.
2 Voronoi cells#
Now we can construct the Voronoi cells around each of the seed points. It’s not obvious that Delaunator, a library for Delaunay triangulation, can construct Voronoi cells. For that, I refer you to the Delaunator Guide[5], which shows sample code for constructing Voronoi cells (without clipping).
The first step is to run the Delaunay triangulation algorithm:
let delaunay = Delaunator.from(points, loc => loc.x, loc => loc.y);
The second step is to calculate the circumcenters of the triangles. For various reasons I’m going to use a variant of Voronoi that uses centroids. The code on this page will work whether you use circumcenters or centroids, so experiment with different triangle centers and see which works best for you.
function calculateCentroids(points, delaunay) {
    const numTriangles = delaunay.halfedges.length / 3;
    let centroids = [];
    for (let t = 0; t < numTriangles; t++) {
        let sumOfX = 0, sumOfY = 0;
        for (let i = 0; i < 3; i++) {
            let s = 3*t + i;
            let p = points[delaunay.triangles[s]];
            sumOfX += p.x;
            sumOfY += p.y;
        }
        centroids[t] = {x: sumOfX / 3, y: sumOfY / 3};
    }
    return centroids;
}
Let’s construct an object to store everything:
let map = {
    points,
    numRegions: points.length,
    numTriangles: delaunay.halfedges.length / 3,
    numEdges: delaunay.halfedges.length,
    halfedges: delaunay.halfedges,
    triangles: delaunay.triangles,
    centers: calculateCentroids(points, delaunay)
};
And now we can draw the Voronoi cells. This code is based on the code in the Delaunator Guide[6].
function triangleOfEdge(e) { return Math.floor(e / 3); }
function nextHalfedge(e) { return (e % 3 === 2) ? e - 2 : e + 1; }

function drawCellBoundaries(canvas, map) {
    let {points, centers, halfedges, triangles, numEdges} = map;
    let ctx = canvas.getContext('2d');
    ctx.save();
    ctx.scale(canvas.width / GRIDSIZE, canvas.height / GRIDSIZE);
    ctx.lineWidth = 0.02;
    ctx.strokeStyle = "black";
    for (let e = 0; e < numEdges; e++) {
        if (e < halfedges[e]) {
            const p = centers[triangleOfEdge(e)];
            const q = centers[triangleOfEdge(halfedges[e])];
            ctx.beginPath();
            ctx.moveTo(p.x, p.y);
            ctx.lineTo(q.x, q.y);
            ctx.stroke();
        }
    }
    ctx.restore();
}
drawCellBoundaries(document.getElementById("diagram-boundaries"), map);
Hey, that looks pretty good, except for the edges of the map. Reload the page to see a new map. Let’s ignore the edges for now. We’ll figure out a solution later.
3 Island shape#
The next step is to assign a height map. I’ll adapt the techniques I use on my terrain-from-noise page. Instead of assigning elevation to every tile, we’ll assign elevation to every Voronoi region. Let’s put them into an array indexed by the region number.
const WAVELENGTH = 0.5;

function assignElevation(map) {
    const noise = new SimplexNoise();
    let {points, numRegions} = map;
    let elevation = [];
    for (let r = 0; r < numRegions; r++) {
        let nx = points[r].x / GRIDSIZE - 1/2,
            ny = points[r].y / GRIDSIZE - 1/2;
        // start with noise:
        elevation[r] = (1 + noise.noise2D(nx / WAVELENGTH, ny / WAVELENGTH)) / 2;
        // modify noise to make islands:
        let d = 2 * Math.max(Math.abs(nx), Math.abs(ny)); // should be 0-1
        elevation[r] = (1 + elevation[r] - d) / 2;
    }
    return elevation;
}
map.elevation = assignElevation(map);
Let’s draw these regions. I’ll again use code based on the Delaunator Guide[7].
function edgesAroundPoint(delaunay, start) {
    const result = [];
    let incoming = start;
    do {
        result.push(incoming);
        const outgoing = nextHalfedge(incoming);
        incoming = delaunay.halfedges[outgoing];
    } while (incoming !== -1 && incoming !== start);
    return result;
}

function drawCellColors(canvas, map, colorFn) {
    let ctx = canvas.getContext('2d');
    ctx.save();
    ctx.scale(canvas.width / GRIDSIZE, canvas.height / GRIDSIZE);
    let seen = new Set();  // of region ids
    let {triangles, numEdges, centers} = map;
    for (let e = 0; e < numEdges; e++) {
        const r = triangles[nextHalfedge(e)];
        if (!seen.has(r)) {
            seen.add(r);
            let vertices = edgesAroundPoint(delaunay, e)
                .map(e => centers[triangleOfEdge(e)]);
            ctx.fillStyle = colorFn(r);
            ctx.beginPath();
            ctx.moveTo(vertices[0].x, vertices[0].y);
            for (let i = 1; i < vertices.length; i++) {
                ctx.lineTo(vertices[i].x, vertices[i].y);
            }
            ctx.fill();
        }
    }
}
drawCellColors(
    document.getElementById("diagram-cell-elevations"),
    map,
    r => map.elevation[r] < 0.5 ? "hsl(240, 30%, 50%)" : "hsl(90, 20%, 50%)"
);
Ok, not great, but it works. It’ll take some tweaking to make the shapes the way you want, but the basics are there.
4 Biomes#
The next step is to make biomes. Again, I’ll follow the techniques from my terrain-from-noise page. The main idea is to add a second noise map:
function assignMoisture(map) {
    const noise = new SimplexNoise();
    let {points, numRegions} = map;
    let moisture = [];
    for (let r = 0; r < numRegions; r++) {
        let nx = points[r].x / GRIDSIZE - 1/2,
            ny = points[r].y / GRIDSIZE - 1/2;
        moisture[r] = (1 + noise.noise2D(nx / WAVELENGTH, ny / WAVELENGTH)) / 2;
    }
    return moisture;
}
map.moisture = assignMoisture(map);
Then we can use both the elevation and moisture map to decide on a biome color:
function biomeColor(map, r) {
    let e = (map.elevation[r] - 0.5) * 2,
        m = map.moisture[r];
    if (e < 0.0) {
        r = 48 + 48*e;
        g = 64 + 64*e;
        b = 127 + 127*e;
    } else {
        m = m * (1-e);
        e = e**4; // tweaks
        r = 210 - 100 * m;
        g = 185 - 45 * m;
        b = 139 - 45 * m;
        r = 255 * e + r * (1-e);
        g = 255 * e + g * (1-e);
        b = 255 * e + b * (1-e);
    }
    return `rgb(${r|0}, ${g|0}, ${b|0})`;
}
drawCellColors(
    document.getElementById("diagram-cell-biomes"),
    map,
    r => biomeColor(map, r)
);
Hey, this looks reasonable!
5 Next steps#
I hope this gets you started. There’s plenty more to do.
For example, you may have noticed the map at the top of the page has straight edges but the map we made has ragged edges. Here’s the trick: I added some extra points outside the map so that everything at the edges of the map has another point outside to connect to.
points.push({x: -10, y: GRIDSIZE/2});
points.push({x: GRIDSIZE+10, y: GRIDSIZE/2});
points.push({y: -10, x: GRIDSIZE/2});
points.push({y: GRIDSIZE+10, x: GRIDSIZE/2});
points.push({x: -10, y: -10});
points.push({x: GRIDSIZE+10, y: GRIDSIZE+10});
points.push({y: -10, x: GRIDSIZE+10});
points.push({y: GRIDSIZE+10, x: -10});
Other improvements to consider:
- organize the code to fit into your own project.
- improve point spacing by using better jitter, or “blue noise” with poisson-disk-sampling[8] or this code[9] from Martin Roberts
- remove the weird shapes at the edges of the map, or maybe clip them
- change the island shaping to use a better formula
- use more octaves for the elevation and moisture noise; see my guide to noise for maps
I describe more features and other things to try on my polygon map generator page[10].
All the code on the page also generates the diagrams on this page: voronoi-maps-tutorial.js. I used emacs org-mode to extract the code from this page into a javascript file, and then I run the javascript file on the page to generate the diagrams. The code shown is the same as the code that runs.
https://www.redblobgames.com/x/2022-voronoi-maps-tutorial/
v9fs: Plan 9 Resource Sharing for Linux¶
About¶
v9fs is a Unix implementation of the Plan 9 9p remote filesystem protocol.
This software was originally developed by Ron Minnich <rminnich@sandia.gov> and Maya Gokhale. Additional development by Greg Watson <gwatson@lanl.gov> and most recently Eric Van Hensbergen <ericvh@gmail.com>, Latchesar Ionkov <lucho@ionkov.net> and Russ Cox <rsc@swtch.com>.
The best detailed explanation of the Linux implementation and applications of the 9p client is available in the form of a USENIX paper:
Other applications are described in the following papers:
- XCPU & Clustering
- KVMFS: control file system for KVM
- CellFS: A New Programming Model for the Cell BE
- PROSE I/O: Using 9p to enable Application Partitions
- VirtFS: A Virtualization Aware File System pass-through
Usage¶
For remote file server:
mount -t 9p 10.10.1.2 /mnt/9
For Plan 9 From User Space applications:
mount -t 9p `namespace`/acme /mnt/9 -o trans=unix,uname=$USER
For server running on QEMU host with virtio transport:
mount -t 9p -o trans=virtio <mount_tag> /mnt/9
where mount_tag is the tag associated by the server to each of the exported mount points. Each 9P export is seen by the client as a virtio device with an associated “mount_tag” property. Available mount tags can be seen by reading /sys/bus/virtio/drivers/9pnet_virtio/virtio<n>/mount_tag files.
Behavior¶
This section aims at describing 9p ‘quirks’ that can be different from a local filesystem behaviors.
- Setting O_NONBLOCK on a file will make client reads return as early as the server returns some data instead of trying to fill the read buffer with the requested amount of bytes or end of file is reached.
Resources¶
Protocol specifications are maintained on github:
9p client and server implementations are listed on
A 9p2000.L server is being developed by LLNL and can be found at
There are user and developer mailing lists available through the v9fs project on sourceforge.
News and other information is maintained on a Wiki.
Bug reports are best issued via the mailing list.
For more information on the Plan 9 Operating System check out
For information on Plan 9 from User Space (Plan 9 applications and libraries ported to Linux/BSD/OSX/etc) check out
https://www.kernel.org/doc/html/latest/filesystems/9p.html
JPT 2.5.0 is almost 100% compatible with JPT 2.4.0 but not quite. One of the key forces leading to this release was the desire to permit a user to define simple functions in the JPT interactive parser as well as to enable variable definitions that may be either persistent or temporary. This required refactoring of the parser classes and renaming of the old AbstractParser class to be BaseParser, a class that is no longer abstract. All algorithmic code migrated down to BaseParser from class JPTParser.
The purpose of JPTParser is now to define built-in functions, operations, and constants. A new exponentiation operator denoted by a caret has been added to class JPTParser.
The class SimpleFunctionBuilder defines a GUI that permits a user to define simple functions interactively and immediately test them in an expression evaluation pane.
The class SimpleFunctionBuilderWithIO adds the ability to save simple function definitions to disk so that these definitions may be later recalled for use in interactive expression evaluation.
Access the SUN Java 1.4.2 / 1.5.0 API Documentation online
enough for classroom presentation.
The following class shows a typical starter class for use with the JPF. This class has a large number of useful imports so that a student need not waste time searching for a necessary import. The class also has several comments that are useful for a beginner but may be deleted by a more experienced user.
The JPF starter class
Methods.java.
The structure of the central portion of the starter code
in
Methods.java is as follows with comments
removed:
public class Methods extends JPF {
    public static void main(String[] args) {
        new Methods();
    }
    // place methods and data below
}
The critical issue is that Methods extends JPF and that the constructor call, new Methods(), automatically calls the default constructor for the JPF class which does all of the magic. The JPF constructor scans the Methods class looking for what we call simple public methods. By simple we mean methods whose parameters and return value may be expressed using one-line strings that are simple enough to be typed in by the user. For such methods, the JPF constructor automatically creates a button in its GUI (which is also created automatically) and the button executes its associated method. If the method has no arguments and void return it is simply executed. If the method has arguments and/or a return value, then JPF will automatically generate an auxiliary panel as needed to handle the user interaction.
See the notes in the JPF Source Files documentation for further details.
Access detailed instructions for setting up JPT in Eclipse here. Numerous screen snapshots are provided as slide shows.
New demonstration programs will be added to this site as time permits. Older demos may be found by following the links to the older versions of JPT.
We teach a 1 SH course Freshman Honors Seminar that has many simple demos of how to use the Java Power Tools.
The Java Power Tools team:
To send e-mail to the Java Power Tools team, use: jpt@ccs.neu.edu
Our postal address and fax number are given below:
http://www.ccs.neu.edu/jpt/jpt_2_5/details.htm
Discussion in 'Domain Names & Parking' started by ethan1, Feb 16, 2010.
Can anyone give me some clever article directory site names?
.com or .info
thanks
any ideas?
depends on the niche of articles - try getting something like:
<niche>articles
or
<niche>dir
or
about<niche>
or something like that
I want my site to have like everyones articles of all kinds.
well that's kind of a broad niche lol
lot of options for you then - don't buy that "all the good names are taken" business (a lot of them are, but there's still some interesting options)
one thing I've noticed is there are some interesting name hacks in the .net namespace still available for hand-reg
for example, informationpla.net is available right now
or you could find an uninteresting name and then build a subdomain out on it; like grab ticle.us (available right now) and then build out the subdomain ar.ticle.us
https://www.blackhatworld.com/seo/article-directory-site-names.172501/
(Extendable) project management for atom.
This package is still under development.
In Atom go to View > Toggle Sweet Projects View or press ctrl + alt + p.
To open a project double click the project tile. The project will load in a new window.
Right-click a project tile and select Project Settings to enter a project name and a url.
The settings are stored in a .sweetproject file at the project's root. You can ignore them in your .gitignore or keep them to share project settings.
First of all you have to register a json definition like the one shown below under a unique namespace. Always use your package name as the namespace.
The following code will register a json for the package your-package-name.
The settings dialog of each project will now display a new section with the title Your Package Name To Be Displayed and a simple text input field with the label My Input and a default value of hi.
if (atom.sweetprojects) {
    atom.sweetprojects.setInputs('your-package-name', {
        package: 'your-package-name',
        label: 'Your Package Name To Be Displayed',
        inputs: [
            {name: 'myInput', label: 'My Input', value: 'hi', type: 'text', placeholder: 'Insert text here'}
        ]
    });
}
You can also have the following types: checkbox, number, select.
Inputs with the type select must have an additional property options:

{name: 'mySelect', label: 'My Select', value: '', type: 'select', options: [
    {value: 'option1', label: 'Option1'},
    {value: 'option2', label: 'Option2'},
    {value: 'option3', label: 'Option3'}
]}
You can get the value the user set for the active project by calling the getValue method:
The first parameter is the namespace (your package name). The second one is the name of the input field, as defined above.
if (atom.sweetprojects) {
    if (atom.sweetprojects.isActive('your-package-name')) {
        var text = atom.sweetprojects.getValue('your-package-name', 'myInput');
        // the value of the variable text is the text the user set for this
        // field on the project loaded when this code is executed
    }
}
The isActive method returns true if your projects section is turned on for this project, and false if not. The getValue method will always return the value set for the active project.
Don't forget to check the availability of the sweetprojects api. Simply check if the sweetprojects property exists in the atom object by doing if(atom.sweetprojects).
Sometimes input fields depend on the value of other fields. For example, you may only want the user to enter a password if they selected use password in your selectbox before. You can do that with conditions. Each input field can have an optional condition property where you can define under which condition the field is displayed. A condition string must have the following format:

// [input-name] [operator] [value]
condition: 'mySelect=option1'
accepted operators:
- = equal
- > higher (numerical)
- < lower (numerical)
- >= higher or equal (numerical)
- <= lower or equal (numerical)
- <> not equal
You can combine multiple conditions into more complex conditions by using and and or. And-operators (&) have a stronger binding than or-operators (|).
{name: 'complexInput', label: 'input with condition', value: '', type: 'text', placeholder: '', condition: 'mySelect=option1|mySelect=option2&myInput=hi'}
The condition of the example above causes the input field complexInput to be hidden; it becomes visible if either Option1 is selected in the field mySelect, or Option2 is selected and the value of the input field myInput is equal to 'hi'.
Let me know if you're using the sweetprojects api in your own packages so I can add a link to your project!
Leonard Nürnberg - Initial work - LennyN95
This project is licensed under the MIT License - see the LICENSE.md file for details
Donation is welcome :)
Buy me a coffee
https://atom.io/packages/sweetprojects
Item 5: Avoid Creating Unnecessary Objects
It is often appropriate to reuse a single object instead of creating a new functionally equivalent object each time it is needed. Reuse can be both faster and more stylish. An object can always be reused if it is immutable (Item 15).
As an extreme example of what not to do, consider this statement:
String s = new String("stringette"); // DON'T DO THIS!
The statement creates a new String instance each time it is executed, and none of those object creations is necessary. The argument to the String constructor ("stringette") is itself a String instance, functionally identical to all of the objects created by the constructor. If this usage occurs in a loop or in a frequently invoked method, millions of String instances can be created needlessly.
The improved version is simply the following:
String s = "stringette";
This version uses a single
String instance, rather than creating a new one each time it is executed. Furthermore, it is guaranteed that the object will be reused by any other code running in the same virtual machine that happens to contain the same string literal [JLS, 3.10.5].
You can often avoid creating unnecessary objects by using static factory methods (Item 1) in preference to constructors on immutable classes that provide both. For example, the static factory method
Boolean.valueOf(String) is almost always preferable to the constructor
Boolean(String). The constructor creates a new object each time it's called, while the static factory method is never required to do so and won't in practice.
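A quick sketch of the difference (the reuse behavior of valueOf is part of the platform; the class name here is just for illustration):

```java
public class BooleanReuse {
    public static void main(String[] args) {
        // valueOf returns the cached Boolean.TRUE/FALSE constants,
        // so repeated calls yield the very same object.
        Boolean a = Boolean.valueOf("true");
        Boolean b = Boolean.valueOf("true");
        System.out.println(a == b);  // prints true: same cached instance
    }
}
```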
In addition to reusing immutable objects, you can also reuse mutable objects if you know they won't be modified. Here is a slightly more subtle, and much more common, example of what not to do. It involves mutable
Date objects that are never modified once their values have been computed. This class models a person and has an
isBabyBoomer method that tells whether the person is a "baby boomer," in other words, whether the person was born between 1946 and 1964:
public class Person {
    private final Date birthDate;

    // Other fields, methods, and constructor omitted

    // DON'T DO THIS!
    public boolean isBabyBoomer() {
        // Unnecessary allocation of expensive object
        Calendar gmtCal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
        gmtCal.set(1946, Calendar.JANUARY, 1, 0, 0, 0);
        Date boomStart = gmtCal.getTime();
        gmtCal.set(1965, Calendar.JANUARY, 1, 0, 0, 0);
        Date boomEnd = gmtCal.getTime();
        return birthDate.compareTo(boomStart) >= 0 &&
               birthDate.compareTo(boomEnd) < 0;
    }
}
The
isBabyBoomer method unnecessarily creates a new
Calendar,
TimeZone, and two
Date instances each time it is invoked. The version that follows avoids this inefficiency with a static initializer:
class Person {
    private final Date birthDate;

    // Other fields, methods, and constructor omitted

    /**
     * The starting and ending dates of the baby boom.
     */
    private static final Date BOOM_START;
    private static final Date BOOM_END;

    static {
        Calendar gmtCal = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
        gmtCal.set(1946, Calendar.JANUARY, 1, 0, 0, 0);
        BOOM_START = gmtCal.getTime();
        gmtCal.set(1965, Calendar.JANUARY, 1, 0, 0, 0);
        BOOM_END = gmtCal.getTime();
    }

    public boolean isBabyBoomer() {
        return birthDate.compareTo(BOOM_START) >= 0 &&
               birthDate.compareTo(BOOM_END) < 0;
    }
}

The improved version creates the Calendar, TimeZone, and Date instances only once, when the class is initialized. The original version takes 32,000 ms for 10 million invocations, while the improved version takes 130 ms, which is about 250 times faster. Not only is performance improved, but so is clarity. Changing
boomStart and
boomEnd from local variables to static final fields makes it clear that these dates are treated as constants, making the code more understandable. In the interest of full disclosure, the savings from this sort of optimization will not always be this dramatic, as
Calendar instances are particularly expensive to create.
If the improved version of the
Person class is initialized but its
isBabyBoomer method is never invoked, the
BOOM_START and
BOOM_END fields will be initialized unnecessarily. It would be possible to eliminate the unnecessary initializations by lazily initializing these fields (Item 71) the first time the
isBabyBoomer method is invoked, but it is not recommended. As is often the case with lazy initialization, it would complicate the implementation and would be unlikely to result in a noticeable performance improvement beyond what we've already achieved (Item 55).
In the previous examples in this item, it was obvious that the objects in question could be reused because they were not modified after initialization. There are other situations where it is less obvious. Consider the case of adapters [Gamma95,p. 139], also known as views. An adapter is an object that delegates to a backing object, providing an alternative interface to the backing object. Because an adapter has no state beyond that of its backing object, there's no need to create more than one instance of a given adapter to a given object.
For example, the
keySet method of the
Map interface returns a
Set view of the
Map object, consisting of all the keys in the map. Naively, it would seem that every call to
keySet would have to create a new Set instance, but every call to
keySet on a given
Map object may return the same Set instance. Although the returned
Set instance is typically mutable, all of the returned objects are functionally identical: When one of the returned objects changes, so do all the others because they're all backed by the same
Map instance. While it is harmless to create multiple instances of the
keySet view object, it is also unnecessary.
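A small sketch of this behavior (the helper method and class name are illustrative; HashMap in particular caches its key-set view):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class KeySetDemo {
    // True when two keySet() calls on the same map return the same view object.
    static boolean returnsSameView(Map<String, Integer> map) {
        return map.keySet() == map.keySet();
    }

    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        System.out.println(returnsSameView(map));  // prints true for HashMap

        // The view is backed by the map: later changes show through it.
        Set<String> keys = map.keySet();
        map.put("b", 2);
        System.out.println(keys.contains("b"));    // prints true
    }
}
```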
There's a new way to create unnecessary objects in release 1.5. It is called autoboxing, and it allows the programmer to mix primitive and boxed primitive types, boxing and unboxing automatically as needed. Autoboxing blurs but does not erase the distinction between primitive and boxed primitive types. There are subtle semantic distinctions, and not-so-subtle performance differences (Item 49).
Consider the following program, which calculates the sum of all the positive
int values. To do this, the program has to use long arithmetic, because an
int is not big enough to hold the sum of all the positive int values:
// Hideously slow program! Can you spot the object creation?
public static void main(String[] args) {
    Long sum = 0L;
    for (long i = 0; i < Integer.MAX_VALUE; i++) {
        sum += i;
    }
    System.out.println(sum);
}

The program gets the right answer, but the variable sum is declared as a Long instead of a long, so each addition in the loop boxes the result into a new Long instance. Changing the declaration of sum from Long to long reduces the runtime from 43 seconds to 6.8 seconds. The lesson is clear: prefer primitives to boxed primitives, and watch out for unintentional autoboxing.
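The fix is the one-character change to a primitive long accumulator, sketched here with the loop limit as a parameter (a hypothetical helper, so it can be exercised with small inputs) rather than hard-coded to Integer.MAX_VALUE:

```java
public class SumDemo {
    // sum is a primitive long, so no Long instance is created per addition.
    static long sumUpTo(long limit) {
        long sum = 0L;
        for (long i = 0; i < limit; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumUpTo(10));  // prints 45 (0 + 1 + ... + 9)
    }
}
```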
This item should not be misconstrued to imply that object creation is expensive and should be avoided. On the contrary, the creation and reclamation of small objects whose constructors do little explicit work is cheap, especially on modern JVM implementations. Creating additional objects to enhance the clarity, simplicity, or power of a program is generally a good thing.
Conversely, avoiding object creation by maintaining your own object pool is a bad idea unless the objects in the pool are extremely heavyweight. The classic example of an object that does justify an object pool is a database connection. The cost of establishing the connection is sufficiently high that it makes sense to reuse these objects. Also, your database license may limit you to a fixed number of connections. Generally speaking, however, maintaining your own object pools clutters your code, increases memory footprint, and harms performance. Modern JVM implementations have highly optimized garbage collectors that easily outperform such object pools on lightweight objects.
The counterpoint to this item is Item 39 on defensive copying. Item 5 says, "Don't create a new object when you should reuse an existing one," while Item 39 says, "Don't reuse an existing object when you should create a new one." Note that the penalty for reusing an object when defensive copying is called for is far greater than the penalty for needlessly creating a duplicate object. Failing to make defensive copies where required can lead to insidious bugs and security holes; creating objects unnecessarily merely affects style and performance.
I noticed that when I flashed my application with a sample code for AdaFruit’s NFC module, it breathes cyan first but within a minute or so switches to breathing green. Any thoughts on how I could troubleshoot and figure out why the cloud connection is disconnected?
Photon Breathing green after flashing
The code that is causing me trouble is right here.
I flashed another application that just reads analog inputs from sensors and that worked fine and did not cause the cloud connection to be disconnected. Any pointers will be appreciated.
@knspriya, a breathing green LED means that wifi is connected but not cloud (breathing cyan). Without system threading enabled, the Photon requires that the “background” process be allowed to run at least once every 10 secs or the cloud connection will be lost. This can be done by allowing loop() to “end” or by calling Particle.process().
If you look at the demo code you are trying to run, you will see some while() statements preventing loop() from exiting. The first waits for
Serial.available() while the next waits on
!versiondata. You need to add a call to Particle.process() in both those while statements. Give that a shot and let me know how it goes.
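A minimal sketch of that change (Particle device firmware, adapted from the demo's serial wait loop; shown for illustration, not a drop-in file):

```cpp
// Keep the cloud connection serviced while blocking on serial input.
// Particle.process() lets the system "background" task run, so the
// Photon stays breathing cyan instead of dropping to breathing green.
while (!Serial.available()) {
    Particle.process();
}
```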
Note that in the near future, with system threading fully supported, the wifi and cloud connections will be maintained in the background without the need for user intervention.
@peekay123, thanks for coming to the rescue, as always :-). I will try that out tomorrow and report back. Interestingly, I didn't need this modification when I tried the same code a couple of months back. The only thing that has changed is that I now have the Photon on a PCB with additional NFC circuitry.
As a side question, does the Particle Web IDE ever force a firmware update on the Photon? I did not change the firmware and would like to verify that it's the same version with which it was shipped. Is there a CLI where I can explore these things further?
Never mind, I looked through the forums and it looks like a firmware update was done automatically when I flashed the build using the Web IDE. Now comes the question as to what changes were made that now require the user to call Particle.process(). :-)
@knspriya, in your code you called
SPARK_WLAN_Loop() which is the “old” Particle.process(). This function has been deprecated in the new firmware. HOWEVER, note that in your code,
SPARK_WLAN_SETUP is not defined so
SPARK_WLAN_Loop() is never called!
No changes were made in the 0.4.6 firmware that require you to call Particle.process() more often. Rather, we detect when particle process hasn’t been called and change the LED color to reflect that, since the cloud will have already disconnected the device by that point.
If your app does eventually resume then the cloud will automatically connect again.
So look for code that is blocking loop. Alternatively, add
SYSTEM_THREAD(ENABLED) to the top of your sketch to enable multithreading.
And this is the answer to a similar issue I am currently seeing in the generic Adafruit_PN532 library I've been working on.
Now I suspect that, as I can connect to the Photon via USB with the Particle CLI, I can just flash it with another .bin downloaded from the cloud and then it will reconnect.
Just to confirm that I resolved the cause of my own issue: in the Adafruit_PN532 readmifareClassic example, it turns out that the code to check for serial availability was constantly failing, so I commented it out.
// this is not useful feature to loop around …
while (!Serial.available()) {
#ifdef SPARK_WLAN_SETUP
    SPARK_WLAN_Loop(); // Open serial terminal and Press ENTER.
#endif
    Serial.println("Serial was not available\n");
}
Effectively this code never completed, so the process never responded on the serial, despite repeatedly printing that Serial was not available.
Somewhere between making better use of Particle.process() and reducing loops that can run infinitely, I got myself out of a hole in which I could not get the Photon online.
So I am leaving this lesson learned here just in case anyone else stumbles upon the issue in the future and wonders what I did next.
This is a useful construct during debugging to ensure you really catch all serial output, but sure enough for production use it should be commented out
This waits for the user (debugging person) to hit any key in serial monitor to indicate: “Now I’m paying attention, go on …”
Good point; I’ll add my comments and I pulled up the debug by picking up the serial.println in the loop then deciding to comment it out to keep the code falling forward.
This was exactly my problem. I thought my devices were bricked, but the saviour “particle flash --usb tinker” from the particle-cli brought them all back.
Since this thread has been brought to the surface again, I’ll add my breathing green tip sheet:
Particle Breathing Green Tips
Breathing green mode can be confusing to new Particle programmers but fortunately it’s usually easy to recover from.
I can’t flash my Photon anymore.
Do not unclaim your device
This rarely if ever fixes anything, and it sometimes can make things much worse. Resist the urge to do this. It never fixes a breathing green problem.
Cause 1: Blocking the loop
In this simple program, you'll breathe cyan, then about 10 seconds later, you'll go to breathing green.
Solution 1: Add some Particle.process() calls.
Solution 2: Enable SYSTEM_THREAD(ENABLED).
Side note: Wi-Fi only mode.
[SOLVED] Cloud actions make photon to stop working
Good day
I am currently having a similar problem with my Photon. It's breathing green, and when I try to enter safe mode using solution 1 it's blinking orange (part yellow, part red). I am new to Particle and a novice programmer. I had done a few simple projects with the Arduino Uno, but nothing involving SPI or I2C communications. I have no idea how to restore the operation of the microcontroller. I have been sitting with this problem for 2 hours after trying to upload the following code to the Photon:
//Portable Weather Station: BME280
#include <Wire.h>
#include <SPI.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_BME280.h>

#define BME_SCK A3
#define BME_MISO A4
#define BME_MOSI A5
#define BME_CS A2

);

void setup() {
    Serial.begin(9600);
    Serial.println(F("BME280 test"));
    // if (!bme.begin(0x76)) {); }
}
This is code from the Adafruit BME280 library which I amended to make it applicable to the Photon using Particle Dev. Please help.
I'm using Saxon to query my XML documents, so I'd like to look at the XML schema processor that is provided in Saxon:
Is there an example in the source documentation that represents an XML schema as a tree structure?
No, I'm afraid there aren't any examples of navigating the schema model.
You'll have to work mainly from the Javadoc, which is at
You will need the schema-aware version of Saxon (Saxon-SA) of course.
The SchemaAwareConfiguration object has a method addSchemaSource() that
allows you to load a schema document supplied as any JAXP Source object. You
can then use methods getElementDeclaration() and getSchemaType() to get
global element declarations and global types, respectively, if you know
their names. You can cast the results to a more specific type such as
com.saxonica.schema.ElementDecl or com.saxonica.schema.UserComplexType.
You can also call getSchema() to get the schema components for a particular
namespace (this will return an instance of com.saxonica.PreparedSchema), or
getSuperSchema() to get a PreparedSchema containing all schema components
for all namespaces.
To navigate around the structure you will need a fairly good understanding
of the schema component model. For example, to find out all the possible
children of a given element, you need first to find the SchemaType
representing the type of the element. If this is a ComplexType, you then
need to obtain the Particle that represents its content model. If the
Particle is a ChoiceCompositor or a SequenceCompositor you can then get the
content model of this compositor, and by following this structure
recursively you can eventually get to the Element particles in the content
model (you might also find Wildcard particles, of course). You may also need
to navigate from a ComplexType to other types derived from it by restriction
or by extension.
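The recursive descent described above can be sketched like this. Note these interfaces are simplified, hypothetical stand-ins, not the actual Saxon-SA classes; the point is only the shape of the recursion from a content model down to its element particles:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical, simplified schema component model (not Saxon-SA's API).
interface SchemaParticle {}

class ElementParticle implements SchemaParticle {
    final String name;
    ElementParticle(String name) { this.name = name; }
}

class CompositorParticle implements SchemaParticle {  // choice or sequence
    final List<SchemaParticle> children;
    CompositorParticle(List<SchemaParticle> children) { this.children = children; }
}

public class SchemaWalk {
    // Collect the names of all element particles reachable from a content
    // model, descending recursively through nested compositors.
    static List<String> possibleChildren(SchemaParticle p) {
        List<String> names = new ArrayList<>();
        collect(p, names);
        return names;
    }

    private static void collect(SchemaParticle p, List<String> names) {
        if (p instanceof ElementParticle) {
            names.add(((ElementParticle) p).name);
        } else if (p instanceof CompositorParticle) {
            for (SchemaParticle child : ((CompositorParticle) p).children) {
                collect(child, names);
            }
        }
        // A real walk would also handle wildcard particles and follow
        // types derived by restriction or extension.
    }

    public static void main(String[] args) {
        SchemaParticle model = new CompositorParticle(Arrays.asList(
            new ElementParticle("title"),
            new CompositorParticle(Arrays.asList(
                new ElementParticle("author"),
                new ElementParticle("editor")))));
        System.out.println(possibleChildren(model));  // prints [title, author, editor]
    }
}
```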
If you let me know what you actually want to find out from the schema then I
may be able to point you to some shortcuts.
Michael Kay
Sent: 02 August 2006 13:29
To: Michael Kay; jdom-interest@...
Subject: Re : [jdom-interest] Parsing an XML Schema
Is there an XML Schema processor that makes this analysis ?
Wikibooks:Staff lounge/Archive 21
From Wikibooks, the open-content textbooks collection
[edit] Manuals without courses
A textbook is defined as a manual for a course of study. It seems that this definition would prevent someone from writing a manual for a new course of study and mounting it at Wikibooks until they have run and advertised the course. It would also prevent old textbooks for now-defunct courses of study from being mounted (e.g. super-8 cinematography, servicing the Triumph Bonneville, etc.).
We seem to have a choice: exclude manuals that have no courses (such as some video game guides and cutting-edge guides and manuals), allow them to be shown like any other textbook, or put them in a bookshelf called "Manuals without courses of study". Of the three options, exclusion seems the worst. RobinH 13:53, 2 June 2006 (UTC)
- Don't be too quick to declare that certain topics don't have a course that is set up. And Wikibooks is not just oriented toward college textbooks either. Can you cite a specific book (on Wikibooks right now) that is not going to be covered? I see merit in a Wikibook about servicing a Model-T Ford, as it is something that would be both fascinating from an historical perspective as well as being something current and up-to-date by noting modern machining methods and parts sources for trying to rebuild or recreate one of these historical automobiles. Otherwise, Wikibooks is simply too new of a project to have very old content that needs to be archived or deprecated (aka moved to Wikisource or something like that) simply because it has become obsolete. Come back in 10 years and that may be a different issue altogether here on Wikibooks. --Rob Horning 04:48, 3 June 2006 (UTC)
I agree with Rob here. A textbook does not need to be related to a particular course of study. It does need to adopt a textbook style (and I deliberately mean a rather than the here, as there is a multitude of textbook styles), and it has to have content worthy of study. Our scope is textbooks. (Indeed, Robin, you continue to use manuals as well, that is wrong. We should not have manuals unless.....they also fit the definition of a textbook.) Textbook should be given its ordinary English meaning (the one you think it means even though it's not written down in the dictionary and it's difficult to express exactly what it is). It should be given a wide, but not strained meaning. And that does not require the book to be geared up to a course of study, Jguk 08:15, 3 June 2006 (UTC)
- In which case are we all agreed that video game guides are appropriate for Wikibooks? They are often manuals without courses of study. RobinH 15:34, 3 June 2006 (UTC)
- The fact of the matter is; Wikibooks right now has not defined Textbooks, yet so many books have been deleted because they are "Not textbooks". Those books did fit the criteria for WB:WIW. If you think there is a solid definition to be set here, then I think WB:WIW should be clarified before any further deletions take place.
- Second: what the heck is a "textbook style" book? English 3200 consists of nothing but questions and exercises. There are no explanations for the answers, no "text" to be read except exercise after exercise. Yet, it is used as a book for some college courses. Is "usable" for a college course good enough to be named a "textbook"? And if not, then why not?
- JGuk, all your responses on WB:VFD consist of a single line: "Not a textbook". There is an issue here, and quite a big one too. There is no metric to decide what is and what isn't a textbook. All I have to do is say "Yes, it is a textbook" to any of those WB:VFD and you can imagine exactly how lively a debate on this will be.
- I can label German as "not a textbook", and there is no contradiction to that. In fact, perhaps I should to just show exactly how bad this situation is on Wikibooks, and how freakishly bad the argument of "not a textbook" is. --Dragontamer 16:54, 3 June 2006 (UTC)
I would define a textbook as a manual or guide to an existing and recognised body of knowledge. It would include high school physics, a guide to the streets of London and motorcycle maintenance manuals. If we adopt this definition we will not be in danger of rejecting obvious textbooks such as "A guide to the mollusca of the New Jersey shore", but we will be in danger of accepting slightly dubious textbooks such as guides to video games or to small villages in Serbia. But disk space is cheap and "Wikibooks is not paper", so does it really matter if we go too wide in our scope? It will certainly matter if we go too narrow and reject important new fields of study or rule out books that would be excellent study companions or resources for courses.
The definition also includes blue collar books. If I was running a motorcycle garage and someone brought in an old bike I would be over the moon to find a "Maintaining the BSA Bantam" in Wikibooks. RobinH 16:31, 4 June 2006 (UTC)
[edit] Request for comments about a deletion
I'd like to ask for comments about the deletion of Transwiki:List of tongue-twisters. It was a Wikipedia article which was proposed for moving to Wikibooks, and the result of the debate was Transwiki (see [1]), which happened on 18:52, 29 May 2006 (UTC) ([2]), but User:Jguk deleted it on the same day (18:29, 29 May 2006, [3]). I'm afraid it would be unfair to act against community consensus since nobody proposed, supported or agreed on deletion on the discussion page. I asked him for reversion on his user page and I'm currently waiting for his response but in the meantime I'd be happy to hear your comments. This list was a considerable collection made by several people during years, something that is certainly of interest by illustrating the differences of languages in terms of phonology, morphonology and morphophonemic, and I really doubt how it could be "not suitable for wikibooks", as Jguk argued at the deletion. Adam78 15:42, 2 June 2006 (UTC)
- I have emailed Adam a copy of the deleted text and noted why it is unsuitable for wikibooks and should not have been sent here by wikipedia in the first instance. I am willing to extend this courtesy to others (provided they are not too many), but really an admin on wikipedia should undelete the content from there to allow the page to be transwikied to a suitable location. Unfortunately, this is an example of wikipedians not appreciating that wikibooks' scope is textbooks and not books more generally.
- This will need to be sorted sometime. I think my Wikibooks mission for July might be to go back to Wikipedia to educate them about us, and encourage some of the WikiProject participants in particular, to come here and write good quality textbooks (particularly some directed at school syllabuses), Jguk 17:27, 2 June 2006 (UTC)
- I think this should be done sooner rather than later. One of the things I did on Wikipedia was fix up the {{movetowikibooks}} template on Wikipedia to explicitly note that if the content fails to meet the WB:WIW criteria for acceptable content, performing a transwiki to Wikibooks is essentially deleting it from all Wikimedia projects. I think this message needs to be hammered home more perhaps, and the warning issued formally on the Wikipedia Village Pump.
As far as having WikiProjects becoming involved with individual Wikibooks, this is already happening in a number of instances. The Harry Potter Wikiproject, for example, dealt with the Muggles' Guide quite effectively, and the Astronomy Wikiproject has also added some substance to the Astronomy Wikibook as well. This is something that should be happening more, and it would be appreciated if there are some people on Wikipedia who are experts on a topic to come to Wikibooks and help flesh out some of the Wikibooks as well. I've had some drive-by (minor proofreading and technical review) of the book I've written, Serial Programming, which has several links on Wikipedia as well (most that have been put in by others than myself). In this case it would be useful for specific Wikiprojects that might be able to help out a particular Wikibook to come in and join with the effort. A general appeal to Wikipedia WikiProjects, letting them know that Wikibooks is something that can help out with their efforts, is something that might also have some merit as well. --Rob Horning 04:41, 3 June 2006 (UTC)
Why so keen to delete? Surely this list can be classified suitably along with similar lists such as reviews of cliches etc and merged into a single book about the peculiarities of English by someone at some time in the future. RobinH 16:38, 4 June 2006 (UTC)
- I agree. Pages should stay in the transwiki pseudo-namespace until needed by a book. "Glurch", for example, could have been added to Wikijunior Big Book of Fun Science Experiments. --hagindaz 16:53, 4 June 2006 (UTC)
I started my Accounting Interactions fragment specifically *because* it ran opposite to common courses currently running, but yet felt more useful because it was based on working world corporate practices. As one of the most stolid subjects, Accounting shouldn't be close to deletion, but I would be unhappy if my particular volume were marked to delete "because no course" yet runs the subject. Shouldn't we encourage innovative texts, and *then* inspire professors to try the subject? --TaoPhoenix
- This is rather amusing. Wikiversity is currently in danger of activating with a mandate for "no courses". Meanwhile some people are rather insistent that "textbooks" created at Wikiversity must absolutely be transferred to Wikibooks when ready. So under currently proposed Board mandated/approved policies at the two Wikimedia sites, Wikibooks and Wikiversity, a popular learning trail at the impending Wikiversity could run for a few months or years, have the "textbook" transwikied to Wikibooks, and then the active studiers at Wikiversity could be shocked to find their locally created textbook deleted from Wikibooks because no "course" is using it. I hope you guys have been around long enough for Roberth to vouch for your handles, or the special committee and/or the Wikimedia Foundation Board is likely to suspect I made this up to back up a point I have been pounding on regarding local autonomy and policy making. I go now to stash an electronic copy of all pertinent text and provide the link to the powers that be appointed. Hopefully they will enjoy presenting it to Board Members at an appropriate time. 8) user:lazyquasar
[edit] Citing Sources
Hi, I'm new from Wikipedia - so I'm not sure if you guys work the same way. I recently added this addition to an article. Is it appropriate to add a source in Wikibooks like it is in Wikipedia? Is footnotes appropriate, because I can't really do anything else in an infobox. Thanks, --DanielBC 04:00, 3 June 2006 (UTC)
[edit] Help with setting up a "wikimanual"
I've just started a book called A Wikimanual of Gardening, and would like some help getting it working properly. The book is inspired by the recurring discussions over at WP about getting "how-to" information out of articles dealing with plants, insects, soil, etc.
- It would be wise to retitle to "A Wiki Textbook of Gardening". "Manuals" are frequently deleted outright, or voted for deletion (VfD) and then deleted. Local policy is TEXTBOOKS ONLY. user:lazyquasar
What I'm hoping to do is enable them to create chapters in an organized wikibook, rather than "dumping" information through the transwiki process. I've opened a chapter on that (A_Wikimanual_of_Gardening/How_to_Transwiki_Information_to_this_Book), perhaps I could get a few pointers on that part of things from you folks?
Also, I'm moving a couple other books (ones I started) to this one as chapters. Still figuring that part out. Johnny 17:01, 3 June 2006 (UTC)
- We already have a Gardening textbook. Can you not merge your work into here? Unless you're trying to appeal to a different audience than the Gardening book, we really shouldn't have two books on the same subject, Jguk 07:54, 4 June 2006 (UTC)
- Forgive me for asking; I am simply hanging out waiting for Wikiversity. Can you cite a policy such as NO DUPE, or is this a personal interpretation of the best way to proceed locally at the moment? You are contending that there should only be one textbook on Gardening at Wikibooks? user:lazyquasar
- From what I understand from the Talk:Gardening page, that book has serious issues and was slated either for deletion or to be moved to wikisource. Seems better to start anew than work on a book with that hanging over it. Johnny 13:43, 4 June 2006 (UTC)
The Gardening book really doesn't seem to have much in it when I look at the individual chapters, it's more of a commentary than a guide, and much is quite out of date. I'll keep looking though. Is deletion compulsory? Some readers might find it interesting. (BTW: Why would having "manual" in the title attract an RfD? It's a how-to book, which is just a long-winded way to say "manual".) Johnny 11:24, 4 July 2006 (UTC)
[edit] Please remove bookshelf protection
This is really not working. Genuine contributors are being forced to work around bookshelf edit protection, which was instituted due to there being no objections. Several contributors have added books that they started to Wikibooks:Requested books and others have added {{cleanup-link}} to their own books. I'm sure many other users think they either can't create new books or have to go through some requests process (no one reads help pages). I have every bookshelf watched and usually check my watchlist once a day, so I'll be able to revert any requests or vandalism fairly quickly. --hagindaz 03:21, 4 June 2006 (UTC)
- Any comments? As protection was instituted due to no objections being raised, could it be removed until some discussion occurs on the issue (as I am now objecting to the policy)? Protection seems trivial and insignificant, but I believe that the growth of Wikibooks is being stunted severely due to the policy. --hagindaz 22:23, 7 June 2006 (UTC)
- As a future Wikiversitium Alumnus I favor as many Wikibooks as possible, so in my ignorance I think it should be removed as per the request until at least two votes can be found for retention. [[user:lazyquasar]]
One other idea I would like to throw out is to add preselected red links to books on core, foundational topics to the bookshelves, like I have done on Wikibooks:Biology bookshelf. Many of the requests on Wikibooks:Requested books aren't about traditional textbook subjects taught in universities, and I think a greater focus is needed (though other books should also be encouraged). One way this could be done is by adding <small> Key uncreated books: [[Book 1]] - [[Book 2]] - [[Book 3]] - etc. </small> to the end of each section instead of lists with comment tags explaining that requests should not be added there. --hagindaz 03:21, 4 June 2006 (UTC)
- I will weigh in on the issue, for what little my opinion is worth. I do agree with the initial presumption that the bookshelves are especially prone to anonymous users adding in junk. Even if those additions are in good faith, they are contributions that either need to be rolled back (which is hard to justify if the edit was in good faith), or will sit around as a red link forever. On a broader note, I do assert that high-traffic and "Wikibooks:" pages especially should be blocked from being edited by anonymous users. Pages that are highly visible, such as the policy pages, bookshelves, main pages, etc. are as much of an advertisement for us as a community as they are functional navigation pages. Allowing people, even in good faith, to post any garbage they see fit to one of these pages makes the pages look bad, and makes the community look bad by extension. The {{cleanup-link}} was instituted just for this purpose, and anonymous book authors are encouraged to use it when possible. Creating a new book here, as we all have discovered, is a large task, and I have to question whether an anonymous user--who can't even be troubled to register a free username--would have enough dedication and motivation to properly start and nurture a book here. If they don't have long-term intentions to contribute and nurture a book that they are starting, those books are essentially doomed to become eternal stubs, which then becomes a cleanup problem later. --Whiteknight(talk) (projects) 23:19, 7 June 2006 (UTC)
- Garbage will simply be reverted. If a new user goes to a bookshelf, sees the page locked (even after registering), and only sees a "suggest book" link, what is he supposed to think? I have seen quality books created by both newly registered and IP users. Wikibooks certainly never would have grown at this rate if protection had been instituted on day one. Wikinews has a "start a new article" form right on the main page, and while we shouldn't go that far, I do think full protection is not conducive to the growth of Wikibooks and does more harm than good. I think stating in either comment tags or small text that red links and stubs should not be added would discourage most users from posting garbage, which would solve the situations you described. Once Wikibooks has the level of "completeness" in textbook topics that Wikipedia has in encyclopedia articles, I agree that protection should certainly be instituted again. And please correct me if I'm wrong, but the {{cleanup-link}} tag was instituted in order to aid in cleanup. Users will never know about the tag unless they perform general cleanup tasks, which is unlikely for a new user to do. One alternative I would support would be the system I described above, which would "guide" new users into creating books on traditional textbook topics. --hagindaz 13:39, 15 June 2006 (UTC)
[edit] User:Jguk administrator abuse.
I would appreciate some assistance with a dispute I am having with User:Jguk. He has begun changing the Wikijunior Ancient Civilizations module from the original BCE/CE notation to his preferred BC/AD notation, and when I changed it back he threatened to block me, and then did (IP: 65.115.220.89). The BCE/CE notation is more appropriate for textbooks and academic texts, and is preferred when dealing with non-Christian subjects.
- He also tried to hide this message. All in all very disappointing behavior.
- To be fair, the BCE/CE year convention may be more appropriate for textbooks and academic texts, but it adds in an extra dimension of confusion for younger readers. Children, oblivious to the nuances of the politico-religious impetus behind the change, are more commonly exposed to the BC/AD scheme. That said, User:Jguk may be out of line here (I don't know from the evidence), but entering into an edit war, especially if you are contributing from an anonymous IP account, is highly suspect. The correct response to this situation would be to raise the matter on the appropriate talk page, and attempt to reach community consensus on the matter. I will look over the records and see if the block on IP:65.115.220.89 is appropriate and warranted. --Whiteknight(talk) (projects) 00:39, 5 June 2006 (UTC)
- After looking at things more closely, I feel that User:Jguk acted correctly, although perhaps too quickly. First off, an anonymous user from an open proxy entered into an edit war, without ever once explaining the reasoning for repeatedly changing User:Jguk's edits. Also, this IP has been known as a vandal on Wikipedia. Entering into an edit war without explanation, especially from an IP address with a vandalism history, was a bad move on your part, and hence the proxy was blocked. For future reference however, such a case should not be labeled as vandalism so quickly, at least not without some sort of confirmation from a second user. I will start a discussion on the talk page concerning the use of the BC/AD or BCE/CE naming conventions. Community consensus on the matter will determine the way the dates are written in that module, in the future. --Whiteknight(talk) (projects) 00:47, 5 June 2006 (UTC)
- I will apologize if I've done something wrong, but I don't believe I have. It does take two to engage in an edit war after all, and if certain standards of behavior are expected from anonymous users, then surely admins should be held to the same standards (or maybe even higher). I hope that you can agree that labeling edits you don't agree with as "vandalism", and issuing threats are definitely not appropriate behavior. Attempting to hide complaints about your behavior is even worse, as well as using admin tools to win in a dispute. I have been editing here for many months without problems until now, and when I saw that something that I had previously contributed to had been changed for the worse (without explanation), I changed it back. User:Jguk then proceeded to change it back (without explanation), and then proceeded to threaten me.
- As to BC/AD vs. BCE/CE, I feel that it would be a disservice to children to insist on using archaic terminology, and they are likely to be even more confused when they encounter the appropriate terminology used in other textbooks (as the majority of textbooks and academic papers use BCE/CE).
This is an example of a user who has previously vandalised Wikipedia coming over here with a series of open proxies to make disruptive, trolling edits (as easily shown by the complete lack of discussion by the user prior to my blocking, and then by the use of open proxies to post the old Wikipedia complaints of "administrator abuse" as soon as the disruptive account is blocked). Wikimedia also has a policy of blocking open proxies on sight. Please don't feed the trolls, and please, someone, block this "Julie", who is clearly the same person as before. Meanwhile, I will block all the open proxies on the list I posted to WB:VIP (which may have the same effect) Jguk 06:43, 5 June 2006 (UTC)
- Wikibookians may be interested in my response to a query Whiteknight asked me which can be found here. Some may be unfamiliar with the concept of open proxies, which this user was using. An open proxy essentially allows someone on the internet to be entirely anonymous, with no easy trace back to themselves. They are also used to circumvent censorship. Although internet users may use open proxies for entirely bona fide reasons, Wikimedia has found that in practice the overwhelming majority of edits made by users using open proxies are disruptive - being vandalism, trolling or spamming. Accordingly (and see m:WM:NOP) editors to Wikimedia projects are not allowed to use them.
- Here we had an instance of a user using open proxies in order to try to disrupt wikibooks by introducing an issue that has proven very controversial on wikipedia. Once warned to stop, the user continued, resulting in a ban. That user then, instead of following the instructions on MediaWiki:Blockedtext which they would have seen, chose to use another open proxy to come straight to the staff lounge complaining of the usual things trolls complain about. I accept that I could have been slower and more definite that this user was up to no good before banning them, and will be slower in the future, but my initial suspicions have certainly been proven correct.
- I would add that none of this has anything whatsoever to do with the underlying content/style issue this user is now mentioning - the blocks are for using open proxies and trolling. (This has nothing to do with vandalism, I only added it to WB:VIP as I was unaware of Wikibooks:Problem users, where this user would more properly belong.) There can, of course, be no problem with bona fide Wikibookians discussing any content issue they see fit - but that discussion must be between those acting in good faith with the aim of improving Wikibooks, with those who seek only to disrupt excluded, Jguk 17:13, 5 June 2006 (UTC)
Contrary to what Jguk says this was his first message to me before he decided to block me:
- Please stop coming to Wikijunior, a project that is there for children, and making edits that will only serve to confuse them. There is no need at all, as far as I can see, to swap one very common system of notation for one used only a fraction of the time and which has proven controversial almost everywhere where it has been introduced to the general public. If you do this again, you will be blocked. On the other hand, if you would like to assist in improving the book, bearing in mind its target audience of 8 to 12 year olds, that would be welcome. Kind regards, Jguk 19:48, 4 June 2006 (UTC)
Notice that there is nothing about proxies, but more along the lines of "agree with me or be blocked". When I did disagree, he blocked me. I will also note that in each case it was user Jguk who unilaterally changed from the original notation to his preferred format, without discussion or explanation.
As for the proxy, I have to use one to edit from work. I did not know this was against the rules and will stop editing from work from now on.
This Jguk fellow also appears to be banned from en.wikipedia for this same type of behavior and for harassing other editors (particularly Jewish and Muslim ones). Should he be allowed to continue the same type of behavior here? What effect should the ban on en.wikipedia have here, especially when he continues in the same pattern of abuse?
It is most troubling that people of his character are put in positions of authority. He has also, again, threatened to block me for no reason whatsoever.
- First off, we are not wikipedia, and what happens over there has very little bearing on what happens over here. Second, there is a standing policy against the use of open proxies for accessing wikimedia projects: this is a safeguard against spam and vandalism and is a valuable safeguard at that. User:Jguk has stated his reasons for blocking your proxy very clearly on WB:VIP. Also, you never disagreed. According to the page history, you didn't engage in any kind of discussion on the topic whatsoever. What User:Jguk saw was an anonymous user from an open proxy engaging in an edit war without discussion. It is a shame that you no longer want to contribute from work. This problem could have been avoided if you had a valid account, or if you gave us some kind of indication that your intentions were good. --Whiteknight(talk) (projects) 23:09, 7 June 2006 (UTC)
- We are Wikipedian when Wales says so, here is a WP quote which pertains to certain obsessive behavior which might be frowned upon: My own opinion is to allow the people who go through Wikipedia changing BC to BCE and the people who go through changing BCE to BC free reign. It keeps them off the street, and the devil makes work for idle hands. Rick (WIKIPEDIAN) 14:58, 6 March 2006 (UTC) - Athrash | Talk 00:40, 19 June 2006 (UTC)
[edit] Idea to use Commons pronunciation files to assist language learning
Hello,
Mastering pronunciation is obviously one of the vital first steps in learning any language. There are quite a lot of pronunciation files stored at the Commons (for example, over 800 files just for German), but I have a bit of a feeling they are not widely utilised here.
I had an idea that we could create a "flash cards" style program/tool for language learners to use, based on picking random files from the corresponding Commons category. (Either that - or the textbook writers here could supply lists for appropriate levels, eg. first level just basic alphabetic sounds, vocab lists for each week.) The program could display the word on the screen and after a short display/hitting a button, play the sound.
What do you think? The interface could probably be translated pretty easily. Then the same tool could be used by people learning language X no matter what their native language was (assuming someone supplied a translation for the interface). For example, there are over 1000 English files. It just seems like a great potential resource for language learners.
What do you guys think? A useful idea, or not really? If there are some people here who think it might be useful, I'll follow it up and try to find some techy-types who might be able to write such a tool. :)
--pfctdayelise 13:16, 5 June 2006 (UTC)
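The proposal above is concrete enough to sketch. Below is a minimal, hypothetical Python sketch of the card-picking logic: the Commons API endpoint and the `list=categorymembers` query are real MediaWiki API features, but the `Lang-Word.ogg` filename convention and all function names here are assumptions for illustration, not part of any existing tool.

```python
import random
import urllib.parse

COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def category_query_url(category, limit=50):
    """Build a MediaWiki API URL listing the files in a Commons category."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Category:{category}",
        "cmtype": "file",
        "cmlimit": str(limit),
        "format": "json",
    }
    return COMMONS_API + "?" + urllib.parse.urlencode(params)

def word_from_filename(filename):
    """Recover the display word from a pronunciation file name.

    "File:De-Haus.ogg" -> "Haus": strip the "File:" namespace, the
    extension, and the language prefix (assumes "Lang-Word.ogg" naming).
    """
    name = filename.split(":", 1)[-1]   # drop "File:" namespace, if any
    stem = name.rsplit(".", 1)[0]       # drop ".ogg"
    return stem.split("-", 1)[-1]       # drop "De-"-style language prefix

def pick_card(filenames, rng=random):
    """Pick a random file and return (word, filename) for one flash card."""
    filename = rng.choice(filenames)
    return word_from_filename(filename), filename
```

Actually fetching the category listing and playing the sound would sit on top of this; the point is just that a Commons category plus a filename convention already supplies all the data a flash-card drill needs.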
- Sounds like a great idea - particularly for our larger or more active language books (although I confess I am not a contributor of these). I know the person editing Modern Greek would be particularly interested in samples from a native speaker, Jguk 16:59, 5 June 2006 (UTC)
- To me sounds like an ideal tool to help create content at Wikiversity if it is approved as well. Could become the effective basis for an entire introductory languages department. Go for it! Can not succeed big if you do not try. user:lazyquasar
[edit] Another free textbook project from University of Denver
Thought you might be interested in this article from the Rocky Mountain News: "Free online textbooks":
- Free electronic textbooks for underprivileged college students in far-flung lands. That's the aim of a University of Denver professor and three colleagues who are using the free online encyclopedia Wikipedia as a model.
- They've launched a venture to develop a Wiki-based system for producing online college textbooks free to students in developing nations. Wiki refers to software that allows users to freely create and edit Web pages. The four have sent out a call to thousands of professors seeking content for Globaltext.
- "Our primary targets are students in countries where a typical college textbook can amount to 20 percent of the average person's yearly income," said Donald McCubbrey, professor at DU's Daniels College of Business.
- Students could print the textbooks at a low cost.
- For the project, McCubbrey has teamed with professors at the University of Georgia, City University of Hong Kong and Ohio University.
Their goals may not include licensing that's compatible with ours, but it might be worth communicating with them at least. Here is McCubbrey's contact page at University of Denver:
Good luck! Catherine 17:49, 5 June 2006 (UTC)
- If they know about Wikipedia, and they have done their research, I would wonder why they haven't heard of Wikibooks? At the very least, they could mention us as a model instead of Wikipedia (which is not about creating textbooks). I'll send the guy an email telling him about our existence here. If we could enlist the help of academia, it would be a great help to us at Wikibooks. --Whiteknight(talk) (projects) 17:56, 5 June 2006 (UTC)
- Actually, I see now from reading a bit of material, that he is familiar with wikipedia, and that he is a driving force behind the XML book here. The main idea of his project is to utilize corporate sponsorship as a means of driving book production towards a state of completion, and therefore maintain prolonged interest among contributors. By securing corporate sponsorship in this manner, he is able to provide payment for valuable contributions in a manner that wikibooks would be unable to do. interesting project. --Whiteknight(talk) (projects) 18:02, 5 June 2006 (UTC)
[edit] Does the wiki background image consume server bandwidth?
The nice wiki sunflower-resembling background image which is a part of the default 'monobook' skin is quite a lot larger than what is actually displayed on any wiki page I have seen.
Since the vast majority of wiki users probably use the default skin, and wiki bandwidth seems limited, I wonder how much it would improve the load on the wiki servers if the image was reduced in size and cut to only fill the visible region of the wiki pages. I at least often see that image load somewhat slowly, filling about 1/3 of the page height, only then to be covered by the page content, and wonder if it could be consuming a major part of the wiki bandwidth. (I've chosen another skin now where this doesn't happen).
(I'm not even sure this is the forum for this question since it's more general than Wikibooks - but I couldn't find an obviously suitable place to put it. KristianMolhave 19:56, 5 June 2006 (UTC))
- I think you're referring to the image at, which is a closeup of the binding of a book that's open on a flat surface. It's 7881 bytes long. I think there are two reasons it's not a burden on the Wikibooks servers:
- It's not a very large file. It probably takes only four or five IP packets to send the file to a user's browser.
- Every browser caches images, so it is very rarely downloaded more than once every few days or weeks for the typical user. In fact, a user that visits Wikibooks frequently may never reload the image, since it will always be fresh in his or her cache.
- yes - sounds very reasonable - thanks KristianMolhave 23:36, 5 June 2006 (UTC)
In a not-very-related-idea, can we change the favicon to the Wikibooks logo? Commons: has done it so I presume it is possible. I'd hate to think I'm on Wikipedia when I'm not. ;) (Probably if you make a bugzilla: request the devs can do it pretty easily.) pfctdayelise 14:34, 6 June 2006 (UTC)
- Well, the response was overwhelming, once again. I made a bugzilla request anyway. pfctdayelise 12:49, 8 June 2006 (UTC)
[edit] Mailing list
Please... I encourage all Wikibookians to subscribe to the Wikibooks mailing list. You can see more information and sign up here. For some (including Jimbo), it is easier to discuss things there. Just thought you should know. Hope to see you all there soon! --LV (Dark Mark) 19:45, 7 June 2006 (UTC)
- How is it easier to discuss things on a mailing list? --kwhitefoot 11:41, 8 June 2006 (UTC)
- There is a single unified source for the discussion instead of 15-20 separate discussions. Jimbo lives in email much more than on the wiki, so if there are things he or other WMF people should know about and participate in the discussion of, then the mailing list is the place to go. Kellen T 11:51, 8 June 2006 (UTC)
[edit] Admins?
Would interested admins please try and resolve some of the modules in Category:Candidates for speedy deletion. It's getting rather large. I may have a go at some later this evening. Thanks. --LV (Dark Mark) 21:18, 7 June 2006 (UTC)
- I'll do what I can for now. Let me know if there is anything else that I can do. --Whiteknight(talk) (projects) 23:23, 7 June 2006 (UTC)
- There are a couple I dunno what to do with. So I just left them as they were --Dragontamer 22:12, 27 June 2006 (UTC)
I would like to revive this request. There are 39 pages, 5 categories, and 118 images currently at WB:SD, some of which have been there for over six days. --Think Fast 23:01, 27 June 2006 (UTC)
[edit] Watching a Directory
Is there a way for me to watch an entire directory, including any new pages that are created within that directory in future? Thanks, --DanielBC 10:46, 9 June 2006 (UTC)
- No. If you have a single table of contents, though, you can use the "Related changes" link in the toolbox in the left hand menu to see changes on pages linked from that particular page. Kellen T 11:22, 9 June 2006 (UTC)
You can type in {{Special:Prefixindex/Booktitle}}, assuming the book complies with the naming convention and is not really, really large. This will give an alphabetical listing (using the Wikimedia alphabet), not a chapter order listing (unless you start each page with 01, 02, 03, etc.), Jguk 18:22, 9 June 2006 (UTC)
[edit] Wikiversity courses
I came across this website today: - it is an online database of lectures, and there are a pretty large amount of them. If anyone is still working on classes at Wikiversity, this might be a useful tool. DettoAltrimenti 21:26, 11 June 2006 (UTC)
- I am testing it out now. It requires download and installation of a "free" basic executable "Realplayer" which also has a 19.95 premium download available. If the free tool works well with the site content and it looks useful I will add the link to the online resources at the engineering area and start looking for appropriate places for some of the videos as I crash test the "free" capability. Thanks a lot for bringing this to our attention! I think I will go establish a Wikiversity Lounge link on the front portal page. Maybe we can get some indication of how many people are still interested and dropping by to chit chat. Lazyquasar 03:48, 18 June 2006 (UTC)
- WUPS! It has spyware embedded to enable online tracking of DMCA-enabled content. From the license agreement while you are installing:
")." This has large potential hassles for our participants as the free culture/micropayments online DMCA issue heats up. They could be incurring charges from unethical or clumsy providers unbeknownst to themselves. Sure it could be staightened later and fixed but who needs the hassle of arguing over a couple of buck royalty with an American Megacorp? I am going to recommend against its use. Lazyquasar 04:04, 18 June 2006 (UTC)
- Try RealAlternative instead. And pester the people who issue files and streams using proprietary encoding to use OGG/Vorbis or some other open standard. Unfortunately a lot of the biggest players have already refused to change (BBC for instance started a test stream using ogg but dropped it). --kwhitefoot 10:43, 20 June 2006 (UTC)
[edit] Template:AutoTOC
I created this template to help provide a service that seems to be commonly in demand: an automatically generated table of contents for a particular book. This template takes a single argument, the prefix name of the book to list subpages of. The template transcludes the "Special:Prefixindex" special page to do the dirty work. I've thrown together some lousy CSS formatting to make it look pretty. --Whiteknight(talk) (projects) 23:15, 11 June 2006 (UTC)
[edit] User:Whiteknight/New Book Guide
I've created a guide for the creation of new wikibooks, based on my experience here. It's just a collection of my thoughts and ideas, so it shouldn't be taken too seriously. These are the general guidelines that I follow when I create books, and since I have created many (and intend to create many more), this will give some insight into what I do and how/why I do it. Comments are appreciated. --Whiteknight(talk) (projects) 00:57, 12 June 2006 (UTC)
[edit] Banned WP editor Primetime
An editor, called "Primetime" on en.Wikipedia, has been revealed as a serial plagiarizer and liar. He was banned by Jimbo Wales himself and has also been banned from Wiktionary, and perhaps other projects too. His first contribution to Wikipedia was "Letter writing", which was eventually transwikied here. After he was banned several admins began checking his contributions and we found that all substantive contributions were plagiarized. "Letter writing" was copied from World Book Encyclopedia, available online by subscription or free through libraries. I marked it as a copyvio but user:Trgj56 has been reverting it. He is undoubtedly a sock puppet of Primetime.
Here is a list of his known sock puppets on various projects. Wikipedia:Long term abuse/Primetime. The WP:AN/I page is still here [4] for the moment. And there's some on his user page and plenty on his talk page, for anyone wanting more information.
Could a Wikibook admin help with this vandal? Cheers, -Will Beback 07:27, 12 June 2006 (UTC)
- Thank you for this. I have now deleted the page. If you're aware of him adding any more copyrighted material to Wikibooks, please let us know (either by adding {{copyvio}} or by reporting it in WB:VIP), Jguk 08:02, 12 June 2006 (UTC)
- I left a notice on the discussion page of User:Trgj56. Not that I think it will matter. Now that we know there is a problem, we can be more vigilant about it. --Whiteknight(talk) (projects) 19:29, 12 June 2006 (UTC)
[edit] We need graphics
Wikibooks is three years old and we still do not have decent promotional media like web banners and buttons. I am convinced that there are people on Wikibooks who know how to prepare such graphics, and I hope we can get this done. We should make Wikibooks more recognised, start writing about it on the Internet, and promote it at school or university. Having banners and buttons which users can put on their websites and blogs would be really useful.
It would also be great to have our own "favicon" - a small image displayed on the web browser's list of bookmarks or on tabs in browsers using tabbed browsing. Right now we are using the Wikipedia icon, but I think we should distinguish ourselves from Wikipedia and prepare our own icon. The problem is that the current SVG logo of Wikibooks does not look good when scaled down to 16x16 pixels:
. We need to find somebody who will do it better. --Derbeth talk 21:49, 12 June 2006 (UTC)
- Scaled down, that favicon doesn't look that bad. I used to try and make pretty ones for my old website, and they never look great. It's 16x16 pixels, so there is a limit to how nice it can possibly be. I have Photoshop, and I can start to make some banners or buttons or something, but I'm no artist. I'll give it a best effort attempt, however. --Whiteknight(talk) (projects) 22:18, 12 June 2006 (UTC)
- I take that back; I'm heading off on vacation, and won't have any time to work on this for another week. If nobody else takes up the challenge before then, I'll try to make some quick images. --Whiteknight(talk) (projects) 23:06, 15 June 2006 (UTC)
I am skilled at Photoshop, but that is about all, so I can create graphics, etc.; someone else will have to do the work after that. --Je suis 00:16, 17 June 2006 (UTC)
[edit] New logo proposal
I invite all Wikibookians to consider my proposal at m:Wikibooks/logo. Ramir 04:23, 14 June 2006 (UTC)
I like it because it is easy to translate to a new language for the other (international) wikibooks, which are currently using the English logo, except the Spanish "wikilibro". --Je suis 00:25, 17 June 2006 (UTC)
[edit] Help me choose a bookshelf
I've been working on and off on and I've now been asked to put it on a bookshelf. I can't see one that fits. One option is to add a category 'file formats' to the Computer Software Bookshelf . Ideas anyone? --DuLithgow 11:52, 14 June 2006 (UTC)
- I would probably recommend the Wikibooks:Computer software bookshelf for this. Seems like a good fit to me. --Whiteknight(talk) (projects) 12:33, 14 June 2006 (UTC)
[edit] Page Move Vandalism
There was another wave of Willy-on-Wheels-style page move vandalism today. This causes me to raise the question: why do we allow all users to move pages? Moving pages, like deletion, is an action that can fundamentally alter the structure of Wikibooks. Page move vandalism is also more difficult to undo than other types of spam or vandalism. Not only do we need to move a page back to its original location, but we also need to delete the new page, which usually has a very obscene title. I can't think of a possible reason why move operations shouldn't be restricted to sysops, especially in this place where we have plenty of sysops to handle move requests. In a similar manner to the deletion process, regular users could request page moves through use of a {{moveto}} template, or something similar. Because a deletion operation is required to fix page-move vandalism, ordinary users can't even clean it up! They have to request that a sysop clean the vandalism, so a sysop always has to be involved anyway. And I know people are going to say things like "Oh, but it isn't a big problem here now" or "but it just doesn't happen enough to warrant a restriction". What do other people here think? --Whiteknight(talk) (projects) 15:48, 14 June 2006 (UTC)
- I think that there should be both time and edit number restriction for new users; they should not be able to move pages immediately after registering. I have heard from people fighting vandalism at English Wikipedia that new users aren't able to make page moves there (as far as I know, an user is considered "new" by the software for some time, I don't know how long it is). I hope we can quickly gain consensus here and ask developers from #wikimedia-tech to introduce such restriction here too. --Derbeth talk 17:04, 14 June 2006 (UTC)
- Yeah, I'm all about the whole "freedom of editing" thing, but I can't think of a reason why a new user would need to be able to move pages, and I can't think of any way to justify giving all new users that kind of power. If new users were prohibited from moving pages, that would cut down on the number of Willy-on-Wheels sockpuppets (and workalikes). If people don't need to log in to vandalize, they will more likely do it anonymously, and we can easily block the IP addresses. It just makes more sense to me. Maybe we should start a new proposed policy page or something? --Whiteknight(talk) (projects) 17:41, 14 June 2006 (UTC)
- I am curious. I am not a Wikibookean. I intend to work primarily at Wikiversity when it activates. Obviously occasionally our notes will turn into a product worthy of Wikibooks. More often we will have groups and lesson plans and learning trails pointing at Wikibooks. Has it ever been considered at Wikibooks to add a group tag to the user accounts such that the existing authors of specific books can acknowledge each other as contributors to that book and be given editing priveleges? It would seem to me that most people probably work on a few books at a time and this might be a useful characteristic to add to the site. Might hamper free ranging editors and typesetters types if you have such. Just a thought I will check back for reactions. Thanks. user:lazyquasar
- For most established wikibooks, it's probably not useful to allow brand new users to move things. For more loosely-organized books, like the Cookbook, moving pages isn't such a big deal and doesn't need to be restricted (though it wouldn't hurt either). Also, for new books, reorganization is more likely and probably okay even for new users. Kellen T 12:04, 15 June 2006 (UTC)
I was under the assumption that brand-new users couldn't do page moves immediately. This seems to be something that is inconsistent and apparently buggy with the MediaWiki software, and a reason why these sorts of page move attacks have been cut down quite a bit from the past: a vandal won't waste the time necessary to build up a reputation of good edits on an account in order to kill the account with such nonsense. If this is something that needs to be "turned back on" for Wikibooks by developers, I strongly support that decision and would like to add my name to a petition to put that restriction back in for Wikibooks. Or make that something tweakable by admins & bureaucrats. --Rob Horning 11:13, 15 June 2006 (UTC)
- To me this seems categorized incorrectly. If I were a new author making great progress on a text or had just uploaded a draft used in my college classes that needed final polishing I would find it disconcerting for someone new to the material or text to be able to shuffle it and then find I could not fix the damage but must consult an administrator. Are the regular contributors to wikibooks really routinely so friendly and reasonable with newcomers that such an expert could easily talk to the shuffler and have the damage reversed easily? Has wikibooks evolved a configuration management and change process to protect known excellent material or is that still for the future as it is in Wikipedia?
user:lazyquasar—The preceding unsigned comment was added by 70.110.43.185 (talk • contribs) .
- What the hell is this strikethrough of my signature tag bullshit? If I choose to sign the old way and let the automation tracking software record one of varying IP addresses that is my business is it not? Lazyquasar 04:12, 18 June 2006 (UTC)
- Interesting, the link to the Wikipedia explicitly states it is a guideline preferred by some participants not a binding policy: "This page is considered a guideline on Wikipedia. It illustrates standards of conduct that many editors agree with in principle. Although it may be advisable to follow it, it is not policy. When editing this page, please ensure that your revision reflects consensus. When in doubt, discuss first on the talk page."
I looked at my own contributions, Special:Contributions/Kernigh. Now I am one of those users who read wiki much before editing it, and my edit at 1 October 2005 was my very first edit to any wiki - logged in or not - except for one wiki running UseModWiki which I edited in 2004. Yet at 2 and 3 October 2005, I was already moving pages! I think that new users should be able to move pages because page moves are common on Wikibooks. In particular, a user that wants to move pages to satisfy the Wikibooks:Naming policy should be able to register an account and immediately start moving. --Kernigh 03:50, 18 June 2006 (UTC)
- I just thought this was already built into MediaWiki software and an unofficial "policy". While I can see some value in "allowing" a user to move pages, it was felt (and I agree with this idea) that reverting a page move is a somewhat more difficult task than simply reverting a page edit. It can also cause a huge amount of confusion when it is done like the WoW attacks. By requiring a minimum number of days and edits (not necessarily something like being added to a special user group like being a sysop), it would help in cutting down the blatant forms of abuse. Generally speaking, what we are encouraging is that people help us out with adding and editing content. Doing a page move should require at least a little knowledge of site policies and standardized naming conventions.
- BTW, the same arguments are being offered for even allowing page creation (adding the first edit on a new page) for new users. I for one support that same argument here, but that is something that we, as a Wikibooks community, should debate and shouldn't be up to just one person. --Rob Horning 16:53, 22 June 2006 (UTC)
[edit] Template:PokemonOldGrass
Can an admin please copy this over to WikiKnowledge for me please. It looks like I forgot it when I moved the Wikibooks Pokédex. Thanks, Gerard Foley 17:57, 14 June 2006 (UTC)
- Done. I used the "old" version, not the one it redirected to later. --Derbeth talk 18:12, 14 June 2006 (UTC)
- Thanks very much! Gerard Foley 18:18, 14 June 2006 (UTC)
[edit] Template:PokemonOldIcePsychic
I forgot this one too. Can someone copy it over to please. Gerard Foley 18:39, 14 June 2006 (UTC)
- Thanks again. I'll let you know if I find anymore missing templates. Gerard Foley 19:03, 14 June 2006 (UTC)
[edit] Template:Poke-stub
This is the last pokemon-related template I could find. Doesn't look like it will be that useful to you, but maybe it needs to get transferred over too. --Whiteknight(talk) (projects) 19:26, 14 June 2006 (UTC)
- I don't think that one will be needed. Delete away. Gerard Foley 12:02, 15 June 2006 (UTC)
[edit] Template:Unblock
I have imported this template from wikipedia. Mirtone
- On this wiki, blocked users cannot edit any pages – they cannot edit their own talk pages. So this template is useless. --Kernigh 01:25, 18 June 2006 (UTC)
[edit] Template:PokemonBugWater
This one too. Please copy it over to . Thanks, Gerard Foley 10:20, 23 June 2006 (UTC)
I'm still waiting for this template. Thanks, Gerard Foley 16:02, 4 July 2006 (UTC) P.S. Sorry about all the red.
[edit] Administrator Inactivity Decision
With all the arguing over inactive administrators over at Wikibooks:Requests_for_adminship#Requests_for_de-adminship, I thought it would be a good idea to add some guidelines to the role of an administrator. The general consensus from the discussions was that a period of twelve months of inactivity would be sufficient grounds for de-adminship. This inactivity would also include sparse random edits. For example, if an admin made three edits (administrative or non-administrative) six months ago but was inactive for two years, the "period of inactivity" would still hold as two years. The only solid opposition towards blocking the current de-adminships was in reference to there not being any set guideline for inactivity. I'd like for a discussion to develop here and get someone to add the decision to the WB:ADMIN page. Discussing how often de-adminships could be listed is another important thing I hope can be decided here. Once a decision is rendered I'd like to clear the old de-adminship listings and re-vote within the appropriate time period using the established guidelines. -Matt 21:54, 15 June 2006 (UTC)
- This is a good point, and it's something that we definitely do need to nail down in policy. I think that 1 year of inactivity, or 1 year of sparse activity without using any admin actions (page deletion, user blocking, page protecting, etc) should be grounds to start de-adminship proceedings. I would say that "sparse activity" would entail less than 5-10 edits per month, especially if all 5 edits happened on only a single day each month. 1 year of active contributions without using any admin actions should probably raise some kind of flag, and perhaps we could request the user to voluntarily relinquish adminship in that case, but we wouldn't put it to a vote. Once de-adminship votes are called, the admin in question should be notified by email, if available, and on their user talk page as well. If, during the vote, the admin in question comes in to defend themselves (hopefully with a good excuse, and a promise to be more active in the future), the vote can be terminated. That's my solution to the problem, although it might not be the best. Suggestions? --Whiteknight(talk) (projects) 22:13, 15 June 2006 (UTC)
- I see this as good reasoning. At the rate of which this place changes policy, anyone inactive for an extended period of time would be behind in policy. :-/ Overall, that sounds about right. --Dragontamer 22:17, 15 June 2006 (UTC)
- I like all of that but think a little clarification on returning admins is in order. If an admin comes back to defend his/her adminship, appropriate evidence towards becoming active should be provided. I don't like that if an admin simply comes back to argue the de-adminship that everything will go away. The admin will be listed there for a reason and shouldn't get a ticket out just because they came to comment. -Matt 00:04, 16 June 2006 (UTC)
I would like to point out that several admins have been de-sysoped as a result of the various requests. Those that were more or less unanimous in the support to remove sysop status have been dealt with by the stewards. At this point we are dealing with the other admins whose support for deadminship was mixed.
On the whole, I think somebody who has not participated at all for a very long period of time should not have admin privileges, or should have to "reapply" to get them back. We are not talking about blocking these users, just that they need to spend some time getting reacquainted with Wikibooks before making decisions like deletion of content or other potentially controversial actions. In addition, if somebody was inactive for two or more years but came back and asked for adminship again in this situation, I would ask why and what they plan to do with the privileges, but being an admin previously would go a long way in terms of proving responsibility. They should definitely have a lower bar to pass to become admin again than somebody who has never been an admin before.
As far as what the formal standard should be for de-adminship, that is up to interpretation. I recommended the 1 year of inactivity as a reasonable term, but this is certainly something that should be discussed further. There were people with much more experience than I have who argued that admins should never be desysopped, and that it was silly to even try. See the archives of the Staff Lounge for details, but there were some strong reasons given to not do the de-adminship ever, at least for inactivity reasons. Certainly de-adminship is something that shouldn't be rushed except on a temporary basis if there has been some blatant wheel warring, and then to help calm down the situation. --Rob Horning 13:04, 16 June 2006 (UTC)
- What the previous de-admining discussions showed was that for some inactive users, there was clear support for de-admining where there was inactivity - but some required a clear rule. May I suggest that for all admins inactive for 12 months (inactivity to be defined as 20 or fewer edits in the year, 5 or fewer of which are in the month of the nomination), that they would be de-admined on nomination of one Wikibookian unless within 1 month they gave a good reason to keep it (which was then agreed to be a good reason by the community). For others with similar levels of inactivity but not meeting those requirements exactly, they can be nominated but without an automatic bias (ie there has to be a separately shown consensus for them to be de-admined for that to happen). All those nominated for de-admining to be notified on their usertalk pages and by email (if activated). Users de-admined in this way may then apply to be an admin again at a later stage without prejudice. The other case where we should have de-admining automatically are where a user requests it, or agrees to it - last time round we had a user so agreeing, but he remained an admin because (really as part of some wider issues) others opposed the de-admining (possibly without being aware of that user's assent), Jguk 21:21, 16 June 2006 (UTC)
I have added the consensus of this discussion to the administrators page. We now have a clear standard for activity requirements. -Matt 16:31, 1 July 2006 (UTC)
[edit] How-tos
The how-tos section came under attack the last time I was active here; has there been a conclusion on this subject? IIRC, the issue was that the majority of how-tos were... how to say, unprofessional. I personally think that how-tos should stay, under the idea that they are stubs and can be easily integrated into a future textbook. --Dragontamer 22:27, 15 June 2006 (UTC)
- I don't think that there ever was a gigantic issue about it. There did happen to be a lot of garbage on the How-To bookshelf, just like there is on every other bookshelf. However, the How-To bookshelf doesn't really have a "patron saint" to look over it and clean out the bad parts. The ones that were garbage got removed, and general consensus is that the rest are fit to stay at Wikibooks. --Whiteknight(talk) (projects) 23:04, 15 June 2006 (UTC)
As far as I'm concerned, this is a silly topic to even discuss in terms of a wholesale removal of content. Any How-to book that is removed without a VfD (or removed for other policies unrelated to being a how-to book) is a violation of trust by the Wikibooks community, and admins doing that are abusing their power. This should not be happening, and I don't see why these books should be removed. Period. Certainly not without an extensive discussion about the topic, and perhaps the creation of a whole new Wikimedia sister project. The experience that I've had with Wikiversity would make me not want to go that course anyway, at least for a very long period of time. Besides, how-to books seem like instruction books to me, and fit within the definition (perhaps loosely) of what could be considered a textbook. It is a topic book that covers instruction about how to perform a task.
I would agree that there are some How-to books that are incredibly poor in terms of quality, however. Many of these started out as Wikipedia articles, and never really fit on Wikipedia, which is exactly why they are here on Wikibooks. Admins on Wikipedia "felt good" that content was moved to Wikibooks and that they didn't have to completely piss off the How-to guide contributors, considering that at one time How-to guides were a major Wikiproject on Wikipedia. This is yet another reason to not move them again without a very good cause, but I do believe that standards can be improved.
To help with raising standards for how-to content, I propose that we establish some guidelines that would go into depth and help us cull the very poor quality How-to guides, but provide consistent policies that would encourage new how-to books to be created that would be of higher standards. To do this, I have created the following project page:
Wikibooks:How-to book guidelines
I would encourage participation in developing these guidelines, and I hope that we as a community can come up with what sorts of how-to books would be considered acceptable here on Wikibooks. There is trash here, unfortunately, and I say that we get rid of that junk. Let's be consistent, however, and give a positive message that how-to books can be on Wikibooks provided that you meet reasonable quality standards and are not using Wikibooks as a vanity press. A one-page book on how to build a bomb is not going to be acceptable. --Rob Horning 13:59, 16 June 2006 (UTC)
[edit] Images can now be undeleted!
[edit] Could an Admin Please Help
I posted the following on the discussion page of the main page of Wikiversity. I have since calmed down and tracked down the person who made the change, apologised for strong language and reaction and asked them to change it back or to compromise phrasing while we discuss our differences. I have no way to tell who protected the main page and why and it really does not matter as we should have a resolution of Wikiversity's status as an independent project within a few months. The point is an admin is necessary to back this large error out of the Wikiversity front page as soon as possible because I do not wish to lose ANY student activity or participation from random browsers. Thank you.
user:lazyquasar —The preceding unsigned comment was added by 70.110.43.185 (talk • contribs) .
- See my reply at Talk:Wikiversity. - dcljr 08:13, 17 June 2006 (UTC)
- This matter has been resolved to my satisfaction. User:Dcljr and I are in effective communications about appropriate revision of the bullet in question and he has correctly pointed out that registered users can edit the page. This is satisfactory to me, this issue can be erased or archived from the Staff Lounge as per your community's standard practice. Thanks! Lazyquasar 03:37, 18 June 2006 (UTC)
[edit] I want a CommonsTicker
- This discussion is now at Wikibooks talk:CommonsTicker ... please consider leaving a comment! --Kernigh 22:53, 20 June 2006 (UTC)
[edit] Is Wikiversity Cleanup Appropriate at this Time?
This tag is appearing at various places in the Wikiversity pages:
Is it really appropriate to be shuffling Wikiversity pages now when it is about to be approved as an active project? Many of the newcomers returning upon receipt of the good news it is a go might be irritated to find their content or materials or structures familiar to them have been shuffled a few weeks prior to receipt of authority to proceed after a couple of years of negotiation with opponents of the project. Lazyquasar 04:38, 18 June 2006 (UTC)
- Do you disagree with the message? If anything, clearly separating Wikibooks content from Wikiversity courses will facilitate Wikiversity's transition. And as Wikiversity has effectively been dead for the past few months, I doubt contributors will remember anything they wrote. With the exception of a few courses, no great deal of work has been put into Wikiversity, so I doubt contributors will cherish any of their content (which is all still there, so I don't see what your complaint is). --hagindaz 04:57, 18 June 2006 (UTC)
- I do not disagree with the message. My concern is that Wikiversity materials should remain accessible from the existing link mazes. Certainly it is useful to identify them explicitly. Particularly if strays have gotten disassociated. What will not be useful is if material which should move to the new Wikiversity domain is delinked from the Wikiversity main page and shuffled off to another virtual organization. Fairly empty looking pages that serve an organizational placeholding function get deleted, leaving a bunch of empty links. Material on Wikiversity is already spread out between meta and Wikibooks as a result of the chaotic, stretched-out project evaluation with impacts evolving new project procedures. Many of the people who spent time setting the current links and initial outlines of courses were essentially newbies like myself without a clear understanding of how meta or Wikibooks was organized. It is just another potential obstacle which should not be tossed at a fragile overdue project. Might also want to consider that the initial activists at Wikiversity will set much of the initial tone towards Wikibooks. Do you wish to risk initial alienation of your natural customer base and source of proofreading and comments regarding your textbook products and even new text authors? Certainly there is nothing (with a few exceptions, a couple of schools are currently active) that cannot be easily recreated with participants. The interface between two emerging communities might be harder to repair. Perhaps my concern is overblown. I do not use Wikibooks with the exception of a couple of books linked to from elsewhere so I am not really familiar with your organization or implications of this "tidying up". It sounds a bit ominous given some of the serious opposition I have seen towards a successful Wikiversity over the past year and a half. Lazyquasar 09:40, 18 June 2006 (UTC)
- If I have done something that has hurt Wikiversity, I sincerely apologize. But what is it that I have done that "risked initial alienation" of Wikiversity participants? I certainly have not been adding extra obstacles for Wikiversity, nor have I deleted any Wikiversity content. And as far as I can tell, you agree with me on that. --hagindaz 18:46, 18 June 2006 (UTC)
- My apologies Hagindaz. I absolutely did not mean to imply that you were intentionally damaging Wikiversity or its material that Wikibooks has hosted. I was trying to raise awareness of how the total activity of the Wikibooks community members mixed with the currently confused state of the initial Wikiversity implementation could have large future impacts on both community projects. My biggest specific concern would be Wikibookians finding fairly empty pages with a few links (there are a huge number of them) and deciding to "tidy up" by deleting them. People returning to the Wikiversity project after waiting for a year for it to get a serious start could receive a negative impression. Much of what is there is "trash/building scaffolding" and will be modified and updated by the Wikiversity community rather rapidly as long as it does not get sidetracked. Roberth has proposed a brilliant plan on the textbook-l mailing list to simply duplicate the Wikibooks initially on the new Wikiversity wiki and allow each community to delete or modify material it does not want in the database. I think your tags will be useful no matter what approach is used. Thanks for your assistance. Lazyquasar 09:45, 19 June 2006 (UTC)
I think it is important to Wikibooks to regularise its content and tidy these things up. As Hagindaz notes, if anything this should also aid the set up of Wikiversity. I am concerned, however, at the eagerness to say that anything with "course" in it is Wikiversity. It is not - we have textbooks, and should have textbooks, linked into particular courses (GCSEs, A-levels, SATs, etc). Also, every textbook has its own scope, and "course" could be synonymous with textbook. Wikibooks wishes to be left with whole textbooks - not extracts of them because elements are duplicated by a new Wikiversity project.
The other question is what Wikibooks should call its (future) bookshelf for university-level books. "Wikiversity" would be confusing - maybe "Wikiuniversity"? Any other ideas? Jguk 07:26, 18 June 2006 (UTC)
- I agree. The material should be duplicated at both sites where it is not easily left in one and the content managed in a way that works for both. Regarding the name, perhaps "Undergrad Library", "Graduate Library", "Research Library", or "Undergrad Electromagnetism", "Leading Edge Supercollider Reports", "Peer Reviewed Advances in Chaos Mathematics", etc. Personally I picture Wikibooks as eventually larger than the Library at Alexandria or the U.S. Library of Congress. Indeed, if Wikiversity and Wikibooks work together effectively it is certainly possible the Library of Congress will insist on mirroring Wikibooks so as to be complete in their mission. Lazyquasar 09:40, 18 June 2006 (UTC)
- You misunderstand, Jguk. I agree with your comment about courses. Indeed, I have called my French Wikibook a "French Language Course." Wikibooks regularising its content and tidying these things up is not a concern or an issue for Wikiversity. Anything with the "Wikiversity:" prefix belongs to Wikiversity. But some Wikiversity schools have had links to Wikibooks mixed in with links to Wikiversity courses on their "Courses available from this Wikiversity school" lists. (As an example, imagine if Wikiversity:School of History had its "references" section mixed with the "course listings" section, with no clear distinction on what was what.) Solving that problem would help Wikiversity become an autonomous project.
- On the subject of what to call the bookshelf for university-level books, I think that Wikibooks should have one "books by audience" (or possibly "books by age level") link in the sidebar, right below "books by subject." On that page, there should be links to pages entitled for example "Wikibooks:University-level bookshelf." I don't think we should be wikiprefixing everything. That will only lead to unnecessary confusion (and our existing bookshelves would then have to be renamed for consistency, such as a "Wikihistory" page). --hagindaz 18:46, 18 June 2006 (UTC)
- I disagree that everything with a Wikiversity: prefix belongs to the Wikiversity project - many of these are textbook pages. See, for example, Wikiversity:High School Physics/Motion - Kinematics. Is that not a page in a textbook which therefore belongs on Wikibooks? There are Wikiversity project pages with a Wikiversity: prefix, of course there are - but not everything with the prefix is a Wikiversity project page.
- I don't disagree with the point that Wikiversity can decide what to do with its own content - it's more the point that Wikibooks can decide what to do with its content - and we should keep textbook pages currently prefixed with Wikiversity:. If Wikiversity wishes to duplicate content, that is up to it - although I guess the WMF may be concerned if Wikiversity is to a large extent intending to host textbook material, Jguk 21:58, 18 June 2006 (UTC)
- How is Wikibooks able to decide what to do with the content of a sister project? If content was created for Wikiversity, by Wikiversity, then Wikiversity has control of it, rather than Wikibooks. If a page is not suitable for Wikiversity, then Wikiversity contributors (including you and me) should either delete it or transwiki it to a sister project (which in this case would be Wikibooks). --hagindaz 22:20, 18 June 2006 (UTC)
- Hagindaz has summarized my position on Wikiversity nicely. However, there have been vocal participants in the proposal/scope definition who are adamant about no duplication of materials. Whether they have a significant presence in the soon-to-be-active Wikiversity community remains to be seen. Regarding the page in Jguk's query: this is precisely a page of notes, not a page in a textbook, unless someone does some work preparing to insert it. It has three links only, all of them to defining articles at Wikipedia regarding fundamental concepts. Since it is FDL'ed material there is no problem forking it into a page in a textbook with appropriate credit back to its originator at Wikiversity. There is a severe problem with someone grabbing this unilaterally (editing boldly) and moving it over to the middle of a Wikibook in the middle of a small group of students trying to study asynchronously over the internet and leave a valuable learning trail behind. It would not take much disruption from activities like this for student/participants to decide that the overhead is too high, that Wikiversity is a non-viable concept, and that they should study in private elsewhere. Lazyquasar 09:45, 19 June 2006 (UTC)
[edit] Help please on new book project: "Basic Book Design"
In 2002 I wrote a book about designing books on a computer. At the time there were no such books. There were a few old, pre-computer-era books about book design. There were a few newer books showing examples of radical, cutting edge book designs. But there was nothing showing you how to take the manuscript you wrote in MS Word and turn it into a professional-looking book.
Many, if not most, books I see that are designed by pro book designers are badly designed (e.g., a bibliography instead of referencing sources, or type too small for an average person to read without eye strain). Self-published books often look really awful. I saw a statistic that something like 12,000 new publishers registered with R.R.Bowker last year. The self-published/small press market is exploding. There's a real need for a book about designing books.
But I'm not a pro book designer. For example, under "choosing fonts" I more or less said to use serif fonts for text and sans-serif for headings. An expert would have far more to say on that subject.
Also I showed how to use MS Word. Pro book designers were shocked—SHOCKED!—when I said that MS Word can do everything, and more, and easier, than LaTeX, PageMaker, etc. It was like I'd gone to a Harley rally on my Kawasaki. The book could really use some LaTeX and InDesign experts expanding sections to say, "Here's how you do this in MS Word, and here's how you do it in LaTeX, and here's how you do it in InDesign" etc.
I shopped the manuscript around to several publishers, and they said "good idea, but it's not the kind of book we publish." Yes, I knew that. That's why I write books different from what they publish. If I wrote books the same as what they publish, I wouldn't be writing new books. Speaking of which, I'll post another topic with some ideas for books that no one has written.
IMHO this is a perfect project for a Wikibook. If anyone has way too much time on their hands, please download the manuscript from my website. If you like what you see, feel free to reformat it and start a WikiBook. Is there a form I should sign to release my rights?
In the next few days I'll post a similar request regarding a relationships book I wrote that no one bought.--Thomas David Kehoe 18:51, 18 June 2006 (UTC)
- It would be great if you are willing to donate this book (essentially by releasing it under the GFDL). All I think we need to do is ask you to confirm a few things:
- Do you own all the rights? For example, if your company has exclusive publishing rights, it would also have to waive these.
- Are you willing to release the book under GFDL? (This could severely restrict your ability to make money out of the text in the future)
- Are you willing for the text, once on Wikibooks, to be edited mercilessly by others (potentially in ways with which you may disagree)?
- If you can answer all three of those questions in the affirmative, then just say here that you are releasing the book under GFDL (in view of your existing history here at WB, I think we have enough to go on to confirm your identity! :) ). It would also be useful to know what sort of review has been done on the text - is it just you, or has it been peer reviewed (and if so, details of this review would be helpful)?, Jguk 21:11, 18 June 2006 (UTC)
Yes, yes, and yes. Except that the chapter on fonts must say that Times Roman is the greatest font ever. Just kidding. :-)
I read the GFDL but I can't say that I understand all of it. What happens with parallel books? Let's say that I hand over "Basic Book Design" to Wikibooks and a cadre of geeks rewrites it recommending LaTeX. Then I buy a bunch of Adobe stock, rewrite the book to recommend InDesign, and sell a zillion copies to get more people to buy Adobe products. Meanwhile, the original version, which recommends MS Word, develops a cult following and is passed hand to hand, as a cherished object, among devotees who find hidden messages in MS Word. They annotate my manuscript with these hidden messages. Three versions are then developing and circulating in parallel. Is anyone violating any license?
For reviews, I posted on a Usenet newsgroup for desktop publishing how to download the book from my website and send me comments. One woman was very helpful. Someone else was extremely offended that I recommended Times Roman and MS Word. The usual response from the Usenet.--Thomas David Kehoe 03:36, 19 June 2006 (UTC)
- I've started importing this on Basic Book Design. It would be useful, if you've got the time, to go through it and make sure that it looks right to you in wikitext format. I'm also failing (through incompetence, I think) to mimic some of the formatting you use in your examples. Where this is happening, I'm placing the books in Category:Formatting help required. If anyone knows how to sort this out - please do so! :) Jguk 07:54, 20 June 2006 (UTC)
- "I read the GFDL but ...". If you mean 'can I stop that happening?' then the answer is yes but only as long as you don't publish it here. The 'solution', if it is one, would be to use invariant sections to fix certain parts of the book. But the rule here is 'No invariant sections, no front or back cover texts', see Wikibooks:Copyrights. --kwhitefoot 10:28, 20 June 2006 (UTC)
- The GFDL uses copyright law to assure that anyone else who modifies your material and publishes it must also use the GFDL. However, you are the original copyright holder and you can put it out under parallel licenses. If you do this the book will essentially fork and begin evolving in different directions. The GFDL'ed version here at Wikibooks could be downloaded and forked to a new version using the GFDL with new materials added, but only you can put out a fork using a different license on the original material. Lazyquasar 04:28, 22 June 2006 (UTC)
[edit] Some ideas for books no one has written
As a small publisher I notice subjects about which no books exist. Some of these may be good Wikibook projects. I haven't checked if there are good Wikipedia articles on these subjects.
- Birth control. There are zillion fertility books, for women who want to get pregnant. There are no books for women who don't want to get pregnant. The closest is an excellent chapter in "Our Bodies, Our Selves." My guess is that publishers are afraid of getting boycotted by Catholics or something.
- Scientific research into astrological phenomena. There's been excellent research in this field, disproving some astrological claims (e.g., that newspaper Zodiac columns are non-fiction), yet proving other astrological claims beyond the shadow of a shadow of a shadow of a doubt (e.g., that champion athletes tend to be born after Mars rises or passes overhead).
- A book (or website) about videos children can make. E.g., how to make your own horror movie. Or simple special effects, such as climbing along a fallen tree with cross-fades of a standing tree making it look like the kid is climbing high in a tree.--Thomas David Kehoe 19:31, 18 June 2006 (UTC)
- All of these are interesting topics. I would urge caution with the astrology book as it would skirt very close to the original research prohibition on WB:WIW, and I have extreme skepticism that there is anything at all to astrological research. Newspaper Zodiac columns, from my viewpoint, are fiction. While some famous astronomers also were published astrologers, that was more than 400 years ago when it was widespread. If you (or anybody) could show a published research paper on astrological phenomena that was published in A) the last 50 years or so and B) was in a major respected scientific journal such as the New England Journal of Medicine, Nature, or even Sky and Telescope, I would be very much interested in reading that research. It doesn't have to be these exact journals, but certainly something that is widely recognized within the scientific community as a respected publication. The Russian Science Academy may have some articles on the topic now that I think about it. --Rob Horning 19:23, 19 June 2006 (UTC)
- Do you have any children? Do you remember how you thought when you were one? Not sure encouraging them to make horror movies would be a good idea just think how realistic they could get :-)) . On astrology and the Mars effect etc, you could start by going through the weekly New Scientist magazine for pointers, I think that the Mars effect has actually been discredited or at least it is certainly not proven beyond a shadow of a doubt (not even one shadow let alone three), take a look at for a discussion of the various experiments that have been performed in this field. As the effect is still very much disputed it would hardly be a suitable topic for an astrology textbook but it would be very suitable as a case study for an advanced statistics text. I can't see any Wikibooks policy reason why a textbook of astrology as such should not appear on Wikibooks. After all there are astrologers who do the various arcane calculations and presumably they don't all invent them for themselves so there must be texts that treat the subject, that is textbooks. "astrology classes" gets 19k hits on Google so there are classes and presumably textbooks too. Is there a rule that says that the claims made for an activity must be true before a textbook in that field can be created on Wikibooks? This could be an interesting discussion. --kwhitefoot 10:18, 20 June 2006 (UTC)
- Actually, yes, I do have children. Six of them to be exact. And yeah, I guess there is some reverse psychology that could be at play here. Still, my arguments that we should avoid cranks and unproven research should still apply to such a book. As has been pointed out, the gravitational pull of the doctor (or nurse midwife) was greater at your birth than that of Mars. I have extreme skepticism on this particular topic being anything other than pure fiction, and I would consider it such if a bunch of horoscopes appeared on Wikibooks purporting to be anything else. As for a solid book discussing astrology as a Wikibook topic, I have no problem with such a book existing. I might not contribute, but it would be interesting to see a book on the history of Astrology. It would be very difficult, from my viewpoint, to maintain NPOV or no original research standards. --Rob Horning 16:43, 20 June 2006 (UTC)
- The 'do you have children' question was really addressed to Thomas David Kehoe; I often fail to make it clear to which of the participants of the conversation I am addressing a remark, sorry. How do you have time to do any Wikibooks stuff with six, I only have three and that's plenty. --kwhitefoot 18:10, 20 June 2006 (UTC)
I wasn't intending to start a flame-war about astrology. What I've read is the Gauquelin book on the Mars Effect, the article in the Skeptical Inquirer (winter 1991-92) by Suitbert Ertel about the Mars Effect (the quote I remember from the editor was "astrology is the only field of the paranormal that, when scientifically investigated, is vindicated"), plus the Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP, founded by Carl Sagan) study of the Mars Effect.
Regarding children and horror movies, I was thinking of fun cheesy horror movies. E.g., how to make a coffin out of cardboard boxes, how to make a vampire costume and fake blood, what lines Dracula always says ("I vant to drink your blood!"). But introducing play therapy is also possible, i.e., making a horror movie about what really does scare kids. E.g., your grandpa who always wants to show you his heart surgery scar, show you how he takes his insulin shots, show you his false teeth, etc. could be a zombie who wants to show you his heart surgery scar, etc. Or a kid is scared about moving to a new town and starting in a new school. What if his fears are justified -- the kids at the new school really are vampires and werewolves? :-)--Thomas David Kehoe 20:48, 22 June 2006 (UTC)
[edit] Refashioning Conic Sections
I would like to transform Conic Sections into a more organized, complete module. I would like, at least, to dedicate myself to it until it's done enough. With the approval of any who read this, I propose to do the following:
- Send the current page to Conic Sections/Old Page by copying the Wiki source code
- Set up another main page, with the following basic contents (in any order):
- Planes through Cones
- Circles
- Ellipses
- Parabolas
- Hyperbolas
- Features of Conic Sections
- Eccentricity
- Foci, Loci, Directrix
- In Analytical Geometry
- General Equation Form (Ax^2+Bxy+...Ey+F=0)
- Translation of Axes
- Horizontal/Vertical Scaling
- Rotation of Axes
- Rotation of Axes: Examples
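For reference, the general equation form listed in the outline, written out in full, is the general second-degree equation in two variables; for a nondegenerate conic, the sign of its discriminant determines which curve it describes:

```latex
% General second-degree equation in x and y:
\[ Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 \]
% For a nondegenerate conic, the discriminant classifies the curve:
\[ B^2 - 4AC < 0 \quad \text{ellipse (a circle if } B = 0 \text{ and } A = C\text{)} \]
\[ B^2 - 4AC = 0 \quad \text{parabola} \]
\[ B^2 - 4AC > 0 \quad \text{hyperbola} \]
```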
I know a fair amount about Conic Sections, mostly from individual work.
Also, I have much experience with WikiFormatting, so this shouldn't be a problem. I've read several of the WikiBook editing and content guidelines. Unfortunately, I'm not that good at LaTeX, although I'm proficient at a half-similar language (OpenOffice Math). I've picked LaTeX up easily before, and then forgotten it, so why shouldn't I learn it easily again?
So that's my situation and proposal. Any good? Gracenotes T § 01:25, 21 June 2006 (UTC)
- The wiki already saves old pages in the history, so I would not make an "/Old Page". Just begin editing the main Conic Sections page! --Kernigh 02:19, 21 June 2006 (UTC)
[edit] copyvio
Can someone delete Study Guide for CFA Exam Level III and "Study Sessions" section of Study Guide for CFA Exam Level 1? These modules were mentioned as copyright violation by a CFA lawyer writing to Wikipedia's OTRS, which was reported to me at IRC by Amgine. At the moment I don't have time to do it. --Derbeth talk 19:32, 21 June 2006 (UTC)
- Can somebody explain why this wasn't done through the normal copyvio process on Wikibooks? While I understand that lawyers and others want to make official cease and desist letters and threaten the WMF, it should still be something that is investigated and the user who added the content allowed to defend why the content was added. At the very least, and something that Amgine could have done was to simply add the {{copyvio}} tag to the page. Is this an unusual process for other Wikimedia projects?
- I can understand, however, that the OTRS didn't want to make a big deal out of this on the Staff Lounge. I also don't want to do a knee jerk deletion of all content simply because somebody has asked for it to be deleted. It is at least possible that some content on Wikibooks is being copied onto other websites or places first, or that it may be "politically incorrect" to have some Wikibooks content that the contributors here are just fine with. In addition, I would question all of the edits of known copyvio users. --Rob Horning 17:07, 22 June 2006 (UTC)
[edit] Wikibooks:Featured books
At present, there's no single list of completed books or books usable in a classroom in use. I have attempted to combine {{Highlighted}} and Wikibooks:Book of the month bookshelf / Wikibooks:Book of the month on Wikibooks:Featured books, which I modeled after w:Wikipedia:Featured articles. The page has forty-two books listed (fifteen books of the month and twenty-seven other books), about the number of one bookshelf. The inclusion criteria I chose are that a listed book should be:
- Usable in its current state to effectively teach the subject in a classroom
- Near completion (few red-linked chapters or stubs)
- Organized effectively by chapters, following Wikibooks style rules and naming convention, and containing no confusing pages, encyclopaedia articles, or orphans
- Accompanied by a PDF version of at least 100 pages
Voting on a book would be needed in order to bold a book. As the number of quality books grows, I see the featured status criteria becoming more strict, and books that haven't improved to meet the new criteria being removed. I have also included a list of good Wikibooks that should become featured in the near future. I would like the page to replace the {{Highlighted}} template and to be listed on the sidebar, like the Wikipedia version. So what does everyone think? --hagindaz 00:33, 22 June 2006 (UTC)
- Some of those requirements (like having a 100+ page PDF) are pretty steep, but it seems like it could be done over time. I just don't think too many books will become featured as of now if we stick to those requirements. A good goal for books perhaps. I'm positive some of the listed books don't meet all the requirements, but maybe you listed them as future possibilities. -Matt 01:44, 22 June 2006 (UTC)
- I plan to create PDFs for books already listed that don't already have them sometime soon. If a book doesn't meet any other requirement, then please remove it or move it down to "Good Wikibooks." As for the criteria, if anything, I thought that there would be comments on them being too lenient. But my view is that they should start at "barely passable as a textbook" and slowly improve, so please lower them if you like. --hagindaz 02:23, 22 June 2006 (UTC)
This is very useful and will help readers. It is a shame there are not more high school books in the list. RobinH 17:03, 22 June 2006 (UTC)
[edit] Link to wikibooks
Can someone help me? How do I link to wikibooks when editing a page? If I use Whatever, it links only to pt.wikipedia. How do I link to that page in wikibooks?
Bonafé 22:59, 22 June 2006 (UTC)
- Well, from Wikibooks in a different language, you can do it like this Wikijunior. This will provide an interwiki link to the English wikibooks, but not on the sidebar. For the one above (from a different language Wikibooks to the Portuguese Wikibooks), you could do whatever. Or did you mean from a different language Wikipedia to English Wikibooks? Clarification? --LV (Dark Mark) 01:37, 23 June 2006 (UTC)
- Try pt:Whatever. --Whiteknight(talk) (projects) 13:41, 25 June 2006 (UTC)
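Since the replies above were rendered by the wiki (hiding the actual markup), here is a sketch of the interwiki syntax being discussed, assuming the standard Wikimedia interwiki prefixes; the page names are just the examples from this thread:

```wikitext
<!-- From the Portuguese Wikipedia to the Portuguese Wikibooks: -->
[[b:Whatever]]

<!-- From the Portuguese Wikipedia to the English Wikibooks (prefixes can chain): -->
[[b:en:Whatever]]

<!-- From the English Wikibooks, an inline link to the Portuguese Wikibooks;
     the leading colon makes it an inline link instead of an
     interlanguage link in the sidebar: -->
[[:pt:Whatever]]
```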
[edit] Would it be alright to add a manual on...
Improvised munitions? I don't want to turn this into WikiAnarchistCookbook, but I would like to write a guide or two on various improvised explosive devices.
- A book on fireworks would be amusing but be careful of modern anti-terrorist legislation. In many countries you could be jailed just for having a book on how to make munitions. (What has the world come to?) RobinH 08:07, 27 June 2006 (UTC)
- The jails will be full of physicists (one of the standard exercises for undergraduate physics students is to determine the necessary quantities of plutonium or uranium required to make a small nuclear bomb) and chemical and automotive engineers (details of explosive mixtures are rather important if you want to make sure that an internal combustion engine goes or want to make sure that an oil refinery stays). --kwhitefoot 11:03, 27 June 2006 (UTC)
Cyclonite synthesis and thermite synthesis, as well as another explosive synthesis, are on Wikibooks under chemical synthesis (I have still not got the hang of linking on Wikibooks). They were controversial, but are now accepted. Try to avoid writing from the point of view of the wrongdoer. Expressions such as 'enough to blow your hand off' are probably ok as a safety warning, 'enough to blow up a car' probably not ok. I have taken an interest in what sort of stuff like this has been allowed. If you want to contact me please leave a message on my talk page here, or on the English Wikipedia (Dolive21), which I use more. Dolive35 18:41, 27 June 2006 (UTC)
[edit] Voting on Wikijunior
Hi, I'm new. I'd like to add my vote to a proposed Wikijunior book about the alphabet, but the page for current voting on this (3rd) quarter seems to be locked, and no one can edit it to add their vote. Am I missing something, or did I do something wrong? Thanks. Christystockman 07:49, 24 June 2006 (UTC)
- It was meant to be protected to prevent unregistered users from editing. However, I tested this with a new account, and it was preventing that from editing too. I have therefore unprotected the page - so you should be able to edit it now. If the problems that gave rise to the initial protection recur, we will need to consider reprotecting, Jguk 08:13, 24 June 2006 (UTC)
[edit] New Logo Discussion
There is a logo discussion for Wikibooks going on at Meta. See meta:Wikibooks/logo. Dbmag9 19:29, 25 June 2006 (UTC)
- I'm curious about a couple of things regarding this:
- Who is starting this movement to replace the logo? i.e. what is the motivation behind why this needs to change?
- What is wrong with the existing logo?
- If this was the outgrowth of a discussion here on the Staff Lounge or the mailing list (perhaps even an IRC chat), I might be more inclined to support a move like this. At this time we have a project logo that has been used for a couple of years, so IMHO changing the logo should be for something much cleaner and simpler, or something that adds huge value to the project. It is not that I have anything against this, just that the push for this change seems to have come from people external to the typical Wikibooks contributors. --Rob Horning 17:29, 28 June 2006 (UTC)
- You make a good point that I didn't even consider. The Wikibooks logo might not be perfect, but does it really need to change? Who wants it changed, and why? What benefit will a new logo bring us that this logo doesn't? --Whiteknight(talk) (projects) 18:14, 28 June 2006 (UTC)
- Apparently (after some digging around), some of the motivation was by a Russian Wikibooks editor who didn't like what he was seeing, especially when he was trying to translate the Wikibooks "slogan" (Think free. Learn free.) into Russian. I've said on many occasions that Wikibooks was more than en.wikibooks, and this apparently is one of those situations. The idea of changing this slogan I would support much more than changing the logo, and that is something that has come up here on the Staff Lounge in the recent past. Any ideas for a better slogan? --Rob Horning 12:03, 30 June 2006 (UTC)
- "OMG Free Books!". --Whiteknight(talk) (projects) 12:26, 30 June 2006 (UTC)
- Most of the current logo discussions (there are 5) are stimulated in some way by User:Nightstallion. I'm not sure of the exact circumstances about this one. Answers to both questions at the meta page. Dbmag9 19:40, 12 July 2006 (UTC)
[edit] Automotive Books
There are a number of books in the Automotive Engineering bookshelf that don't actually have anything to do with engineering. Some of these books can certainly stay, but a lot of them are simply shelved in the wrong place. I have considered moving some of them to the How-To bookshelf, but not all of them can go there. I would like to keep the entire body of automotive materials together, but I need a good home for them. Should I move them to the Miscellaneous bookshelf? Here is a list of books that I am talking about:
- Automobile repair
- Car maintenance
- Choosing a Car
- Engine repair
- Ford Escape Hybrid
- Nissan 240SX Performance Modification
- Suzuki UC125
- Toyota Prius
--Whiteknight(talk) (projects) 05:42, 26 June 2006 (UTC)
- "Automotive books" in the edit summary made me think that these should be lively books. Methinks a "transport" category is necessary at least (we also have books on bicycles that could go in that category too). I'll start one. Anyone who wants to help me turn Category:Transport into a bookshelf is welcome, Jguk 07:04, 26 June 2006 (UTC)
- That makes good sense to me; Wikipedia does have a "transport" portal for topics like this. I am thinking, though, that this solution may be short-sighted. How about the creation of a "technology" bookshelf, for things that are technological in nature but that don't fit the definition of "science" or "engineering"? We could certainly include a Transport section on that bookshelf. I'll snoop around a bit and see if there are enough technology-related books on other shelves that could be moved to a new technology bookshelf. --Whiteknight(talk) (projects) 14:18, 26 June 2006 (UTC)
- Being Bold. I created such a bookshelf at Wikibooks:Technology bookshelf. I don't know what all headings to include on it, but we can work on that later. --Whiteknight(talk) (projects) 15:53, 26 June 2006 (UTC)
[edit] Blanking copyvio images
I am trying to find out if it is permitted to blank image pages that are allegedly copyvio. I found someone who blanked an (alleged) copyvio, and do not know if this is allowed. The page is File:Servalcat.jpg Dolive35 18:45, 27 June 2006 (UTC)
I now see that it has not been blanked, and that I have made a complete idiot of myself. Dolive35 18:47, 27 June 2006 (UTC)
- I don't know the answer to your query, but well done. Oh and you can simply link to the image by putting a colon in front of it, like so: [[:Image:Servalcat.jpg]]. Which looks like Image:Servalcat.jpg Kellen T 20:22, 27 June 2006 (UTC)
[edit] Upper level category organization
The current version is at Category:Main page. I think it needs some tidying up (or is the current organization acceptable?), so here's my attempt, based on w:Category:Categories.
- Categories
- Bookshelves and departments
- A simple listing of all bookshelves and departments
- Books by topic (or subject)
- (Possibly list bookshelves here)
- All immediate subcategories would be named only after bookshelves
- Arts
- All immediate subcategories would be named after level two headings of the bookshelf
- Visual Arts
- All pages in this category would only be books listed in the bookshelf section (under the L2 heading)
- (Possibly: All subcategories would only be named after books and contain book pages for books that wish to use categories)
- Performing Arts, etc
- Biology, etc
- Book categories
- All subcategories would only be named after books and contain book pages for books that wish to use categories
- Books by other organization schemes
- Includes alphabetical classification, books with print version, books by reading level, Wikistudy, and Wikiprofessional
- Wikibooks administration
- Wikibooks maintenance
- Wikiversity (temporary)
- Wikijunior
Numbers 1, 2, and 3 could possibly be combined. Comments? --hagindaz 01:27, 28 June 2006 (UTC)
- I like the way it is now, to tell you the truth, although maybe it can stand a little tidying up. I guess it doesn't make sense to have some subjects listed up there at the top level, however. Here is my suggestion:
- Wikibooks
- Bookshelves
- Books with Print Version
- Books with PDF Version
- etc...
- Wikijunior
- Wikiversity
- Schools, Courses, etc
- Users
- Subjects
- Art
- Science
- Computing
- Games
- etc...
- After this rearrangement, we could essentially keep all the lower-level categories the same. Categories for individual books (as far as i am concerned) can just stay inside their respective categories. --Whiteknight(talk) (projects) 01:59, 28 June 2006 (UTC)
[edit] New Relationships book
Thanks to Jguk for working on my Basic Book Design. As I said last week, I'd also appreciate help putting my book "Hearts and Minds: How Our Brains Are Hardwired for Relationships" up as a Relationships wikibook. I put up the table of contents and the first chapter.
You can download the book in MS Word format or in text format. Don't upload the photos. Most are public domain but some I had to pay for. I'll upload the public domain photos.--Thomas David Kehoe 22:50, 28 June 2006 (UTC)
[edit] Rewrite of What is Wikibooks
I have a draft rewrite of WB:WIW at Wikibooks:Inclusion criteria/Proposal. I have already presented it on the WIW talk page and on the mailing list, and some amendments have been made in response to the comments offered.
I'd now like to highlight it here. It is not intended to change Wikibooks' scope at all in practice. Instead, it is meant to define what Wikibooks is for in positive terms - and make it easier for new readers to understand what content is and is not suitable for this wiki.
All constructive comments are welcome. I'd like to see all of those dealt with and then make the page live, Jguk 11:53, 29 June 2006 (UTC)
[edit] vfd making an island
So now my book is going through its second VfD (which, according to the policy and votes, would seem to have been listed long enough to take it off the VfD list; there doesn't seem to be any solid policy on that anymore though). Unless someone actually tries to do it, that book won't make much progress toward becoming a textbook. All that can be done is simply to wait until someone does it. If it's deleted, someone who actually does it could come back and have to rewrite the wikibook from scratch. I have to ask why the user Jguk has posted 11 VfDs on that page, many of them reasonable, but still that is a lot; it seems like someone looking for stuff to delete.--V2os 19:05, 30 June 2006 (UTC)
- I think an overall deletion run is okay (but not optimal) for Wikibooks, especially when there were a couple of changes in policy (Jimbo did mandate for video games to leave, so they are in the process of going by now). So I guess Making an Island was just part of that. I'm kinda in "waiting" mode to see what happens with that book and policy in general right now, to see if the consensus changes. Either way, I think that book is a borderline case, and arguments can be made for both sides. --Dragontamer 20:46, 30 June 2006 (UTC)
- I would like to point out that what Jimbo asked to be removed was video game walkthroughs, not simply video game books. And the video game guidelines that were being developed before Jimbo stirred up the pot had discussed and even recommended that books which concentrated on just a walkthrough were not really acceptable on Wikibooks. As for this specific Wikibook, that is a discussion better left for the VfD page itself. As a general rule, I believe that deleting a book which has survived a VfD had better have a very good reason, such as a policy change that has achieved project-wide consensus. The narrow textbook-only policy certainly didn't achieve that, nor was there consensus on what it even meant. And if you are going to cite an ambiguous policy such as not being a textbook, it helps if you can prove the content violates other policies as well, such as being a soapbox or containing copyright violations. There are many reasons to remove content from Wikibooks, and not being a textbook is way down the list of priorities. Content that has serious NPOV problems is something that I think deserves much more attention before you remove it for non-textbook reasons. --Rob Horning 12:37, 2 July 2006 (UTC)
V2os, I do a lot of adminny, tidying up type things. During the course of this I come across a lot of content which is or may be unsuitable for wikibooks. That explains why I have nominated much to VfD - to allow a discussion as to whether some of the material where I am not sure whether it is suitable or not should remain, Jguk 21:54, 30 June 2006 (UTC)
[edit] New books go to the top of WHAT list?
Template:New says: "When you create a new book, you should add a new entry to the TOP of the list."
Uh, what list? The page doesn't have a link or anything to indicate what list is referred to.--Thomas David Kehoe 00:51, 1 July 2006 (UTC)
- The list that's after the <noinclude>. At the bottom of the module. The list that gets transcluded with that template. Kellen T 01:14, 1 July 2006 (UTC)
[edit] collaboration and book of the month
Someone please write some nice descriptions for these books. I have made basic templates, but I am a failure at exciting writing. Kellen T 01:25, 1 July 2006 (UTC)
- Also, we should close voting earlier and make the templates before the actual day they change. That way we won't get redlinks like we had this evening. Kellen T 01:33, 1 July 2006 (UTC)
- I tried to add a bit to the Muggles' Guide section. I wish there were more Commons HP images. -Matt 13:20, 2 July 2006 (UTC)
[edit] Computer Help Wiki
I work for a charity called U Can Do IT (see) which provides computer tuition for disabled people in the UK. I am currently trying to start up a wiki at wiki.ucandoit.org.uk which will fulfill various functions for the charity. Perhaps the most important of these is the provision of a set of Instructions and Course Notes for U Can Do IT students to use while they are taking the course.
I have publicised the wiki amongst other U Can Do IT tutors, but I don't think many of them are particularly experienced in editing wikis. Would any experienced wikipedians be able to provide any help in building the UCanWIKI? Accounts are by invitation only, but there's a link on the main page of the wiki from which you can email me an account request. Even if you don't want to contribute, any general advice (e.g., already-extant sources of computer instructions, general design tips etc etc) would be great.
Thank you! --Jim0203 10:53, 1 July 2006 (UTC) wiki.ucandoit.org.uk
[edit] Help with Orthopaedic Surgery
I have been working for some time on Orthopaedic Surgery. Recently I created two navigation templates. One is for page navigation {{OrthoSimplePageNav}} and the other is for chapter navigation {{OrthoMainTOC}}. The idea of having the chapter navigation was to allow the user to access another chapter in the module easily, which they can do right now. But the problem is that the user is not aware of the chapter he is in unless it's the first page of the chapter, as the link in the template leads to that page. One way to do that would be to categorise pages into the relevant chapters. However, it would make sense to do it through the navigation template. If anyone has ideas about how to sort this out, I would be more than grateful if you let me know. Any other ideas are also welcome. Thanks in advance! BDB 04:32, 3 July 2006 (UTC)
- Well, you could add a parameter to {{OrthoMainTOC}} that would be the name of the chapter that the individual page belongs to. Then, you could add a series of conditional expressions to the navigation template to selectively apply formatting (such as bolding, or underlining, or whatever) to the chapter. This would require some effort, however, so it might be a better idea to simply take an extra parameter that would be a backlink to the chapter, and then display a "Current Chapter: " statement at the bottom of the TOC. If the parameter isn't provided, you don't print out anything. I'll work out a quick example for you, and see if you like it. --Whiteknight(talk) (projects) 21:36, 5 July 2006 (UTC)
- Okay, I made a quick version that can be found at {{OrthoMainTOC/test/}}. This one takes, as an optional parameter, the name of the current chapter, and displays a note at the bottom of the template, as to which chapter the reader is currently in. The formatting is simple and lousy, but you should get the general idea. --Whiteknight(talk) (projects) 21:43, 5 July 2006 (UTC)
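The optional-parameter approach described above might look roughly like this inside the template (a sketch only, assuming the ParserFunctions extension is enabled; the chapter name used is made up for illustration):

```wikitext
<!-- At the bottom of [[Template:OrthoMainTOC]], after the TOC markup: -->
{{#if: {{{chapter|}}}
| Current Chapter: '''{{{chapter}}}'''
}}

<!-- On a page belonging to a chapter (the chapter name here is hypothetical): -->
{{OrthoMainTOC|chapter=Fractures}}

<!-- On a page that passes no parameter, the note simply does not appear: -->
{{OrthoMainTOC}}
```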
- Seems like a splendid idea! Why not put it into action? We can always modify it if necessary.BDB 17:46, 6 July 2006 (UTC)
- Well, I made attempts but failed to make any progress. Some help would be appreciated. Thanks in advance.BDB 18:12, 6 July 2006 (UTC)
- Whew!! That was a lot of experimenting. I finally got it right ... hope it works. But surely this is not the final tweak. Any suggestions are still welcome. BDB 17:07, 7 July 2006 (UTC)
- That looks pretty good! I'm glad you got it to work correctly. Let me know if you need any more help with it. --Whiteknight(talk) (projects) 18:09, 7 July 2006 (UTC)
[edit] Help me turn on DynamicPageLists for wikibooks
Hi everybody; We at the cookbook would like to turn on m:DynamicPageList for the Cookbook so we can create context-sensitive lists from our categories. User:Brion VIBBER has indicated that it would be okay on the bug; but he wants a "Yes, turn this on" from other wikibookians. So please, say "YES" here. Kellen T 20:59, 5 July 2006 (UTC)
- You mean, you want to display newest pages in each category? --Derbeth talk 21:09, 5 July 2006 (UTC)
- I have to admit that I have no idea what DynamicPageLists are, what they do, why they currently aren't turned on, or why it would be an issue. A little more information, and perhaps I will agree. --Whiteknight(talk) (projects) 21:29, 5 July 2006 (UTC)
Ah, sorry for the lack of extra explanation. Here's a fuller description: DPLs would allow us to create transcluded lists of modules by category. The important part is that we can use multiple category filters; so you can generate a list which has pages which are in BOTH category A and category B. This, for instance, would be useful in automatically creating a listing of recipes for an ingredient, diet, etc ("Breakfast recipes containing apples" "Desserts containing apples" "Vegan Desserts" "Chicken Stews", etc, etc, etc). For the cookbook, this should signal the end of hand-maintained lists of recipes -- instead we can maintain the categories only but still have nicely formatted views for each page.
They are not currently turned on because DPLs are a MediaWiki extension, so they need a developer to turn them on explicitly. There shouldn't be any real issue with having them on, but Brion wanted some show of consensus from wikibookians. Kellen T 01:15, 6 July 2006 (UTC)
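For illustration, a DPL of the kind described might look like this in wikitext (a sketch of the Wikimedia DynamicPageList tag syntax; the category names are made up for the example):

```wikitext
<!-- List pages that are in BOTH categories, e.g. vegan desserts: -->
<DynamicPageList>
category=Dessert recipes
category=Vegan recipes
</DynamicPageList>
```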
BTW, something to keep in mind here is that Wikinews uses this feature as well for the "local bureau" pages that are able to pull up the list of say sports stories that happened in New York. Look at wikinews:Portal:New York for a Wikimedia project example of how this is used. I'm not sure what would happen to these pages or indeed much of the Wikinews infrastructure if this were turned off. --Rob Horning 12:16, 7 July 2006 (UTC)
- Well if that's the case, then we should turn them on, but immediately employ them to a large degree so that they can't be turned off again. --Whiteknight(talk) (projects) 12:27, 7 July 2006 (UTC)
[edit] Yes, turn on DPLs for wikibooks
- Kellen T 20:59, 5 July 2006 (UTC)...
- You got my vote. Why not? --Rob Horning 03:02, 6 July 2006 (UTC)
- Based on the above discussion, and the fact that I don't really understand the issue. I'll just cast my vote here. If they do have a performance penalty, and we do need to turn them off in the future however, it will be bad if we have built too much infrastructure around them. --Whiteknight(talk) (projects) 23:52, 6 July 2006 (UTC)
- Yes. pfctdayelise 01:03, 8 July 2006 (UTC)
- Gentgeen 19:07, 9 July 2006 (UTC)
[edit] No, do not turn on DPLs
[edit] Huh?
- Still not sure what all this means, though the WP article on "transclusion" was a mildly interesting read. Johnny 10:17, 8 July 2006 (UTC)
- It means we can use this piece of software to generate lists from categories that show up on normal pages ('transclusion' is maybe not quite correct). In the cookbook, this means that we don't end up with two pages that a user must visit for an ingredient, e.g. the page on Lemon, Cookbook:Lemon and the category containing lemon recipes Category:Lemon recipes. Instead we can use DPLs to put the recipe listing right on Cookbook:Lemon. Kellen T 10:44, 8 July 2006 (UTC)
- Slightly more wiki-savvy users could also use the feature to do recipe sorts. A user could use DPL on one of their user subpages to find, for example, all very easy through medium difficulty beverage recipes that use Oranges and Chocolate, but no milk, or any other combination of types, ingredients, and ratings. The editors could use such a list to prepare print-version Cookbook Pamphlets, with perhaps 100 recipes and enough technique and ingredient pages for the recipes involved. I think it would be a very useful feature. Gentgeen 19:06, 9 July 2006 (UTC)
- Yes, you can do that in certain cases. Consider Category:Dessert recipes, instead of lemon ones. A listing of dessert recipes would benefit from greater organization (see Cookbook:Dessert), which just isn't possible with flat categories. We can categorize the recipes all we want, but the truth is that the categories are not a particularly accessible or obvious way for our users to view recipes. Kellen T 12:13, 10 July 2006 (UTC)
[edit] Template:PokemonBugWater
I'm still waiting for this template. Please copy it over to . Thanks, Gerard Foley 11:48, 6 July 2006 (UTC)
- Thanks to whoever copied it over. I finished cleaning up all the Pokémon pages, so that was the last missing template. I should now have no reason to ever return here so bye all! Gerard Foley 14:30, 6 July 2006 (UTC)
[edit] Wikibooks is reaching three years
On 10 July 2003 the main page of Wikibooks was established and the new Wikimedia project started. I think that the third birthday of Wikibooks is a good opportunity to promote this website. Perhaps we should write a longer news item at Wikinews, or even create some kind of 'press release' summarizing what we have achieved during these three years, as well as showing things that still need to be done? --Derbeth talk 12:21, 6 July 2006 (UTC)
- I'm all for it. I think we could probably get a little bit of press over at slashdot as well, along with some other websites (possibly). 3 years is a long time, and it's amazing to me that i've only been here for 1 year or so. --Whiteknight(talk) (projects) 12:25, 6 July 2006 (UTC)
- July 10th. This is an excellent idea Derbeth. Do you want to make up a draft? Kellen T 12:38, 6 July 2006 (UTC)
To accomplish this task, I started this page, which may be something to look into and see if we can expand it somewhat, and perhaps turn it into a formal press release:
Wikibooks:State of the Project/2006
BTW, it would be nice to get information from other Wikibooks projects in other languages besides English and Polish (thanks Derbeth for the Polish information). --Rob Horning 13:45, 6 July 2006 (UTC)
- I wrote to the German edition asking them to help preparing the state of the project report. Unfortunately, I am not able to help any more since I am going to two-week mountain trip and will be completely offline then. --Derbeth talk 20:22, 6 July 2006 (UTC)
- I added some statistics about the German Wikibooks project onto the page linked above. Please leave a message on de:Wikibooks:Projekt if you need some more or even additional information about our project. -- ThePacker 23:45, 6 July 2006 (UTC)
[edit] Featured books list update
Over the past few days I've added some books I found and removed others. Since I am not familiar with all subjects, books that appear nearly complete to me might actually be missing a lot of content, so I would appreciate it if others would look through the list and double-check that it's accurate. I've also simplified the listing criteria and been bold and added a link to the page to the sidebar, so browsers won't have to look through lots of stubs to find a good book to read. Between the featured books and the good books list lower on the page, Wikibooks seems to have acquired a very impressive collection! And in only three years! Thanks in advance, hagindaz 02:01, 7 July 2006 (UTC)
- I looked over the list. I am a little disheartened to see that only one of the books to which I am the main contributor made it to the list. Although, I realize that a lot of my books are in pretty early states of development. I didn't see many books that definitely don't belong on the list, although there were a few books with a few more red links than I would have liked. Overall, I think it is a good idea to pick out some of our "best and brightest" books, and put them smack-dab on the main page. Good job with that. --Whiteknight(talk) (projects) 02:20, 7 July 2006 (UTC)
Discussion moved to Wikibooks talk:Publication of the Month --Whiteknight (talk) (projects) 18:43, 4 August 2006 (UTC)
[edit] Decimal Classification
Your Card Catalog Office is quite cool and the German Wikibooks are growing. So sooner or later we will have to use a decimal classification too. But at our current state our man-power is too small to make large steps forward. I would like to ask whether there was a vote, or even the idea discussed, to make a Free-Decimal-Classification-Index, e.g. on Meta. I ask this because the Dewey index is not free and we don't have the knowledge to apply the smaller numbers after the first three (from the main category). Thanks for your attention. -- ThePacker 23:51, 8 July 2006 (UTC)
- I would note that the Library of Congress classification is free (as in speech and beer) to use and is a decimal classification system. The Dewey Decimal Classification system is free.... as long as you are using the classification catalog from before 1920. That is the largest problem with the DCC and free projects like Wikibooks and Wikisource, where many categories have been created that are much more modern in origin, even though the basic 100's level classification have been unchanged.
- In terms of trying to come up with a more "modern" classification system, that is an interesting point. It would be interesting to come up with a categorization system that would allow you to look up a book in multiple languages and see related books on the same topic. Right now the best developed original system that comes close is the current bookshelf system in use here on en.wikibooks, and that basic philosophy has been carried over into other language Wikibooks projects as well. This is something that does need some more attention, and establishing a "classification" system is not a trivial task. --Rob Horning 12:34, 9 July 2006 (UTC)
- The German Wikibooks also has bookshelves; they even have their own namespace. I know that a new decimal classification system is a huge task, because it depends on knowledge. But a consistent decimal classification of articles and books would even improve the basic search mechanisms in a wiki. Imagine what could be, if you choose the topic in the search mask, rather than searching for special words in a huge database. Anyway, I wanted to know whether this idea could interest some people to start such a project. Maybe it will be discussed in the next month on your Wikibooks; we discuss this topic also. -- ThePacker 15:36, 9 July 2006 (UTC)
- I would be interested in helping out with the creation of a new classification scheme. It certainly won't be easy, but if we can come up with a good one, and implement it here on wikibooks, it would be a benefit to the community as a whole. Of course, implementing it would involve the creation and adoption of a new policy, and we are historically slow at doing that kind of stuff here on en.wikibooks. --Whiteknight(talk) (projects) 14:04, 10 July 2006 (UTC)
- I don't know how to make such a project on Meta. But at first, we will have to invest some amount of time into such a system, I mean before any voting to adopt it into an enforced policy. Everybody must be sure about it. It could be that our efforts will not be accepted at all. Everyone must be sure about this too. The classification system must come first. -- ThePacker 18:44, 10 July 2006 (UTC)
- Don't go too fast with that. A free classification would be a big thing, especially if it should be ready for international usage. (Maybe the old DDC isn't the best base for that.) A free classification could be used in many projects on the net and would be its own big project, so don't do it fast and ugly ;-) --PatrickD 13:54, 24 July 2006 (UTC)
[edit] Happy Birthday Wikibooks!
Today's the big day... should this be announced on the main page of WP (maybe under "on this day...")? SB_Johnny 11:41, 10 July 2006 (UTC)
- I added a note on our Main Page, but someone else should write some better text. Kellen T 14:05, 10 July 2006 (UTC)
- I added notes at both wikipedia and wikinews, but if someone wrote an actual press release, it'd be more likely to get used. Kellen T 14:42, 10 July 2006 (UTC)
- If I knew how to write a press release, and where to send it, I would offer to help. Beyond that, you have my best wishes! --Whiteknight(talk) (projects) 15:46, 10 July 2006 (UTC)
[edit] Candidates for Speedy Deletion
Could someone clean out candidates for speedy deletion? Some pages have been there for 19 days. Thanks in advance. --Think Fast 14:26, 10 July 2006 (UTC)
- I'll get a few of them now. --Whiteknight(talk) (projects) 23:02, 10 July 2006 (UTC)
- I deleted a bunch. I didn't delete the ones that I could find reason not to, and there are some at the end of the list that I would have deleted, but I got tired of deleting. --Whiteknight(talk) (projects) 23:21, 10 July 2006 (UTC)
- only 19 days. Wow! Somebody has been a diligent admin here! Seriously, don't get a stress attack over this. It takes somebody with a lot of time to keep up with the speedy deletes and often it is a thankless and boring job as well. I also like to let speedy delete markups ferment for some time just to let contributors know that there are objections to the content and potentially respond if they think it should remain, such as turning it into a VfD discussion instead. This isn't to say that a speedy delete should be up for six months without being culled, but don't expect the same speed of service that you might expect on Wikipedia. This isn't Wikipedia and I hope that the deletionist culture there never gets to this project. In other words, relax and trust that it will eventually be dealt with. --Rob Horning 12:20, 11 July 2006 (UTC)
[edit] AIM Link
I want to try and make an AIM link, for instant messenger. Hyperlinks need to have the form of: "aim://goim?username=", or something (I forget the exact syntax, but it's similar to this). Anyway, mediawiki absolutely refuses to let me create a link like this. Any idea how to make it happen, or if it is even possible? --Whiteknight(talk) (projects) 03:08, 11 July 2006 (UTC)
- Yeah, I tried <nowiki> tags, I tried every different variation of hyperlink format that mediawiki offers (at least that I know about), and nothing will just come out working. It's just interesting to me, because I know that "irc://" style links are allowed. --Whiteknight(talk) (projects) 19:40, 11 July 2006 (UTC)
- I don't know much about mediawiki, but I suspect there's a whitelist for link types somewhere. I'm not quite sure of what security implications AIM links might have (can't think of any except perhaps sending messages to the wrong people), but perhaps we can turn them on. Kellen T 22:27, 11 July 2006 (UTC)
- Yeah, I tried the pure link, and I tried embedding it in <nowiki> tags. I tried single brackets, and even double brackets. I'm wondering if there is some kind of CSS class i can utilize, but if there is, i dont know about it. --Whiteknight(talk) (projects) 21:18, 12 July 2006 (UTC)
[edit] Transwiking Video game books
When Jimbo ordered the removal of all video game books, people started to move them to StrategyWiki, because it was a gaming wiki and used the same license. However now it looks like the license is being changed to a custom version, so what will happen to the game books that have yet to be moved? Will a new wiki have to be found for them or something? Gerard Foley 14:40, 12 July 2006 (UTC)
[edit] Protected project
Hi, I used to work on a project called Players_guide_for_Star_Sonata before it was moved to StrategyWiki. Now all that remains is a link to SW. However, the StrategyWiki project has been deprecated and replaced with a dedicated wiki. I was wondering if I could get an admin to add a link to the project page, since it's protected. The address to the new wiki is and its proper title is "The Lyceum Archives".
Thanks, ArrowHate 19:51, 12 July 2006 (UTC)
- I'll simply put a link to both. Hows that? --Dragontamer 15:32, 14 July 2006 (UTC)
- Yeah, that would be great. 63.80.111.2 17:21, 18 July 2006 (UTC)
[edit] Category:Candidates for speedy deletion
Ehm... it's very full there. It could be useful to delete some of the listed pages and images. Most of the images have been in the list for nearly a month or so. Thanks. -- 85.176.120.162 14:41, 16 July 2006 (UTC)
- Please see Wikibooks:Staff_lounge#Candidates_for_Speedy_Deletion. Wikibooks is slower than wikipedia as we have fewer admins and less public attention overall. These will eventually be cleaned up. Kellen T 18:05, 16 July 2006 (UTC)
- Sure, but one month is a bit long anyway (It would be too long in my eyes with one admin as well). -- John N. 19:56, 16 July 2006 (UTC)
- It is certainly not, and please, stop posting this type of message here on the Staff Lounge. The next one I see here I will simply remove from the Staff Lounge altogether. This is expecting the service and behavior of paid staff when we are all a group of volunteers, and I've railed against that in the past too. It will be dealt with, and leave it at that. It is not like admins aren't aware of this category and have never removed content from here in the past. --Rob Horning 13:18, 17 July 2006 (UTC)
In this Hadoop interview questions post, we have included all the regularly asked questions, with high-grade answers, to help you ace the interview. The market for Big Data and Hadoop specialists is continuously growing.
Earlier, companies were mostly concerned with operational data, which represented less than 20% of their entire data. Later, they understood that analyzing all of their data would provide genuine business insights and decision-making ability. This was the period when big players like Yahoo, Facebook, Google, etc. began utilizing Hadoop and Big Data related technologies. Nowadays, every fifth organization is moving to Big Data analytics, so the demand for Big Data and Hadoop jobs keeps increasing. If you want to boost your career, Hadoop and Spark are the technologies you need, and they can give you a great start whether you are a fresher or an experienced candidate.
Top 50 Bigdata Hadoop Interview Questions And Answers Pdf
Companies big or small are looking for quality Big Data and Hadoop specialists. Go through these top Hadoop interview questions to land a job in the Big Data market, whether with local or global enterprises. This definitive list of top Hadoop interview questions guides you through questions and answers on various topics like MapReduce, Pig, Hive, HDFS, HBase, and Hadoop Cluster.
Here are the top 50 objective-type sample Hadoop interview questions, with their answers given just below them. These sample questions are framed by experts from SVR Technologies, who train students in Hadoop online training, to give you an idea of the type of questions that may be asked in an interview. We have taken complete care to provide accurate answers to all the questions.
1. What is fsck?
Answer: fsck stands for File System Check. Hadoop HDFS uses it to check the file system for inconsistencies such as missing, corrupt, or under-replicated blocks. Usage:

hdfs fsck <path>
[-list-corruptfileblocks |
[-move | -delete | -openforwrite]
[-files [-blocks [-locations | -racks]]]]
[-includeSnapshots]

path – Start checking from this path.
-move – Move corrupted files to /lost+found.
-delete – Delete corrupted files.
-openforwrite – Print out files opened for write.
-files – Print out the checked files.
-files -blocks – Print out the block report.
-files -blocks -locations – Print out locations for every block.
-files -blocks -racks – Print out network topology for data-node locations.
-includeSnapshots – Include snapshot data if the given path indicates or includes a snapshot table directory.
-list-corruptfileblocks – Print out the list of missing blocks and the files they belong to.
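A typical invocation, assuming a running HDFS cluster (the paths below are illustrative), looks like this:

```shell
# Check the whole HDFS namespace and print files, blocks, and their locations
hdfs fsck / -files -blocks -locations

# Print only the corrupt file blocks
hdfs fsck / -list-corruptfileblocks
```

The command finishes with a summary reporting the total number of blocks, the average replication factor, and whether the file system is HEALTHY or CORRUPT.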
2. Can free form SQL queries be used with Sqoop import command? If yes, then how can they be used?
Answer: Sqoop allows us to use free-form SQL queries with the import command. The import command should be used with the -e and --query options to execute free-form SQL queries. When using the -e or --query option with the import command, the --target-dir value must be specified.
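A sketch of such an import (the connection string, credentials, query, and paths are placeholders; --query requires the $CONDITIONS token so Sqoop can split the work):

```shell
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username dbuser -P \
  --query 'SELECT o.id, o.total FROM orders o WHERE o.total > 100 AND $CONDITIONS' \
  --split-by o.id \
  --target-dir /user/hadoop/orders_large
```

Because no table name is given, Sqoop cannot derive a default HDFS directory, which is why --target-dir is mandatory here.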
3. Explain about ZooKeeper in Kafka?
Answer: Kafka uses ZooKeeper to store and coordinate cluster metadata: it tracks which brokers are alive, elects the controller (the broker that manages partition leadership), and stores topic and partition configuration. Brokers register themselves in ZooKeeper when they start, and Kafka cannot run without a ZooKeeper ensemble.
4. Differentiate between Sqoop and dist CP?
Answer: DistCP utility can be used to transfer data between clusters whereas Sqoop can be used to transfer data only between Hadoop and RDBMS.
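To make the contrast concrete, here is a sketch of both tools (host names, database names, and paths are placeholders):

```shell
# DistCp: copy data between two Hadoop clusters
hadoop distcp hdfs://namenode1:8020/data hdfs://namenode2:8020/backup

# Sqoop: move data between an RDBMS and Hadoop
sqoop import --connect jdbc:mysql://dbhost/mydb \
  --table customers --target-dir /user/hadoop/customers
```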
5. Is it suggested to place the data transfer utility sqoop on an edge node?
Answer:.
6. Does Flume provide 100% reliability to the data flow?
Answer: Yes, Apache Flume provides end to end reliability because of its transactional approach in the data flow.
7. How can Flume be used with HBase?
Answer: Flume provides two sinks for HBase: HBaseSink and AsyncHBaseSink.
HBaseSink supports secure HBase and works with an HBaseEventSerializer.
AsyncHBaseSink implements the AsyncHBaseEventSerializer. The initialize method is called only once by the sink when it starts. The sink invokes the setEvent method and then makes calls to the getIncrements and getActions methods, similar to the HBase sink. When the sink stops, the cleanUp method is called by the serializer.
8. Explain the different channel types in Flume. Which channel type is faster?
Answer: Flume has three channel types: MEMORY, JDBC, and FILE. The MEMORY channel keeps events in an in-memory queue and is the fastest, but events are lost if the agent process fails. The JDBC channel stores events in an embedded database. The FILE channel persists events to disk and is the most reliable.
9. What are the limitations of importing RDBMS tables into Hcatalog directly?
Answer: There is an option to import RDBMS tables into HCatalog directly by making use of the --hcatalog-database option with --hcatalog-table, but the limitation is that several arguments like --as-avrodatafile, --direct, --as-sequencefile, --target-dir, and --export-dir are not supported.
10. Which is the reliable channel in Flume to ensure that there is no data loss?
Answer: FILE Channel is the most reliable channel among the 3 channels JDBC, FILE and MEMORY.
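A minimal Flume agent configuration fragment for a FILE channel might look like this (the agent name, channel name, and directories are illustrative):

```
# FILE channel: events are persisted to disk, surviving agent restarts
agent1.channels = c1
agent1.channels.c1.type = file
agent1.channels.c1.checkpointDir = /var/flume/checkpoint
agent1.channels.c1.dataDirs = /var/flume/data
```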
11. Does Apache Flume provide support for third-party plug-ins?
Answer: Yes. Apache Flume has a plug-in based architecture, so most data analysts extend it with third-party plug-ins to load data from external sources and transfer it to external destinations.
12. Name a few companies that use Zookeeper?
Answer: Yahoo, Solr, Helprace, Neo4j, Rackspace.
13. Is it possible to leverage real-time analysis on the big data collected by Flume directly? If yes, then explain how?
Answer: Yes. Data from Flume can be extracted, transformed, and loaded in real-time into Apache Solr servers using MorphlineSolrSink.
14. Explain how Zookeeper works?
Answer: ZooKeeper runs on an ensemble of servers. Three or more independent servers collectively form a ZooKeeper cluster and elect a master. A client connects to any one of the servers and migrates if that particular node fails. The ensemble of ZooKeeper nodes stays alive as long as the majority of nodes are working. The master node in ZooKeeper is dynamically selected by consensus within the ensemble, so if the master node fails, the role of master migrates to another node, which is selected dynamically. Writes are linear and reads are concurrent in ZooKeeper.
15. Differentiate between FileSink and FileRollSink?
Answer: The major difference between HDFS FileSink and FileRollSink is that HDFS File Sink writes the events into the Hadoop Distributed File System (HDFS) whereas File Roll Sink stores the events into the local file system.
16. What is the role of Zookeeper in HBase architecture?
Answer: In HBase architecture, ZooKeeper is the monitoring server that provides different services like –tracking server failure and network partitions, maintaining the configuration information, establishing communication between the clients and region servers, the usability of ephemeral nodes to identify the available servers in the cluster.
17. List some examples of Zookeeper use cases?
Answer: Found by Elastic uses ZooKeeper comprehensively for resource allocation, leader election, high-priority notifications, and discovery. The entire Found service is built up of various systems that read from and write to ZooKeeper.
Apache Kafka that depends on ZooKeeper is used by LinkedIn
Storm, which relies on ZooKeeper, is used by popular companies like Groupon and Twitter.

Explain the replicating and multiplexing selectors in Flume: Channel selectors are used to handle multiple channels. A replicating selector sends every event to all the configured channels, whereas the multiplexing channel selector is used when the application has to send different events to different channels.
18. What are the additional benefits YARN brings in to Hadoop?
Answer: Effective utilization of the resources as multiple applications can be run in YARN all sharing a common resource. In Hadoop MapReduce, there are separate slots for Map and Reduce tasks whereas in YARN there is no fixed slot. The same container can be used for Map and Reduce tasks leading to better utilization.
YARN is backward compatible, so all the existing MapReduce jobs can run on it without modification.
Using YARN, one can even run applications that are not based on the Map-Reduce model.
19. What are the modules that constitute the Apache Hadoop 2.0 framework?
Answer: Hadoop 2.0 contains four important modules of which 3 are inherited from Hadoop 1.0 and a new module YARN is added to it.
Hadoop Common – This module consists of all the basic utilities and libraries required by other modules.
HDFS- Hadoop Distributed file system that stores huge volumes of data on commodity machines across the cluster.
MapReduce- Java based programming model for data processing.
YARN- This is a new module introduced in Hadoop 2.0 for cluster resource management and job scheduling.
20. What are the different types of Znodes?
Answer: There are 2 types of Znodes namely- Ephemeral and Sequential znodes.
The Znodes that get destroyed as soon as the client that created it disconnects is referred to as Ephemeral znodes.
Sequential Znode is the one in which sequential number is chosen by the ZooKeeper ensemble and is pre-fixed when the client assigns a name to the node.
21. Explain about cogroup in Pig?
Answer: COGROUP in Pig groups tuples from two or more relations in a single statement. It works like GROUP but operates on multiple relations at once: the relations are grouped by a common field, and the result contains one record per group value with a bag of matching tuples from each relation.
22. How to use Apache Zookeeper command-line interface?
Answer: ZooKeeper has command-line client support for interactive use. The command-line interface of ZooKeeper is similar to the file and shell system of UNIX. Data in ZooKeeper is stored in a hierarchy of Znodes where each node can contain data just similar to a file. Each node can also have children just like directories in the UNIX file system.
Zookeeper-client command is used to launch the command-line client. If the initial prompt is hidden by the log messages after entering the command, users can just hit ENTER to view the prompt.
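For example, inside the ZooKeeper command-line client the znode operations look like this (paths and data are illustrative):

```shell
# create a persistent znode holding some data
create /app1 "config-data"
# create an ephemeral znode (-e): deleted automatically when this client session ends
create -e /app1/worker1 "alive"
# create a sequential znode (-s): ZooKeeper appends a monotonically increasing suffix
create -s /app1/task- "payload"
# read a znode's data and list its children, like cat and ls in UNIX
get /app1
ls /app1
```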
23. What are different modes of execution in Apache Pig?
Answer: Apache Pig runs in two modes: local mode, in which Pig runs in a single JVM and uses the local file system (suitable for small data sets), and MapReduce mode, in which Pig translates queries into MapReduce jobs and runs them on a Hadoop cluster.
24. What are the watches?
Answer: Watches are one-time triggers that a client sets on a znode. When the znode's data or its children change, ZooKeeper sends a notification to the client that set the watch. Client disconnection might cause a notification to be missed, so watches should be re-registered after reconnecting.
25. How can you connect an application, if you run Hive as a server?
Answer: When Hive runs as a server (HiveServer), an application can connect in one of three ways: through the ODBC driver, through the JDBC driver, or through the Thrift client, which supports clients written in different programming languages like PHP, Python, Java, C++, and Ruby.
26. Explain the differences between Hadoop 1.x and Hadoop 2.x?
Answer: In Hadoop 1.x, MapReduce is responsible for both processing and cluster management whereas in Hadoop 2.x processing is taken care of by other processing models and YARN is responsible for cluster management.
Hadoop 2.x scales better when compared to Hadoop 1.x with close to 10000 nodes per cluster.
Hadoop 1.x has a single point of failure problem and whenever the NameNode fails it has to be recovered manually. However, in case of Hadoop 2.x StandBy NameNode overcomes the SPOF problem and whenever the NameNode fails it is configured for automatic recovery.
Hadoop 1.x works on the concept of slots whereas Hadoop 2.x works on the concept of containers and can also run generic tasks.
27. What are the core changes in Hadoop 2.0?
Answer: The core changes in Hadoop 2.0 are YARN, which separates cluster resource management from data processing; HDFS federation, which allows multiple NameNodes to share a cluster; and NameNode high availability, which removes the single point of failure. Together these changes let a cluster scale to more nodes and run a larger number of jobs.
28. What problems can be addressed by using Zookeeper?
Answer: ZooKeeper addresses common coordination problems in distributed applications: naming service, configuration management, distributed synchronization (locks and barriers), leader election, and group membership. It lets distributed processes coordinate through a shared hierarchical namespace of znodes instead of implementing these error-prone primitives themselves.
29. What does the overwrite keyword denote in Hive load statement?
Answer: The overwrite keyword in a Hive load statement deletes the contents of the target table and replaces them with the files referred to by the file path; without it, the files referred to by the file path are appended to the table's existing data.
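A sketch of the difference (the table and path names are placeholders):

```shell
# Append the file's rows to the table's existing data
hive -e "LOAD DATA INPATH '/user/hadoop/sales.csv' INTO TABLE sales;"

# Delete the table's current contents first, then load the file
hive -e "LOAD DATA INPATH '/user/hadoop/sales.csv' OVERWRITE INTO TABLE sales;"
```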
30. For what kind of big data problems, did the organization choose to use Hadoop?
Answer:.
31. What is SerDe in Hive? How can you write your own custom SerDe?
Answer: SerDe stands for Serializer/Deserializer. Hive uses a SerDe to read rows from and write rows to files in HDFS: the deserializer turns a record's bytes into a row object, and the serializer does the reverse. To write a custom SerDe you implement the SerDe interface, but for many custom record formats you can configure the existing Dynamic SerDe rather than writing a SerDe from scratch.
32. Differentiate between NFS, Hadoop NameNode and JournalNode?
Answer: NFS (Network File System) is a protocol through which a local file system is accessed by applications on remote machines.
Namenode is the heart of the HDFS file system that maintains the metadata and tracks where the file data is kept across the Hadoop cluster.
StandBy Nodes and Active Nodes communicate with a group of lightweight nodes to keep their state synchronized. These are known as Journal Nodes.
33. How can native libraries be included in YARN jobs?
Answer: There are two ways to include native libraries in YARN jobs-
1) By setting the -Djava.library.path on the command line but in this case, there are chances that the native libraries might not be loaded correctly and there is a possibility of errors.
2) The better option to include native libraries is to set the LD_LIBRARY_PATH in the .bashrc file.
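Both options can be sketched as follows (the jar name, class name, and library paths are placeholders):

```shell
# Option 1: pass the native library path on the command line
# (libraries may not always load correctly this way)
hadoop jar myjob.jar MyJob -Djava.library.path=/opt/native/lib

# Option 2 (preferred): export LD_LIBRARY_PATH in ~/.bashrc
# so every shell that launches YARN containers inherits it
echo 'export LD_LIBRARY_PATH=/opt/native/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
```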
34. What are the various tools you used in the big data and Hadoop projects you have worked on?
Answer: Your answer to these interview questions will help the interviewer understand your expertise in Hadoop based on the size of the Hadoop cluster and number of nodes. Based on the highest volume of data you have handled in your previous projects, the interviewer can assess your overall experience in debugging and troubleshooting issues involving huge Hadoop clusters.
The number of tools you have worked with helps an interviewer judge whether you are aware of the overall Hadoop ecosystem and not just MapReduce. In the end, it all depends on how well you communicate the answers to all these questions.
35. How is the distance between two nodes defined in Hadoop?
Answer: Measuring bandwidth is difficult in Hadoop, so the network is represented as a tree. The distance between two nodes in the tree plays a vital role in forming a Hadoop cluster and is defined by the network topology and the Java interface DNSToSwitchMapping.
36. What is your favorite tool in the Hadoop ecosystem?
Answer: If you name tools such as Pig, Hive, HBase, Sqoop, or Flume, it shows that you have knowledge of the Hadoop ecosystem as a whole, and you can then explain why the one you pick is your favorite.
37. What is the size of the biggest Hadoop cluster a company X operates?
Answer:.
38. What are the features of Pseudo mode?
Answer: In pseudo-distributed mode, Hadoop runs on a single machine, but each daemon runs in its own Java process; the master and slave node are the same.
The pseudo mode is suitable both for development and for the testing environment. In the pseudo mode, all the daemons run on the same machine.
Data Quality – In the case of Big Data, data is very messy, inconsistent, and incomplete; cleaning it is a very expensive process.
HDFS – The storage layer of Hadoop replicates data so that it remains available even in case of hardware failure. It provides high-throughput access and supports lightweight processing like aggregation/summation.
YARN – YARN is the processing framework in Hadoop. It provides resource management and allows multiple data processing engines, for example real-time streaming, data science, and batch processing.
Easy to use – The client does not need to deal with distributed computing; the framework takes care of all of that, so it is easy to use.
39. How were you involved in data modeling, data ingestion, data transformation, and data aggregation?
Answer:.
40. What are the features of Fully-Distributed mode?
Answer: In this mode, all daemons execute in separate nodes forming a multi-node cluster. Thus, we allow separate nodes for Master and Slave.
We use this mode in the production environment, where 'n' number of machines form the cluster, each slave machine running its corresponding NodeManager.
41. In your previous project, did you maintain the Hadoop cluster in-house or used Hadoop in the cloud?
Answer: Most of the organizations still do not have the budget to maintain Hadoop cluster in-house and they make use of Hadoop in the cloud from various vendors like Amazon, Microsoft, Google, etc. The interviewer gets to know about your familiarity with using Hadoop in the cloud because if the company does not have an in-house implementation then hiring a candidate who has knowledge about using Hadoop in the cloud is worth it.
42. What are the modes in which Hadoop run?
Answer:
Apache Hadoop runs in three modes:
Local (Standalone) Mode – By default Hadoop runs in a single-node, non-distributed mode, as a single Java process. The local mode uses the local file system for input and output operations. It is also used for debugging purposes.
Pseudo-Distributed Mode – Hadoop also runs on a single node, but each daemon runs in a separate Java process; the master and slave node are the same.
Fully-Distributed Mode – In this mode, all daemons execute in separate nodes forming a multi-node cluster, which allows separate nodes for Master and Slave.
43. Compare Hadoop and RDBMS?
Answer: Apache Hadoop is the future of the database because it stores and processes a large amount of data, which would not be possible with a traditional database. There are some differences between Hadoop and an RDBMS, as follows:
Architecture – A traditional RDBMS has ACID properties, whereas Hadoop is a distributed computing framework with two main components: a distributed file system (HDFS) and MapReduce.
Data acceptance – RDBMS accepts only structured data. While Hadoop can accept both structured as well as unstructured data. It is a great feature of Hadoop, as we can store everything in our database and there will be no data loss.
Scalability – An RDBMS is a traditional database which provides vertical scalability: if the data to be stored increases, we have to scale up the configuration of a particular system. Hadoop provides horizontal scalability: we just add one or more nodes to the cluster whenever there is a requirement to handle more data.
OLTP (Real-time data processing) and OLAP – A traditional RDBMS supports OLTP (real-time data processing). OLTP is not supported in Apache Hadoop; Apache Hadoop supports large-scale batch processing workloads (OLAP).
Cost – An RDBMS is licensed software, therefore we have to pay for it, whereas Hadoop is an open-source framework, so we don't need to pay for the software.
If you have any doubts or queries regarding Hadoop Interview Questions at any point you can ask that Hadoop Interview question to us in the comment section and our support team will get back to you.
44. How is security achieved in Hadoop?
Answer: Apache Hadoop achieves security by using Kerberos. At a high level, there are three steps that a client must take to access a service when using Kerberos: authentication (the client authenticates itself to the authentication server and receives a timestamped ticket), authorization (the client uses that ticket to request a service ticket from the Ticket Granting Server), and service request (the client uses the service ticket to authenticate itself to the server running the service it wants).
45. What are the features of Standalone (local) mode?
Answer: By default, Hadoop runs in a single-node, non-distributed mode, as a single Java process. It uses the local file system instead of HDFS, runs no daemons, and needs no custom configuration, which makes it the fastest mode and well suited for debugging.
46. What are the limitations of Hadoop?
Answer: Various limitations of Hadoop are:
The issue with small files – Hadoop is not suited for small files, which are a major problem in HDFS. A small file is significantly smaller than the HDFS block size (default 128MB). HDFS is designed for a small number of large files rather than a large number of small files, so storing a huge number of small files overloads the NameNode, since the NameNode stores the namespace of HDFS.
HAR files, Sequence files, and Hbase overcome small files issues.
Processing Speed – With its parallel and distributed algorithm, MapReduce processes large data sets through the Map and Reduce tasks. Because data is distributed and processed over the cluster, these tasks take a lot of time, which increases latency and reduces processing speed. Hadoop also does not support cyclic data flow, that is, a chain of stages in which the input to the next stage is the output from the previous stage.
Vulnerable by nature – Hadoop is entirely written in Java, a language that is most widely used and hence most heavily targeted by cyber criminals, which makes Hadoop vulnerable to security breaches.
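Hadoop archives (HAR files), mentioned above as a workaround for the small-files problem, pack many small files into one archive; a sketch with illustrative paths:

```shell
# Pack the small files under /user/hadoop/input into one archive
hadoop archive -archiveName small.har -p /user/hadoop/input /user/hadoop/archives

# Files inside the archive are addressed through the har:// scheme
hdfs dfs -ls har:///user/hadoop/archives/small.har
```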
The core Hadoop interview questions above are aimed at experienced candidates, but freshers and students can also read and refer to them for advanced understanding.
47. Explain Data Locality in Hadoop?
Answer: Hadoop stores data in HDFS, spread across the datanodes in the Hadoop cluster. Rather than moving data to the computation, Hadoop moves the computation to the data: when a user runs a MapReduce job, the NameNode sends the MapReduce code to the datanodes on which the data related to the MapReduce job is available.
Data locality has three categories:
Data local – In this category data is on the same node as the mapper working on the data. In such a case, the proximity of the data is closer to the computation. This is the most preferred scenario.
Intra-Rack – In this scenario the mapper runs on a different node but on the same rack, as it is not always possible to execute the mapper on the same data node due to constraints.
Inter-Rack – In this scenario the mapper runs on a different rack, used when it is not possible to execute the mapper on any node of the same rack due to resource constraints; this is the least preferred scenario.
48. What are the different commands used to startup and shutdown Hadoop daemons?
Answer:
• To start all the Hadoop daemons use: ./sbin/start-all.sh
Then, to stop all the Hadoop daemons use: ./sbin/stop-all.sh
• You can also start all the HDFS daemons together using ./sbin/start-dfs.sh, the YARN daemons together using ./sbin/start-yarn.sh, and the MR Job History Server using ./sbin/mr-jobhistory-daemon.sh start historyserver. Then, to stop these daemons, use:
./sbin/stop-dfs.sh
./sbin/stop-yarn.sh
./sbin/mr-jobhistory-daemon.sh stop historyserver
49. What does jps command do in Hadoop?
Answer: The jps command helps us check whether the Hadoop daemons are running or not. It shows all the Hadoop daemons running on the machine: NameNode, DataNode, ResourceManager, NodeManager, etc.
fs.checkpoint.dir is the directory on the file system where the Secondary NameNode stores the temporary images of the edit logs. The edit logs and FsImage are then merged for backup.
50. How to debug Hadoop code?
Answer: First, check the list of MapReduce jobs currently running. Then check whether any orphaned jobs are running; if yes, you need to determine the location of the ResourceManager logs:
Run "ps -ef | grep -i ResourceManager" and look for the log directory in the displayed result. Find the job-id in the displayed list, and then check whether there is an error message associated with that job.
Note: Browse latest Bigdata Hadoop Interview Questions and Bigdata Tutorial Videos. Here you can check Hadoop Training details and Hadoop Training Videos for self learning. Contact +91 988 502 2027 for more information.
YARD - Code Metadata And Documentation Generation for Ruby
One distinguishing feature is the support for metadata in comment strings. Users of tools like Javadoc will find the meta tag notation familiar (from the YARD Readme):
# Reverses the contents of a String or IO object.
#
# @param [String, #read] contents the contents to reverse
# @return [String] the contents reversed lexically
def reverse(contents)
  contents = contents.read if contents.respond_to? :read
  contents.reverse
end
The contents parameter shows how to add type hints to the method argument. The argument can either be a String, or any class with a #read method (InfoQ previously discussed different approaches to type annotations in relation to protocols and duck typing).
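This duck-typed contract can be exercised directly. The following is a small runnable sketch of the idea; the method name reverse_contents and the StringIO usage are ours, not part of YARD, and only illustrate why a single method can accept both a String and any object with #read:

```ruby
require 'stringio'

# Reverses a String, or anything that responds to #read (duck typing).
# This mirrors the "@param [String, #read]" annotation from the YARD example.
def reverse_contents(contents)
  contents = contents.read if contents.respond_to?(:read)
  contents.reverse
end

puts reverse_contents("hello")                # works on a plain String
puts reverse_contents(StringIO.new("olleh"))  # works on an IO-like object
```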
Both RDoc 2.1 and YARD provide ways to document metaprogrammed methods. With RDoc, a comment starting with "##" defines a comment for a metaprogrammed method. The heuristic is to ignore the identifier immediately following the comment and take the following token as the method name:
##
# Does stuff.
add_method :foo
Methods that don't actually appear in the source code, but are part of the class interface can be documented like this:
##
# :method: invisible_method
YARD is built with pluggable handlers - in fact, the basic functionality is implemented as handlers. A new handler can be added by extending the YARD::Handlers::Base class and overriding its process method, which allows writing handlers for custom constructs, such as internal DSLs, or behavior similar to RDoc's solutions.
Running YARD on a project creates a .yardoc database which caches the gathered code structure and data. YARD's yri tool, which works like ri, uses this database to allow for interactive documentation lookup. YARD can also use the cached information in the database to generate output in multiple formats, without having to analyze the source repeatedly. YARD's cache is similar to the code indexes created by IDEs to allow for advanced code search (i.e. searching for language constructs, not just fulltext search), code browsing, or refactoring tools that need to be aware of all the code in a project.
The idea to provide metadata about methods in comments has been around for some time in the Ruby space. Some projects promote optional type annotations in comments. Merb, for instance, has the following documentation guideline:
All methods (public and private) are required to provide a clear method signature, including the types for any parameters, and the possible values for any options hashes, as well as return types and other information.
Merb currently uses a different notation to specify the method signature.
Another approach is supported in SapphireSteel's Ruby In Steel IDE: Type Assertions. The metadata uses a different format than the @tag of Javadoc or YARD, but it is indexed and used for documentation and to help IntelliSense (a screencast describes how to help IntelliSense with Type Assertions).
Change the language, not the documentation style
by Daniel Berger
For example, let's say we implement some form of Duby, with annotations to boot. You'll end up with something like this:
def reverse(String contents) => String
...
end
Meaning contents is a String, and the method returns a String. In addition to making it easier for document generators, it provides additional, inspectable metadata for that method.
Note that I'm not even suggesting that the argument type be enforced. It could be strictly for document generation, or it could merely emit a warning or whatever we want it to do.
render: function () {
    // custom stuff before
    view.prototype.render.call(this);
    // custom stuff after
}
$ git clone
$ cd moonboots_hapi/
$ git checkout v7.0.0
$ npm install
$ npm test

23 tests complete
Test duration: 1900 ms
The following leaks were detected: Reflect
Coverage: 85.29% (25/170)
index.js missing coverage on line(s): 6, 16, 19, 31, 37, 48, 54, 55, 56, 57, 58, 59, 77, 78, 79, 80, 81, 96, 97, 98, 99, 100, 121, 128, 142
npm ERR! Test failed. See above for more details.
import {LayoutView} from 'backbone.marionette';
import template from './template';

export default LayoutView.extend({
    modelEvents: {
        change: 'render'
    },
    template: template,
    templateHelpers() {
        return this.model.toJSON();
    }
});
When I call save() on my ampersand-model, the derived and children fields are posted to the API along with the props. Is there a way to stop this happening? I have a projectId prop, and project is one of the model's children. I only want to send the projectId to the API on save, not the project model.
My friend told me that when constructor is used in a function, Webpack spits out an error on the console saying "Uncaught TypeError: _WEBPACK_IMPORTED_MODULE_0_thatmodule__.a is not a constructor". He showed me the error. He said that, in order to fix it, the word constructor should be replaced with initialize. The module being used and exported is ampersand-state, and it's in src/profile/index.js.
import State from 'ampersand-state';

export default State.extend({
    // ... code removed for clarity
Then on the outer index.js (not src/profile/index.js), the code is

import Profile from 'src/profile';
export default module.exports = Profile;
What are your thoughts on the constructor issue?
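The advice makes sense if you look at how Backbone/Ampersand-style extend works: the object you pass to extend is copied onto the prototype as plain data, so a constructor key there never becomes a real constructor function, while initialize is what the actual constructor calls. This is a minimal, library-free sketch of the pattern, not ampersand-state's actual source:

```javascript
// Minimal Backbone/Ampersand-style extend: subclasses customize via
// initialize(), not constructor — the spec object is plain data, so a
// `constructor` key there never becomes a real constructor function.
function State() { this.initialize.apply(this, arguments); }
State.prototype.initialize = function () {};
State.extend = function (spec) {
  function Child() { State.apply(this, arguments); }
  Child.prototype = Object.assign(Object.create(State.prototype), spec);
  Child.prototype.constructor = Child;
  return Child;
};

const Profile = State.extend({
  initialize(name) { this.name = name; } // runs on `new Profile(...)`
});
console.log(new Profile('ada').name); // "ada"
```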
Reusing External Library Components in the Creek CBR System

Erik Stiklestad
Norwegian University of Science and Technology, Department of Computer and Information Science
Problem Description

The Creek system has an architecture that facilitates combined case-based and model-based reasoning. jColibri, developed in the CBR group of Universidad Complutense in Madrid, contains a library of CBR system components intended for sharing and reuse, and an ontology (CBROnto) of CBR methods for explicit modelling of a CBR system's operation. In this master degree project, the Creek framework and the jColibri structure shall be compared with the aim of developing a mechanism for importing jColibri components into Creek, so that they can be integrated into a running Creek system. The mechanism shall be exemplified through selection of a few (two or more) specific components, and integration of these components into an implemented demonstrator system.
Abstract

The Creek system has an architecture that facilitates combined case-based and model-based reasoning. The jColibri system, developed by the CBR group of Universidad Complutense in Madrid, contains a library of CBR system components intended for sharing and reuse. The system also contains an ontology (CBROnto) of CBR tasks and methods for explicit modelling of CBR systems, in addition to general CBR terminology. In this master degree project, Creek and jColibri are compared with the aim of developing a mechanism for importing jColibri components to Creek, so that they can be integrated into a running Creek system. The mechanism is exemplified through selection of a few specific components, and integration of these components into an implemented demonstrator system. In addition, efforts needed to bring Creek into the jColibri framework are identified.
Preface

This document presents the work by Erik Stiklestad in TDT4900, which is a Master's thesis in Computer Science (Datateknikk). It is written for the Artificial Intelligence and Learning Group (AIL) at the Norwegian University of Science and Technology (NTNU), and the software company Volve AS. The goal is to analyze and compare Creek and jColibri with the aim of developing a mechanism for importing jColibri components into Creek, so that they can be integrated into a running Creek system. In addition, efforts needed to bring Creek into the jColibri framework shall be identified. I would like to thank my supervisor Agnar Aamodt from NTNU and co-advisor Frode Sørmo from Volve AS for their good and patient guidance. Thanks also to the employees of Volve AS, for letting me work in their offices during the most technical period.
Contents

1 Introduction
  1.1 Background and Motivation
  1.2 Goals
  1.3 Methodology
  1.4 Structure of the Report
  1.5 Summary
2 Research Focus
  2.1 Case-Based Reasoning
  2.2 Ontologies
  2.3 COLIBRI
  2.4 Creek
  2.5 Summary
3 Software Analysis
  3.1 jColibri
    3.1.1 Representation
    3.1.2 The Core
    3.1.3 Data Types
    3.1.4 Cases
    3.1.5 Connectors and Case Bases
    3.1.6 Helper Functions
    3.1.7 Tasks and PSMs
    3.1.8 Creating and Executing an Application
  3.2 VolveCreek
    3.2.1 Ontologies
    3.2.2 Entities
    3.2.3 Cases
    3.2.4 Relations
    3.2.5 Reasoning
    3.2.6 Comparison Controller
    3.2.7 Creating and Running a VolveCreek Application
  3.3 Comparing VolveCreek and jColibri
    3.3.1 Representation
    3.3.2 Model
    3.3.3 Cases
    3.3.4 Comparison Components
    3.3.5 Problem Solving Methods
    3.3.6 Transforms
    3.3.7 Reuse
  3.4 Summary
4 Construction
  4.1 Helper Functions
  4.2 Data Types
  4.3 Problem Solving Methods
    4.3.1 Import Focus
    4.3.2 Usage Focus
    4.3.3 Method Construction
  4.4 Demonstrator System
  4.5 Summary
5 Implementation
  5.1 Helper Functions
  5.2 Data Types
  5.3 Problem Solving Methods
  5.4 Demonstrator System
    5.4.1 Using the New Data Type
    5.4.2 Using the New Similarity Functions
    5.4.3 Invoking a Method
  5.5 Summary
6 Testing
  6.1 Similarities
  6.2 The Method and the Data Type
  6.3 Summary
7 Discussion
  7.1 Importing jColibri Components to VolveCreek
    7.1.1 Helper Functions
    7.1.2 Data Types
    7.1.3 Methods
  7.2 Extending jColibri
    7.2.1 Models
    7.2.2 VolveCreek Components in jColibri
    7.2.3 Example Application
8 Conclusion and Further Work
Bibliography

List of Figures

2.1 The four-step CBR cycle
2.2 Integrating domain ontologies and CBR processes
3.1 The jColibri Core
3.2 The jColibri connector architecture
3.3 The CBR task and method structure
3.4 Creating the Case Structure
3.5 An overview of the jColibri architecture
3.6 An example semantic net from VolveCreek, and a frame
3.7 The VolveCreek domain
6.1 Results from the similarity functions
6.2 Screen shot from the VolveCreek Knowledge Editor with the demonstrator system loaded
7.1 Configuring the CreekExample in jColibri
Chapter 1 Introduction

The introduction chapter describes the background and motivation for the project, before defining its goals. The methodology is also described, and a structural overview of the project report is provided.
1.1 Background and Motivation
One of the main research areas for the Artificial Intelligence and Learning Group (AIL) at NTNU is Case-Based Reasoning (CBR). The research has a focus on knowledge-intensive approaches, and the Creek system has been developed. Creek facilitates combined CBR and Model-Based Reasoning (MBR). The model contains knowledge about a domain in general, while a collection of cases describe specific problem situations. Recent development of Creek by Volve AS customizes the system to be used while drilling for oil. The system will help avoid unwanted events by giving a warning when real-time data is getting dangerously similar to problem situations. The problem situations are based on both historical data and the experience of domain experts. To reason within this domain, a general domain model has been built, while the problem situations are represented as cases. Combining the two, we have CBR and MBR in one system. jColibri, developed by the Group for Artificial Intelligence Applications (GAIA) at the Universidad Complutense de Madrid, contains a library of CBR system components intended for sharing and reuse. In addition, it has an ontology (CBROnto) containing general CBR terminology and knowledge about tasks and Problem Solving Methods (PSMs). jColibri attempts to formalize CBR and become the standard for CBR system development. The Creek developers became interested in jColibri after learning about its goals. They would like to see if the two systems are able to cooperate on some level, since that would be positive for both systems. In addition, because of jColibri's goal to formalize CBR, it is also interesting to see if Creek can adapt to jColibri's framework.
1.2 Goals
In this project, Creek and jColibri will be analyzed and compared with the goal of developing a mechanism for importing jColibri components to Creek. The mechanism will be exemplified through the selection of a few specific components, and integration of these components into a demonstrator system. In addition, efforts needed to bring Creek into the jColibri framework will be identified.

- Analyze Creek and jColibri, and identify their similarities and differences;
- Construct a mechanism able to import jColibri components into the existing Creek implementation;
- Create and test a demonstrator system featuring a few selected components;
- Discuss efforts needed to bring the Creek system into jColibri.
1.3 Methodology
This project will be based on analytical and experimental methods, with some general background theory covered at the start. A lot of time will be used to analyze Creek and jColibri. The analysis will be both conceptual and close to the implementation, and will result in a comparison essential to the project. Based on the comparison, the rest of the project will consist of looking at various solutions to accomplish the project goals.
1.4 Structure of the Report

The first chapter of the project report is this introduction, which defines the project goals and describes how and why we want to accomplish them. Chapter two describes and introduces the main research areas. This includes the two systems that are thoroughly analyzed in the third chapter, in addition to a short introduction to CBR and ontologies. Chapter three is closed with a system comparison, which is used by the fourth chapter to construct a possible solution and a demonstrator system. This solution is implemented in chapter five. The demonstrator system is used to test the solution in the sixth chapter. The report continues with a discussion in chapter seven, before being closed with further work and a conclusion in the eighth and final chapter.
1.5 Summary
This project is motivated by NTNU's focus on CBR, and the recent interest in the jColibri system. The goals are to import components from jColibri to Creek, and to identify efforts needed to bring Creek into jColibri. To accomplish these goals, the two systems will be analyzed and compared. A demonstrator system will be implemented to test a possible solution.
Chapter 2 Research Focus

2.1 Case-Based Reasoning
Case-Based Reasoning (CBR) is an approach to problem solving and learning. When solving a new problem, this approach makes use of previously solved problems when reasoning. The previously solved problems are also referred to as experiences. After the new problem has been solved, it is retained in the system as additional experience. The latter step represents learning. A problem is what we refer to as a case in CBR, and both new problems and old experiences are cases. A case is described by a set of features which in sum defines the problem. A feature can be anything giving relevant information about the case. CBR is typically done in a four-step cycle as shown in figure 2.1, taken from [AP94].

Figure 2.1: The four-step CBR cycle

First, we retrieve all learned cases (experiences) that are relevant for solving a new problem that entered the system. To find out which cases are relevant, we compare the new problem to the learned cases by comparing their features. The best matching case or cases are chosen, which finalizes the retrieval step. Second, we reuse the chosen case's solution by copying it or adapting it to fit our needs. If several cases were retrieved, then we adapt a solution by combining parts of the retrieved cases. Third, we revise how well the new problem was solved with the solution we just reused. This is done by testing it in the real world or some kind of test scenario. Finally, we retain this new experience as a case in our system for future problem solving. This approach is very similar to how humans reason when solving problems.
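As a rough illustration of the cycle (not Creek's or jColibri's actual code), the four steps can be sketched in a few lines of Python; the feature-overlap similarity measure and the drilling-flavoured cases below are invented for the example:

```python
# Conceptual sketch of the four-step CBR cycle: retrieve, reuse,
# revise, retain. The similarity measure and case format are
# illustrative only.

def similarity(a, b):
    """Fraction of shared features that have equal values."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(a[f] == b[f] for f in shared) / len(shared)

def solve(problem, case_base):
    # Retrieve: the best-matching stored case (experience)
    best = max(case_base, key=lambda c: similarity(problem, c["features"]))
    # Reuse: copy (or adapt) its solution
    solution = best["solution"]
    # Revise: would be evaluated in the real world (stubbed here)
    # Retain: store the new experience for future problem solving
    case_base.append({"features": problem, "solution": solution})
    return solution

cases = [
    {"features": {"pressure": "high", "flow": "low"},
     "solution": "reduce pump rate"},
    {"features": {"pressure": "low", "flow": "low"},
     "solution": "check for leak"},
]
print(solve({"pressure": "high", "flow": "normal"}, cases))
# -> "reduce pump rate" (matches the first case on pressure)
```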
2.2 Ontologies
A common definition of an ontology states that it is "a specification of a conceptualization" [Gru93]. Related to computer science, an ontology can be seen as a data model representing a set of concepts within a domain, and the relationships between them. We can use ontologies as a form of knowledge representation for our domain, and use its components when reasoning. An ontology generally describes individuals, classes, attributes and relations. The individuals are the basic components of the ontology and may include concrete objects such as specific cars, or abstract things like words.
An ontology does not necessarily have any individuals, since an ontology may be created to provide a way to classify individuals in several systems sharing the same model (a general purpose ontology). The classes collect individuals and other classes. E.g., a class car can collect individuals car#1 and car#2, while the class vehicle can collect the classes car and truck. This would make car and truck subclasses of vehicle. Attributes are characteristics of an individual. E.g., car#1 can have the attribute red to describe its color. Relationships describe how things relate to each other. A possible relationship between car#1 and car#2 could be that one is the successor of the other. If we create general ontologies about a domain, we can reuse the ontologies in all systems reasoning within that domain. Application specific ontologies can later be mapped to the more general ontologies, creating several layers of specificity. Reasoning mechanisms defined for a general concept or relation of an ontology can then automatically be used for more specific ones because of inheritance. If we want to integrate several systems, or simply make them work with each other at some level, it is a huge benefit if they are based on the same ontology. This is the essential motivation behind CBROnto in the COLIBRI system, which will be presented in the next section.
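The classification idea just described — individuals, classes, is-a links, and reasoning that carries from general concepts to more specific ones through inheritance — can be shown with a toy sketch. The Ontology class below is ours, purely for illustration, reusing the car/vehicle example from the text:

```python
# Toy illustration of ontology notions: classes linked by is-a,
# individuals assigned to classes, and classification resolved by
# walking the inheritance chain.

class Ontology:
    def __init__(self):
        self.parents = {}    # class name -> parent class name (is-a)
        self.instances = {}  # individual name -> class name

    def add_class(self, name, parent=None):
        self.parents[name] = parent

    def add_individual(self, name, cls):
        self.instances[name] = cls

    def is_a(self, individual, cls):
        """True if the individual's class, or any ancestor, is `cls`."""
        c = self.instances[individual]
        while c is not None:
            if c == cls:
                return True
            c = self.parents[c]
        return False

onto = Ontology()
onto.add_class("vehicle")
onto.add_class("car", parent="vehicle")
onto.add_class("truck", parent="vehicle")
onto.add_individual("car#1", "car")

print(onto.is_a("car#1", "vehicle"))  # True — classified via the is-a chain
```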
2.3 COLIBRI
In 2002, Belén Díaz-Agudo proposed a domain independent architecture called COLIBRI in her PhD thesis, directed by Pedro González-Calero. COLIBRI tries to formalize CBR, and provide design assistance when creating KI-CBR systems. A system may combine domain specific knowledge with various knowledge types and reasoning methods common to all domains. Very important to the COLIBRI system is CBROnto, which is an ontology containing general CBR terminology. It is also a task and method ontology. The root of CBROnto is CBRTerm, which all concepts of the ontology are specialized from. The idea is that COLIBRI should be based on Knowledge Acquisition (KA) from a library of application independent ontologies, and that these ontologies should be mapped to CBROnto by the system designer. More specifically, the designer should classify the domain knowledge's concepts and relations to CBROnto. Since we are dealing with a hierarchical structure, only the top level concepts and relations of the domain knowledge need to be classified. The rest is solved automatically through inheritance.

Figure 2.2: Integrating domain ontologies and CBR processes

Since CBROnto describes both tasks and methods, there are no gaps between the system's goals (tasks) and the Problem Solving Methods (PSMs) used to accomplish them. This has been an issue with older systems. The tasks define the structure of an application, and how it will be executed. A PSM can either decompose a task into subtasks, or solve it directly. Subtasks are in turn solved by other PSMs, and this process continues until all tasks are solved. This means that we need a resolution PSM for all tasks that are not decomposed, or the system will not be able to execute successfully. The approach is inspired by the Components of Expertise methodology [Ste90]. Further, the gap between the PSMs described by CBROnto and the domain knowledge is removed by the classification done by the system designer. See Figure 2.2, taken from [Dia00]. This solves another important issue with earlier CBR systems. It is also important to address a PSM's dependency on knowledge. If a resolution PSM with the competence to accomplish a certain task does not have the knowledge necessary to do so, we have a problem. This is solved using the classification mechanism, checking if the necessary knowledge is available when a certain PSM will be executed. We know what knowledge is needed, since each PSM is defined with a set of conditions.
The idea behind CBROnto is to create a common data model which is able to represent all CBR systems. This way, CBROnto is an attempt to generalize and formalize CBR by making a domain independent ontology. CBROnto tries to contain all general CBR terminology, and support the semantic needs of all CBR systems. When creating a CBR system, CBROnto will guide the case representation, and help describe flexible, generic and reusable PSMs. At the top level are the four well known CBR steps which can be seen in Figure 2.1: Retrieve, Reuse, Revise and Retain. Each task can be decomposed or solved directly, as mentioned earlier. CBR concepts from CBROnto are implemented as abstract classes or interfaces in the COLIBRI framework. Typically, the is-a relations from the ontology are implemented using inheritance between classes, and the part-of relations are implemented as a composition of classes. By doing this, we have an implementation which represents concepts from the ontology, providing two main things. First off, they give us an abstract interface for CBR methods and tasks. They can be developed independently from the actual CBR components such as case structure, similarity functions and so forth. Second, they serve as hooks where new types of building blocks can be added. COLIBRI's implementation is based on the reasoning capabilities of Description Logics (DL). Originally, it was implemented with LOOM, which is a knowledge representation language developed specifically for artificial intelligence. It contains a set of advanced tools for knowledge representation and reasoning. The LOOM implementation assumed rather advanced users, and that is why a new implementation was started. The new implementation was given the name jColibri (officially written jCOLIBRI, but written jColibri in this report for improved readability), and it has a distributed architecture, a DL engine, a GUI for non-technical users and an object-oriented framework in Java. The ontologies are represented using the Web Ontology Language (OWL). This implementation will be analyzed in the next chapter.
2.4 Creek
In 1991, Agnar Aamodt developed the CREEK (Case-based Reasoning through Extensive Explicit Knowledge) architecture in his PhD thesis, normally written "Creek". It is an architecture for knowledge intensive problem solving and sustained learning [Creek]. Since the Creek system is well known to the readers of this project report, it will be given less focus than the COLIBRI system. The basis of the Creek knowledge representation is a graph. A graph is a pure mathematical model with a set of objects (called nodes, points or vertices) connected by links (called edges or lines). If we give meaning to the objects and links, we get a semantic network, which is what Creek is using. Henceforth, objects will be referred to as entities and the links as relations. If we collect all relations connected to a certain entity, we have a frame. In short, Creek's frame based knowledge model is a semantic net of entities interconnected by relations. Creek also uses ontologies in an attempt to generalize certain domains. At the very top level, we have the Thing concept. Everything in the world is a Thing, meaning that it is the most general term in the model. The knowledge located in the ontologies can be about CBR in general, or well established knowledge about a specific domain. This knowledge may be application independent, and can potentially be reused. Recently, a case model has been created for Creek. This model contains general CBR terminology. The model is important to this project, since it contains knowledge needed to use recently developed packages for Creek. This involves general CBR things like cases, attributes and so forth. The version of Creek which will be used in this project is a development snapshot from Volve AS, and will be called VolveCreek in this report. Some components of the system are not fully developed. VolveCreek will be analyzed in the next chapter.
2.5 Summary
CBR and ontologies are important research areas to this project. CBR is an approach to problem solving and learning, while ontologies are used to create knowledge models. The two systems which will be analyzed both conceptually and close to their implementation in the next chapter are COLIBRI and Creek. COLIBRI uses the reasoning capabilities of DL, and is based on the use of CBROnto which is an ontology with general CBR terminology, in addition to being a task and method ontology. OWL is used to represent the ontologies. Creek uses a frame based knowledge representation which is a semantic net of entities interconnected by relations.
Chapter 3 Software Analysis

3.1 jColibri
jColibri is a Java implementation of the COLIBRI system introduced in section 2.3, and is intended for a large audience. Anyone from new students to technically advanced users should be able to use it on some level. It is possible to prototype and test CBR systems very quickly using jColibri's GUI, which makes it significantly more user friendly than its predecessor implemented in LOOM. It is also designed to support anything from the simplest CBR systems to the large and complex ones. With its focus on reuse, jColibri makes it easy to take advantage of past designs when creating a new system. It is semi-complete and ready to be extended for custom applications. This chapter will analyze all major components of the jColibri framework, and describe how they work together when creating and running a CBR application. Components that are particularly important to this project will be analyzed in greater detail. These are tasks (CBRTask), PSMs (CBRMethod) and similarity functions (CBRSimilarity). They all implement the CBRTerm interface, which represents the most general concept of CBROnto. Data types are also important.
We will start by explaining how the representation is supported, before moving on to the specific components.
3.1.1 Representation
The jColibri system is based on Description Logics (DL) [NB03], which can be translated to first-order predicate logic. It is, in other words, a representation with logic-based semantics. This type of representation works best in domains with a strong domain theory, typically domains that can be modelled in a formally well-understood way [DINS96] [GGDF99]. To support the representation (see section 2.3), and as a parallel to the use of ontologies, jColibri has an interface called Individual. An individual has a collection of relationships to other individuals, in addition to parents and a value. A class SimpleIndividual has been implemented and is currently being used by the example applications, but new individuals can of course be implemented. The class IndividualRelation implements the relation concept between individuals. A relation has a description, target and a weight. When, e.g., giving a case a set of attributes, we create a relation from the case to its attributes. Both the case and the attributes are Individual objects, and the relationships between them are IndividualRelation objects. This is a very general way to support the representation. How the individuals are used will be described further in later sections, in particular section 3.1.4 about cases and section 3.1.6 where their comparison functions are described. The OWL DL reasoner used by jColibri is called PELLET [SPGKY07].
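Since jColibri itself is a Java framework, the following is only a Python paraphrase of the structure just described — an Individual holding a value and weighted relations to other individuals, which is how a case is linked to its attributes. The example values are invented:

```python
# Python paraphrase (not jColibri's actual Java code) of the Individual /
# IndividualRelation structure: a case and its attributes are both
# individuals, connected by weighted, described relations.

class Individual:
    def __init__(self, value):
        self.value = value
        self.relations = []  # outgoing IndividualRelation objects

class IndividualRelation:
    def __init__(self, description, source, target, weight=1.0):
        self.description = description
        self.target = target
        self.weight = weight
        source.relations.append(self)  # register on the source individual

# A case individual related to one attribute individual (invented values)
case = Individual("case-001")
attr = Individual(42)
IndividualRelation("has-attribute", case, attr, weight=0.8)

for r in case.relations:
    print(r.description, r.target.value, r.weight)  # has-attribute 42 0.8
```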
3.1.2 The Core
The core of jColibri is called CBRCore, and can be seen in figure 3.1, taken from [jColibri]. It is the most important component of the framework. The core is in charge of the application, and must always be present for an application to run. It handles the configuration, and also executes the application. To do all this, the core is divided into three main components, which will be described in turn: state, context and packages.
State

The state, which is called CBRState, handles the configuration of tasks and methods. It will always have the current configuration status of the CBR application.
Context
The context, called CBRContext, acts as a communication blackboard where methods can share data during the execution of an application. The context holds the case base and the working cases. What the working cases are depends on the execution step: they can, e.g., be newly retrieved cases, adapted cases, and so forth. jColibri also features a context checker, which ensures that the components configured for an application are compatible with the context at every moment during development. Because of the context checker, the final application configuration is sure to satisfy each component's conditions. E.g., a PSM may be given preconditions and postconditions, defining which conditions the PSM depends on before and after its execution. This is part of the solution to an issue that was introduced in section 2.3: a PSM's dependency on knowledge to accomplish a task.
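The blackboard-with-preconditions idea can be sketched as follows. This is an assumed simplification for illustration: the real CBRContext and context checker have richer interfaces, and the key names used here are invented.

```java
// Hypothetical sketch of a blackboard context: methods read and write shared
// data under agreed keys, and a simple checker verifies that a method's
// precondition keys have already been produced. Not jColibri's real API.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ContextSketch {
    public static class Context {
        public Map<String, Object> data = new HashMap<>();
        public void put(String key, Object value) { data.put(key, value); }
        public Object get(String key) { return data.get(key); }
        // True only if every precondition has been satisfied by some
        // earlier method in the configuration.
        public boolean satisfies(List<String> preconditions) {
            return data.keySet().containsAll(preconditions);
        }
    }
    public static void main(String[] args) {
        Context ctx = new Context();
        ctx.put("caseBase", new Object());             // e.g. produced by a precycle method
        System.out.println(ctx.satisfies(List.of("caseBase")));     // true
        System.out.println(ctx.satisfies(List.of("workingCases"))); // false
    }
}
```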
Packages
The remaining components of the system are located in packages. Examples of components located in these packages are data types, similarity functions, case structures, PSMs and so forth. Since these components are rather
Erik Stiklestad
complex and important to this project, they will be described in further detail in the following sections. Each package may contain a set of one or more components. jColibri comes with a few rather stable packages at this time: core, textual, description logics and web. The core package should always be enabled for an application; the GUI does this by default, and in addition lets the system designer enable others as the first step. An essential goal of this project is to import parts of these packages to the VolveCreek system.
3.1.3 Data Types
Data types are important in any computer system. Knowing the data type of something enables us to make assumptions, and this is important also in CBR. One obvious example is when we want to compare two cases by applying a similarity function to two values. Which function to use is highly dependent on the data type. To define the data type of each attribute, we specify it when we create the case structure. jColibri comes with a set of data types in the core package which covers all common data types, and in addition the DL extension provides a ConceptType data type. The system designer is able to configure new data types using the GUI, or by writing the XML configuration file manually. A data type is configured with a name, a Java object and the identity of a GUI editor. The name can be anything, the Java object must exist and hold this specific type of data, and a GUI editor should be chosen to let users enter values when this data type is asked for. An example XML configuration for two data types follows. The data types are Boolean and String, provided by the core package.

  <DataTypes>
    <DataType>
      <Name>Boolean</Name>
      <Class>java.lang.Boolean</Class>
      <GUIEditor>jcolibri.gui.editor.BooleanEditor</GUIEditor>
    </DataType>
    <DataType>
      <Name>String</Name>
      <Class>java.lang.String</Class>
      <GUIEditor>jcolibri.gui.editor.StringEditor</GUIEditor>
    </DataType>
  </DataTypes>
3.1.4 Cases
The most important thing in any CBR system is the cases, and hence it is also important how they are represented. With jColibri it is possible to create anything from simple plain cases to the most complex hierarchical structures with connected attributes. This case structure is important. It is for example used when loading cases into the system from a case persistency (see section 3.1.5), and when obtaining a query[1] from the user before the retrieve CBR step. Following the definition provided by CBROnto, a case has a Description, a Solution and a Result. Description describes the problem by using a set of attributes. Solution is also a set of attributes, but describes the solution of the problem. Result stores the consequence of applying the solution in the real world or a test scenario. The result may be good or bad, depending on whether or not the solution actually solved the problem. We can see how this definition of a case makes sense if we compare it to the description of CBR in section 2.1. An attribute can be either simple or compound. The case structure can be compared to a tree structure, where leaf nodes are simple attributes, internal nodes are compound attributes and the Description and the Solution are the root nodes. Simple attributes have a name, type, weight and local similarity function. The name can be anything, the type is a data type, the weight says how important the attribute is relative to the others, and finally the local similarity function refers to a similarity function used to compare two instances of this attribute. Compound attributes collect simple attributes, and have a name and a global similarity function. The name can again be anything, while the global similarity function is a function calculating the collected similarity of all simple and compound attributes below it in the case structure. jColibri stores the case structure in an XML file, which can be written manually or generated by a GUI tool.
The GUI makes it easy by listing only the available data types and similarity functions, and it will also load ontologies and let the system designer select concepts directly when building the case structure. An example case structure from [Sti06] follows.
[1] The query has the same structure as a case, and will be compared to the other cases as if it were a case.
<Case>
  <Description>
    <SimpleAttributeConcept … />
    <SimpleAttributeConcept … />
    <SimpleAttributeConcept … />
  </Description>
  <Solution>
    <SimpleAttributeConcept … />
    <SimpleAttributeConcept … />
  </Solution>
  <Result />
  <Reasoner>
    <Type>PELLET</Type>
    <Source>src/jcolibri/application/CreekExample/Ontology.owl</Source>
  </Reasoner>
</Case>
The case structure is for a car described with the attributes colour, batteryStatus and engineStatus. The solution can be either ignoreIt or rechargeBattery. The case structure is created to retrieve car cases directly from an OWL file by using the PELLET reasoner. [SRDG05] provides more details about the case structures in jColibri.
3.1.5 Connectors
When loading cases into the system, jColibri uses a two-layer model, which can be seen in figure 3.2, taken from [jColibri]. A connector is an object which has the ability to access and retrieve cases from a specific case persistency when given the case structure, and give those cases to the CBR system in a standardized way. Because of this, jColibri can deal with any case persistency as long as a connector is provided. The first layer is the case persistency, and can be plain text, XML, ontologies, a relational database or anything else we have a connector for. Since all connectors feed cases into the system in the same way, it does not matter to the CBR system which case persistency is used. The second layer is the in-memory organization.
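The connector abstraction can be sketched as an interface with one backend-specific implementation per persistency. The names below are illustrative, not jColibri's actual connector API, and cases are represented as plain strings to keep the sketch self-contained.

```java
// Hedged sketch of the two-layer connector idea: any persistence backend can
// feed cases into the system as long as it implements this kind of interface.
import java.util.ArrayList;
import java.util.List;

public class ConnectorSketch {
    public interface Connector {
        List<String> retrieveAllCases();   // real connectors return case objects
    }
    // An in-memory "persistency" standing in for plain text, XML, OWL or a DBMS.
    public static class InMemoryConnector implements Connector {
        private final List<String> store = new ArrayList<>();
        public InMemoryConnector(List<String> cases) { store.addAll(cases); }
        public List<String> retrieveAllCases() { return new ArrayList<>(store); }
    }
    public static void main(String[] args) {
        // The CBR system only sees the Connector interface, never the backend.
        Connector c = new InMemoryConnector(List.of("case1", "case2"));
        System.out.println(c.retrieveAllCases().size()); // 2
    }
}
```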
Helper functions assist the PSMs when they try to accomplish tasks, and may be domain dependent, although they do not have to be. Similarity functions are the most important helper functions, but there may also be others. Although jColibri does not include any helper functions other than the similarity functions at this time, an example could be adaptation functions.
Similarity Functions
When comparing two cases, the PSMs use the similarity functions to compare each attribute. The local similarity functions are used for simple attributes, and global similarity functions are used for compound attributes. The local similarity and global similarity values given to attributes in the case structure, as explained in the previous section, must refer to the name of an implemented similarity function. When using the GUI tool to create the case structure, this constraint is taken care of by only letting the user select similarity functions that are available and implemented. Implemented similarity functions may be unavailable to a certain application if the function is part of a package that is not enabled. E.g., a textual similarity function like TokensContained is not available unless the textual extension is enabled. Following CBROnto, jColibri has a CBRSimilarity abstract class which implements CBRTerm. It represents the similarity concept inside the Java framework. A CBRSimilarity has a name and a class name which refers to the
class implementing the similarity function. There are two classes extending CBRSimilarity, and those classes are of course CBRGlobalSimilarity and CBRLocalSimilarity. The actual similarity functions are imported by the global or local similarities, and they are implementations of the interface SimilarityFunction. This interface does not exist in CBROnto, but is part of jColibri to ensure that the similarity functions are implemented in a consistent way. Each similarity function must have a compute method taking two individuals as parameters, and returning the similarity between the two individuals as the data type double. The similarities also have parameters, which are implemented in a class called CBRSimilarityParam. A similarity parameter simply has a name and a value. These parameters are imported by CBRSimilarity to make them available to both global and local similarities. They are typically used to set variables in functions that depend on, e.g., the size of the case base or anything else that may be used to compute the similarity.
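The SimilarityFunction contract described above can be illustrated with a trivial equality function. The real jColibri interface works on Individual objects; plain Objects are substituted here, and the class names of the sketch are invented for the example.

```java
// Illustrative sketch of the similarity-function contract: a compute method
// taking two values and returning a double in [0, 1].
public class SimilaritySketch {
    public interface SimilarityFunction {
        double compute(Object a, Object b);
    }
    // A trivial local similarity: 1.0 when equal, 0.0 otherwise.
    public static class EqualFunction implements SimilarityFunction {
        public double compute(Object a, Object b) {
            return a != null && a.equals(b) ? 1.0 : 0.0;
        }
    }
    public static void main(String[] args) {
        SimilarityFunction f = new EqualFunction();
        System.out.println(f.compute("red", "red"));  // 1.0
        System.out.println(f.compute("red", "blue")); // 0.0
    }
}
```

A global similarity would combine such local results, typically as a weighted average over the attributes below it in the case structure.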
3.1.7 Tasks and Methods
This section describes the tasks and methods on a conceptual level, before looking at the implementation. These components describe the structure of the CBR system, and hence also its behavior. They are very important and central to the jColibri system.
Conceptual
CBROnto has a task decomposition structure, as mentioned in section 2.3. Everything starts with a root task which is decomposed into the four well-known CBR tasks from [AP94]: retrieve, reuse, revise and retain. Each of these is decomposed or solved directly by PSMs, and this process does not stop until all tasks are either decomposed or have been assigned a resolution PSM. See figure 3.3, from [Cha90] and [Dia02]. The CBR system's designer will do this process manually, and again the GUI is of great help. When selecting a task, the designer is given a list of methods with the competence to solve the task. The competence is defined in the ontologies, and decomposition methods also have their new subtasks defined here. If a decomposition method is selected, the new subtasks will appear immediately in the GUI.
Figure 3.3: The CBR task and method structure

Normally, a method is divided into three main parts: competence, operational specification and requirements. The competence says which tasks the method is able to accomplish (what can be solved). The operational specification describes the method's way of delivering a specific competence (how something can be solved). Finally, the requirements describe which knowledge is needed to achieve the given competence by going through its reasoning process. The competence and requirements of a method are described using ontologies, which provide two advantages. The first advantage is a formal description giving precise meaning combined with reasoning support. The second, which is also a general focus in jColibri, is that it enables reuse, since they can be used by different systems. In CBROnto there is a method concept, and each method is an instance of it. The internal reasoning processes of the methods are not formalized in the ontologies, but each method has an associated Java class which implements it. The methods have a name, an informal description, a type and a relationship to a number of other concept instances. In jColibri, the name is equal to the class implementing the method. This creates an association between the ontology and the Java implementation. How it is implemented will be discussed in the following section. The informal description can be anything. The method type can be either resolution or decomposition. In the case of decomposition, it will also have a set of subtasks, which will be solved by other methods later. To represent a method's competence, it has a relation
to an instance of the task concept it is able to solve. The methods may also have several parameters, which may typically be a case structure, a connector or simply an attribute from the Description or Solution concepts of the case (see section 3.1.4). Instances of the task concept only have a name and a description. They are identified by their name.
Implementation
Again following CBROnto, the classes CBRTask (tasks) and CBRMethod (methods) implement the CBRTerm (terms) interface. A task object is really a prototype task which cannot be used directly in an application. Instead, a new instance must be created by cloning the prototype. A task object has a task name, a description and the name of the task instance. Optionally, it has a reference to a method assigned to solve the task. This reference is kept up to date at all times. A method object has a name, an informal description, an instance name, a type and a boolean value saying whether or not the method's implementation is available. Similarly to the tasks, methods are also prototypes and need to be cloned before being used. Every method must have an execute method with the context as a parameter, and also the context as return type. A method has a set of parameters. These parameters are implemented in a separate class called MethodParameter, and imported into a method through class methods. A parameter has a name, a description, a data type and an object with its value. Parameters also have some restrictions, implemented in a class called MethodRestrictions. This class will typically make sure that a specific parameter does not occur more than a certain maximum number of times, and not less than a minimum number of times. As discussed earlier, there are two types of methods: resolution and decomposition. This is implemented in a simple class called MethodType, which says whether a certain method solves a task directly or decomposes it into subtasks. When solving a task, the task's solve method should be executed and given the current context. What happens next is that the method instance assigned to the task is executed and given the context (if no PSM is assigned to solve the task, the context is returned unchanged). The method returns the updated context, which the task further returns to the core.
Since an application is configured to handle everything through the core as discussed
in section 3.1.2, it is easy to see that a task is solved by a method, and that the effects it has are applied to the application's context and kept up to date at all times.
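The solve/execute mechanism can be sketched as follows. The class and field names are invented for the sketch; only the behavior (a task delegating to its assigned method's execute, which returns the updated context, and an unchanged context when no method is assigned) follows the description above.

```java
// Hypothetical sketch of task solving: a task delegates to its assigned
// method's execute method, which returns the updated context.
import java.util.HashMap;
import java.util.Map;

public class TaskSketch {
    public static class Context { public Map<String, Object> data = new HashMap<>(); }
    public interface Method { Context execute(Context ctx); }
    public static class Task {
        public Method method;          // optionally assigned by the configuration
        public Context solve(Context ctx) {
            // With no PSM assigned, the context is returned unchanged.
            return method == null ? ctx : method.execute(ctx);
        }
    }
    public static void main(String[] args) {
        Task retrieve = new Task();
        retrieve.method = c -> { c.data.put("retrievedCase", "case42"); return c; };
        Context result = retrieve.solve(new Context());
        System.out.println(result.data.get("retrievedCase")); // case42
    }
}
```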
3.1.8 Example Application
Finally, both as a summary and to improve the understanding, we will look at how a simple application is created and executed step by step. Specific details about the code logic are left out. To create a parallel to later chapters of this report, the following example will be based on a previous project [Sti06] which used the same domain as this project. It is a car domain created in the main example of the VolveCreek system, which will be studied later. In [Sti06], the car model was exported as ontologies which jColibri is able to import, but the export mechanism in VolveCreek simplified the original model. This simplification was necessary because of a fundamental issue related to representation. Nevertheless, the project ended up with an application which will be used in this section.
Figure 3.4: Creating the Case Structure

Configuring the application mainly consists of creating the task structure, and assigning methods to solve the tasks. Adding new tasks to the tree structure is done by having a decomposition method for one of the tasks, creating any number of subtasks. The GUI makes this simple by listing methods for each task, which guarantees that they are available and have the competence to solve the task. The system designer can then create instances of the methods, and continue this process until all tasks and subtasks are solved. Depending on the type of method, it may also be necessary to provide the value of some parameters. A typical parameter may be the case structure or the connector, but it can also be anything else. In this example we have 5 resolution tasks. The first is in the precycle task list, and loads cases into the system from the ontologies as described in section 3.1.5. This method needs the location of the case structure as a parameter. The second resolution task, and the first task of the CBR cycle, is to obtain a query from the user. This task also needs the case structure as a parameter, as the query looks exactly like a case, and will be compared to other cases later in the retrieval process. The query should typically look like the problem we wish to solve. Retrieve was the only step
of the four well-known CBR steps configured for this application, and consists of three subtasks. First, all the working cases are selected to be used for comparisons. The second subtask of retrieve is to compute the similarity between the query and each of the working cases that were loaded into memory. Finally, the last subtask is to select the best case, which will be the retrieved case, and which could be used further in a reuse step had the application not been a pure retrieval system. It would also be possible to select several of the best cases, as that is up to the selection method and the later adaptation methods. Assigning methods to solve a task is done through the core instance. Although the GUI has made sure that chosen methods have the required competence and are given the parameter values they need, this instance does a final check. A GUI should always help the user with such constraints, but it should not be responsible for their satisfaction. It is also possible to write an application without using the GUI, so this final check is necessary. The core first checks if the method is compatible with the task, then adds the method instance to the context for execution, and finally updates the state. These components were explained in section 3.1.2. Once the configuration is completed, jColibri generates the application. This is based on an application template, and the result will be a Java class which is completely separated from the jColibri framework. The application will communicate with jColibri only through its core instance, as we will learn in the next step, where the application is executed.
Figure 3.5: An overview of the jColibri architecture

The generated application first creates a new context, which will be used by the methods later when they accomplish the tasks. Each method will return the updated context when its execution process has completed. This means that the context will always be up to date at any moment during the application's execution. Each application has three lists of tasks, which are each given a root task. The three lists are as follows:

1. The precycle task list contains tasks to be accomplished before the CBR can commence. It typically involves loading cases from the case persistency and other tasks preparing the system.

2. The CBR cycle task list will typically have the four well-known CBR reasoning steps: retrieve, reuse, revise and retain.

3. The postcycle task list has tasks that are done after the CBR reasoning is completed.

Since the application is fully configured, the task structure can be traversed and each method can be executed. The traversal behaves like a Depth First Search (DFS). Each PSM's execute method will be invoked as each task is being solved, and this method must return the updated context. When all
tasks are resolved, the application is finished. Depending on the configured tasks we will end up with some kind of result. In this example, the case most similar to the query will be selected. A model of the entire jColibri architecture can be seen in figure 3.5, taken from [jColibri].
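The depth-first execution just described can be sketched as follows. Everything here is a simplified assumption for illustration: tasks merely log their names where a real PSM would act, and the three lists are reduced to placeholder tasks.

```java
// Sketch of the execution loop: the task lists are traversed depth-first,
// and the context returned by each task is threaded into the next.
import java.util.ArrayList;
import java.util.List;

public class ExecutionSketch {
    public static class Context { public List<String> log = new ArrayList<>(); }
    public static class Task {
        public String name;
        public List<Task> subtasks = new ArrayList<>();
        public Task(String name) { this.name = name; }
        public Context solve(Context ctx) {
            ctx.log.add(name);               // a resolution method would act here
            for (Task sub : subtasks) ctx = sub.solve(ctx);  // DFS into subtasks
            return ctx;
        }
    }
    public static void main(String[] args) {
        // Retrieve decomposed into the three subtasks named in the text.
        Task retrieve = new Task("retrieve");
        retrieve.subtasks.add(new Task("select working cases"));
        retrieve.subtasks.add(new Task("compute similarity"));
        retrieve.subtasks.add(new Task("select best"));
        Context ctx = new Context();
        for (Task t : List.of(new Task("precycle"), retrieve, new Task("postcycle")))
            ctx = t.solve(ctx);
        System.out.println(ctx.log);
    }
}
```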
3.2 VolveCreek
VolveCreek is the Java implementation of the Creek system introduced in section 2.4. Since VolveCreek is well known to the readers of this report, the analysis will be less extensive than that of jColibri. The VolveCreek version used in this project is incomplete, as it is a development snapshot from Volve AS. As already introduced in section 2.4, the representation of VolveCreek is a semantic net consisting of entities and relations. They are defined by the ontologies, and implemented in the VolveCreek framework in interfaces and classes. Figure 3.6 shows an example of a semantic net and one of its frames. The semantic net is in the background, partly covered by the frame. The colour entity, which is used for this example frame, is highlighted in the semantic net (red dot). As we can see in both the semantic net and the frame, colour is an instance of the top-level Symbol entity, and it has two instances called red and blue. In this chapter, the case will be given extra attention in a section of its own, as its representation has changed recently. Cases are no longer nodes in the semantic net. Specific components will also be given extra attention, as they are important to reach the project goals, and some are very recent developments. Finally, we will create and run an example application. We will first look at the ontologies, and then describe the implementation of entities and relations in the following sections.
3.2.1 Ontologies
VolveCreek has a top-level ontology which defines our view of the world in which our application will be executed. The model contains knowledge about what exists, and how those things relate to each other. At the very
Figure 3.6: An example semantic net from VolveCreek, and a frame

top level, we have the Thing concept. Everything in the world is a Thing, meaning that it is the most general term in the model. VolveCreek has implemented an interface which can be used by the knowledge model, and it contains the top-level ontology. Typically, a new knowledge model is the first thing to be created in an application, and it is initialized with such an ontology. The ontology used in this initialization is either the aforementioned top-level ontology, or an extension of it. This ensures that the ontology includes everything that is completely necessary. Necessary things are the establishment of entities, relations and other things needed by the representation and the basic inference system. The model which will initialize the knowledge models in this project is a case model. This model establishes things such as cases and other things common to all CBR applications. Below the top level, VolveCreek currently has a mid-level which is specific to a certain type of usage. It can be a specific domain, but if so it will contain rather general concepts from the domain. The concepts are of course mapped to the top level. Further, there is a lower level which is mapped to the middle level. This is a domain-specific vocabulary, which is used when defining cases and other domain-specific entities and relations.
3.2.2 Entities
The entities are nodes in the semantic network. Each entity has references to all relations coming to and from it, and that collection defines a frame. All data for an entity is accessed through an entity data interface. It specifies that each entity data can be encapsulated by several entity objects, and that all manipulations must be done from the entity object or a knowledge model. There are several types of entities defined in the ontology, such as numbers, strings and URLs.
3.2.3 Cases
In earlier versions of VolveCreek, cases used to be nodes in the semantic network, and they were implemented as a type of entity. Recent changes have taken the cases out of the semantic network, and they are now separate objects. The case structure has recently been defined by a DTD[2], as a formal description of the structure is needed when using XML to deal with the cases. Looking at the XML tree structure, the case element can have any number of entries and sections. Entries can be seen as leaf nodes in the tree, and sections collect the entries by being internal nodes. The root node is simply case. A case element has two required attributes: name and status. The status can be either solved, unsolved or processed. The sections of a case, which may contain more entries or sections, have only one required attribute, namely the name. An entry may contain a symbol value or a data value. A symbol value is typically taken from the ontologies, while a data value can be, e.g., letters and numbers. Entries have six attributes, of which two are required: parameter and source. The parameter identifies which part of the case the entry is representing a value for, and the source is where the value was obtained from. An example can be the colour of a car with the value red, found by a human observer, as we will see in an example in section 3.2.7. The remaining four attributes of an entry are all implied: data confidence, statistical weight, expert relevance and learned relevance. The symbol value which an entry may contain is simply some parsed character data originating from the model.

[2] Document Type Definition
The data value which an entry may also contain is likewise parsed character data, but must in addition have a value type saying what kind of data we are dealing with (the data type).
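To make the structure concrete, a case following the DTD just described might be serialized along these lines. The element and attribute spellings below are illustrative guesses; only the structure (a case with name and status, sections with a name, and entries with parameter and source carrying a symbol value or a typed data value) follows the text.

```xml
<case name="car case 1" status="unsolved">
  <section name="description">
    <!-- symbol value taken from the ontology -->
    <entry parameter="colour" source="human observer">
      <symbol-value>red</symbol-value>
    </entry>
    <!-- data value carrying an explicit value type -->
    <entry parameter="age" source="registration papers">
      <data-value value-type="number">7</data-value>
    </entry>
  </section>
</case>
```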
3.2.4 Relations
Relations are the links between nodes, and define what the relationship between them is. Each relation has an associated relation type defining what kind of relation it is. The relation type also has an inverse, so the relationship can be interpreted in both directions. The relations are extremely important to the system. Similarly to the entities, all data for a relation is accessed through a relation data interface. Each relation data can be encapsulated by several relation objects, and all manipulation must be done from the relation object or the knowledge model. Examples of relations existing in the case model are has section, causes, implies, has similarity to and so forth. The inverse relations would be, e.g., caused by and implied by. An example usage could be that an empty battery attribute of a case may have a relation causes linking it to an engine that will not start. Because of this, it is possible to assume, with a certain probability, that this car case will not start because of its battery state. If we already know that it does not start, which is probably why it is a problem case, we can go the other way and use the inverse relation to assume that a flat battery is the reason for the car not starting.
3.2.5 Reasoning
Reasoning in VolveCreek is a three-step process which could potentially be done for each of the four steps in the CBR cycle. Only retrieve, and partly reuse, are implemented at this stage, so the rest would have to be done by domain experts at this point, but that is likely to change. Since there are not many new changes to this model, some of the below is taken from [Sti06], which was written by the same author as this report.

1. Activate relevant parts of the semantic network (knowledge structures);
2. Explain the hypothesis (candidate facts);
3. Focus on (select) one of them and make it the conclusion.
There are several good reasoning mechanisms in VolveCreek, and they work particularly well in open and weak-theory domains. VolveCreek uses abductive reasoning (inference to the best explanation), which is a process where the explanation which makes most sense (based on the known facts) is chosen. Such reasoning can never be monotonic, as that would fail to adjust the explanation when new knowledge enters the system. VolveCreek supports inheritance and even plausible inheritance. With plausible inheritance, we can have inheritance without having one concept defined as a subclass or instance of the other. VolveCreek adds up and compares the weights assigned to relations transferring other relations, and concludes whether it is plausible or not to inherit a given relation. Causes is typically a relation which can lead to plausible inheritance. With default reasoning, which is supported in VolveCreek, it is also possible to draw conclusions from the lack of contradicting evidence. This may happen unless there is a local value overriding an inherited value. Since it is all non-monotonic, such a conclusion will be invalidated once contradicting evidence enters the system. What happens during each of the three steps will be described further in section 3.2.7, which presents a complete example with the retrieve CBR step.
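A loose sketch of the plausible-inheritance arithmetic: candidate entries are supported by explanation paths through the semantic net, each path's strength is combined from its relation weights, and the strongest path must clear a plausibility threshold. Note the assumptions: multiplication as the combination rule and the threshold value are choices made for this sketch, not necessarily Creek's actual computation.

```java
// Hedged sketch of plausible inheritance: combine weights along each
// supporting path (multiplication assumed here), take the strongest path as
// the belief in the inherited entry, and compare it against a threshold.
import java.util.List;

public class PlausibleInheritanceSketch {
    // Combine one path's relation weights into a path strength.
    public static double pathStrength(List<Double> relationWeights) {
        double s = 1.0;
        for (double w : relationWeights) s *= w;
        return s;
    }
    // The belief in an inherited entry equals that of the strongest
    // supporting path, as described for transformation methods below.
    public static double beliefIn(List<List<Double>> supportingPaths) {
        double best = 0.0;
        for (List<Double> path : supportingPaths)
            best = Math.max(best, pathStrength(path));
        return best;
    }
    public static void main(String[] args) {
        // Two explanation paths supporting the same candidate entry.
        List<List<Double>> paths = List.of(List.of(0.9, 0.8), List.of(0.95));
        double belief = beliefIn(paths);
        System.out.println(belief >= 0.5); // plausible under a 0.5 threshold
    }
}
```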
3.2.6 Comparison Controller
The comparison controller is an important component in VolveCreek. It decides which similarity measure, transformation method and attribute weight metric we will use in comparison operations. These three components are described below.
Similarity Measure
Similarity measures are functions used to compare two case entries. A case entry may have symbol or data values, as described in section 3.2.3. Which attribute of the entry should be compared is given as a parameter together with the two case entries. The similarity function will return a value between 0 and 1, ranging from no similarity to completely equal.
Transformation Method
A transformation method will transform the structure of a case. Typically, it may expand a section of a case by adding entries found through plausible inheritance. As mentioned earlier, this can be done by following the causal relation and its inverse. The degree of belief in the new entry is equal to that of the strongest path supporting it.
3.2.7 Example Application

We will now describe how an example application can be created, and what happens during execution. VolveCreek has a nice editor which makes it significantly more user-friendly. The system designer can use this editor to create most of the below; however, it will be described closer to the code level for the sake of understanding. [BSAB04] contains a good introduction where the Creek Knowledge Editor is used.
Creating a Model
Creating a knowledge model is the first thing to do. The new model is not completely empty, but contains a top-level ontology. It establishes entities, relations and relation types which the representation and the rest of the system are implemented to use. The model which will be used in this project is the case model. It is a rather simple model extending the basic model, which has only the minimum requirements. As mentioned earlier, the case model adds all kinds of CBR components.
Adding a Vocabulary
Now that we have a model with some top-level terms, it is time to create a domain. Before describing the domain, we need a vocabulary to do so.
These new components will be mapped to the already existing top-level model. One example could be colour, which we have looked at earlier, and which can be seen highlighted in the semantic net in figure 3.6. To accomplish this, we first create a new entity called colour, and add it to our model. We map it to our top-level model by adding an instance-of relation between colour and Symbol. The latter is one of the most general things in our model. In addition, we set colour to be a subclass of attribute, so it can be used to describe cases (see section 3.2.3). The relation used here is subclass-of. Finally, we need some instances of the new entity colour. This is also done by adding instance-of relations between colour and other entities such as red or blue. The two colours must be added to the model in the same way we added colour itself, before we create the new relations. This process is repeated for everything we want to have in our domain. In this project the vocabulary also includes an engine status, battery status, age and several others which are used to describe the car domain.
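The modelling steps above can be sketched as a small program. This is an illustration only, not VolveCreek's actual API: the Model class and its two methods are invented, while the entity and relation names (colour, Symbol, instance-of, subclass-of, red) are taken from the example.

```java
// Illustrative sketch of vocabulary building: add an entity, then link it
// into the top-level model with instance-of and subclass-of relations.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class VocabularySketch {
    public static class Model {
        public List<String> entities = new ArrayList<>();
        // entity name -> list of {relation type, target entity} pairs
        public Map<String, List<String[]>> relations = new HashMap<>();
        public void addEntity(String name) { entities.add(name); }
        public void addRelation(String from, String type, String to) {
            relations.computeIfAbsent(from, k -> new ArrayList<>())
                     .add(new String[]{type, to});
        }
    }
    public static void main(String[] args) {
        Model m = new Model();
        m.addEntity("colour");
        m.addRelation("colour", "instance-of", "Symbol");
        m.addRelation("colour", "subclass-of", "attribute");
        m.addEntity("red");
        m.addRelation("red", "instance-of", "colour");
        System.out.println(m.relations.get("red").get(0)[1]); // colour
    }
}
```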
Adding Cases
When adding cases to the model, we first create a type of case which will generalize all our specific cases. It may simply be called car case, which suits this example. We could create many types of cases like that, and use them to categorize our more specific cases. As usual, we map it to our top-level model by creating an instance-of relation between the newly created component and, this time, case, which is defined by our case model as an essential component of CBR. Several cases are now created by adding them to the model, giving each the type car case plus a unique name. Further, we want to describe each case in detail. A dedicated class taking care of this was recently developed, and we can use it in combination with the case model. This is basically the sections and entries described in section 3.2.3. To create an entry for a case, we simply use what we need from our existing model, and sections collect several entries. An example may be that we give car case 1 an entry colour with the value red.
Erik Stiklestad
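The section/entry layout just described can be sketched as follows. The class is a hypothetical illustration, not VolveCreek's real case class; it only shows how sections collect entries.

```java
import java.util.*;

// Hypothetical sketch of the case layout described above: a case has a
// type (e.g. "car case"), and sections collecting attribute/value entries.
public class CaseSketch {
    final String name;
    final String type;
    final Map<String, Map<String, Object>> sections = new LinkedHashMap<>();

    CaseSketch(String name, String type) {
        this.name = name;
        this.type = type;
    }

    // A section collects several entries; an entry pairs an attribute with a value.
    void addEntry(String section, String attribute, Object value) {
        sections.computeIfAbsent(section, k -> new LinkedHashMap<>()).put(attribute, value);
    }

    public static void main(String[] args) {
        CaseSketch c = new CaseSketch("car case 1", "car case");
        c.addEntry("description", "colour", "red"); // the example from the text
        System.out.println(c.sections.get("description").get("colour"));
    }
}
```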
Finally, the focus step shrinks the number of case comparisons in the vector by leaving only those that are found to be relevant. This is done by using a threshold. The case or cases producing the highest similarity and hence not being filtered out by the focus step will form the retrieved case. This case's solution may be reused in the next step, which is not yet implemented.
Because of the project goals, it is essential to compare VolveCreek and jColibri. The comparison will shed light on possibilities for reuse of jColibri's components in the VolveCreek system. The comparison will, like the previous chapters, look at one component of the system at a time. After an overview in this chapter, the next will construct a possible solution to accomplish the project goals. A rough estimate of how difficult or easy each component will be to import is also provided.
3.3.1 Representation
The representation is a fundamental issue, and there is no easy solution to this problem. A big part of this is the type of domains the systems are intended for. jColibri is intended for strong domain theories, where everything is logic-oriented. VolveCreek is intended for weaker and more open domain theories, where default reasoning may be used. (Default reasoning is to assume something because of the lack of contradicting evidence. Systems using default reasoning should never use monotonic logic, as the system must change when new evidence is discovered.) Description Logics (DL) is the knowledge representation language used by jColibri, and it can be translated to first-order predicate logic. The development of DL emerged from the lack of clear semantic rules in semantic networks. jColibri does not have default reasoning, which is the opposite of VolveCreek, which uses default reasoning. VolveCreek's representation language is called CreekL [Aam94Nov], which uses frames to describe the nodes in a semantic network. A knowledge model from VolveCreek was imported into jColibri in [Sti06]. The first step was to export the model as ontologies represented by the Web Ontology Language (OWL). Unfortunately, this export mechanism is forced to exclude certain things from the original model. After every export,
the resulting model represented using OWL will be a simplification of the original VolveCreek model. This simplification does not have to be extensive, however, and certain domain models may not lose anything at all. As mentioned about the VolveCreek system earlier, its strengths are within open and weak theory domains. It is models from such domains that will suffer the most if exported. There are several things that can be done to improve this, but they are not strictly related to the fundamental representation, so they are described in later sections. Solving the representational issues directly seems improbable. Both representations have their strengths, and they both cover their own arenas much better than the other. Perhaps the best alternative is to have both representations available in one system.
3.3.2 Model
Both systems use a top-level ontology, and attach more and more specific terms to the top-level terms. jColibri has CBROnto, while VolveCreek has a case model. These two serve much of the same purpose, and they have many common terms. The approach of jColibri is to use CBROnto to describe everything related to CBR, and use other ontologies to describe the rest of the world. CBROnto is represented using OWL, and this is an advantage. This means that it will be able to share knowledge with other projects related to the Semantic Web, which is a result of international efforts to create standards for web content which can be interpreted and used by software agents. (Specifications included under the Semantic Web are the Resource Description Framework (RDF), RDF Schema, the Web Ontology Language (OWL) and others.) Ontologies that are domain independent are mapped to CBROnto. Further, it is possible to create domain specific ontologies and extend this hierarchy any way necessary. VolveCreek may not be able to cooperate with Semantic Web projects quite as easily because of the representation, but it can cooperate with many projects. We call jColibri's approach an advantage because the Semantic Web effort includes W3C recommendations [W3C01] [DGGG05]. VolveCreek's ontology has many of the qualities found in CBROnto, but it is not as mature yet. VolveCreek's approach can most likely match CBROnto, and potentially use some of the possibilities of its representation to offer something unique that is not possible with CBROnto.
3.3.3 Case Structure
The case structures are rather similar. They can both be seen as a tree structure, with a similarity function assigned to each node at some point. jColibri has local similarity functions for leaf nodes, and global similarity functions for the root and internal nodes. VolveCreek has an entry comparison for leaf nodes, a section comparison for internal nodes and a case comparison for the root node. Which similarity function will be used in VolveCreek is decided by the comparison controller. If not used directly, the case structures can certainly be translated from one system to the other. CBROnto guides the case representation in jColibri, and the structure is stored in an XML file. VolveCreek is using a DTD to define how cases may be constructed. The value of each case component is a symbol value or a data value in VolveCreek. Symbol values are taken from the model, while data values can be of any known data type. jColibri is not very different, and although it uses a slightly different approach, the attribute values should be quite easy to work with if both systems have the data types used to represent them. VolveCreek does not divide the case into a description, solution and result like jColibri, but something equivalent can be done. The case has constants defining if it is solved, unsolved or currently processed, and the description, solution and result can be given in the case's data. How all of this will be solved is not completely certain at the time of writing this report. The new version of VolveCreek is making quite dramatic changes to the case representation, and the changes are not completed.
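The shared tree shape can be sketched as follows. Leaf nodes get a local similarity (entry comparison), and internal nodes aggregate their children (section/case comparison). The weighted average used here is an assumption standing in for whatever aggregation either system actually configures.

```java
// Sketch of the tree-shaped case structure common to both systems:
// local similarity at the leaves, aggregation at internal nodes and root.
public class TreeSimilarity {
    interface Node {
        double similarity(Node other);
    }

    static class Leaf implements Node {
        final double value;
        Leaf(double value) { this.value = value; }
        public double similarity(Node other) {
            // local similarity: closeness of two numeric leaf values in [0,1]
            double d = Math.abs(value - ((Leaf) other).value);
            return Math.max(0.0, 1.0 - d);
        }
    }

    static class Inner implements Node {
        final Node[] children;
        Inner(Node... children) { this.children = children; }
        public double similarity(Node other) {
            // global similarity: average the children's similarities
            Node[] theirs = ((Inner) other).children;
            double sum = 0.0;
            for (int i = 0; i < children.length; i++) sum += children[i].similarity(theirs[i]);
            return sum / children.length;
        }
    }

    public static void main(String[] args) {
        Node a = new Inner(new Leaf(0.2), new Leaf(0.8));
        Node b = new Inner(new Leaf(0.2), new Leaf(0.6));
        System.out.println(a.similarity(b)); // (1.0 + 0.8) / 2 ≈ 0.9
    }
}
```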
3.3.4 Comparison Components
jColibri's distributed architecture makes it very pleasant to work with. It is easy to get an overview of how the system will execute, and which comparison component will be used. The similarity function is specified in the case structure, attached to each attribute and the Description concept. jColibri's GUI makes it easy to choose from a list of available and implemented functions, assuming the user knows which one should be used. It could be an advantage if jColibri also filtered out similarity functions that do not work with the values of certain attributes, but in all fairness this is something the system designer should be able to sort out. VolveCreek was earlier not quite as well organized, but has come a long way with the recent development. The new comparison controller takes care of
assigning similarity measures to entries based on their attribute type. This is a more flexible system than that of jColibri, since we have the potential to implement a number of controllers and not specify the similarity measure explicitly like done in the jColibri case structures. On the other hand, we could use several case structures to do the same thing in jColibri. Although it is too early to see the full potential of the comparison controller this early in its development, it does appear to be a very good idea which perhaps also jColibri could benefit from. All in all, however, this does not represent any problems for the import of components, as the assignment of similarity functions is not tangled within the rest of the code. The interfaces for similarity measures in the two systems are also very similar. jColibri's similarity function interface has a compute method returning the data type double based on the input of two individuals. VolveCreek's similarity measure interface has a similarity method also returning a double based on two case entries and an attribute saying which attribute of the two case entries should be compared. If VolveCreek simply sent the attribute values instead of the entries, it would be the same as that of jColibri. It is rather safe to conclude already now that we can quite easily make the jColibri similarity functions work in VolveCreek.
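The closeness of the two interfaces can be shown with a small adapter. The interfaces below paraphrase jColibri's similarity function and VolveCreek's similarity measure; the names and signatures are simplified stand-ins, not the exact originals.

```java
// Sketch of how close the two similarity interfaces are: if VolveCreek
// hands over the attribute values instead of the entries, any
// jColibri-style function can serve as a Creek-style measure.
public class SimilarityAdapter {
    interface JColibriFunction {
        double compute(Object a, Object b); // two individuals
    }

    interface CreekMeasure {
        double similarity(Entry a, Entry b, String attribute); // two entries plus an attribute
    }

    static class Entry {
        final java.util.Map<String, Object> values = new java.util.HashMap<>();
    }

    static CreekMeasure adapt(JColibriFunction f) {
        // extract the named attribute's values, then delegate to the function
        return (a, b, attribute) -> f.compute(a.values.get(attribute), b.values.get(attribute));
    }

    public static void main(String[] args) {
        JColibriFunction equal = (a, b) -> a.equals(b) ? 1.0 : 0.0;
        Entry e1 = new Entry();
        e1.values.put("colour", "red");
        Entry e2 = new Entry();
        e2.values.put("colour", "red");
        System.out.println(adapt(equal).similarity(e1, e2, "colour"));
    }
}
```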
3.3.5 Problem Solving Methods
VolveCreek does not facilitate a lot of PSMs. It is not obvious how, e.g., jColibri's textual extension would be implemented for VolveCreek. It could surely be done, but there is no organization set up for it at this stage in the development. The closest thing would be the abstract class CBRReasoningStep being extended by RetrieveResult. VolveCreek does not have a task hierarchy like jColibri, so there is no task versus method competence. VolveCreek developers are likely to create some kind of organization for methods as they come further in the development process. Some of the advantages VolveCreek could gain from this are discussed later in this project report. jColibri on the other hand has several problem solving methods, and they are well integrated into the system as extensions of the abstract class CBRMethod. New methods, like other components, can be added to the system by placing them in packages as discussed in section 3.1.2. The methods also have a standard way of communicating with the rest of the system in jColibri, but again VolveCreek has not come that far in
development, which makes it hard to compare them. jColibri methods are described in section 3.1.7. Importing methods to VolveCreek may require some effort since they are very centralized in jColibri and communicate with many other components. At the same time, the VolveCreek system is not well prepared to welcome the new components.
Figure 3.7: The VolveCreek domain
3.3.6 Transforms
VolveCreek has a big advantage in supporting plausible inheritance. Practically, this is done by transforming cases before comparing them, using a transformation method. The transformation uses the causal relations and their inverses, which are found in the case model. When such a relationship is found between two symbols, an inferred entry is added with a weight equal to the strongest path supporting it. This is not available in jColibri.
3.3.7 Reuse
Reuse is an important focus in jColibri, but it is also possible with several parts of VolveCreek. Figure 3.7 is a figure of VolveCreek's domain taken
from [Aam04]. The three levels, as described in section 3.2.1, are the top level concepts, the general domain concepts mapped to the top level, and finally the cases describing specific problems, which are mapped to the general domain knowledge. If we look at what is implemented in each system, we can argue that VolveCreek has its focus in a branch rather low in that figure, although it is present over the whole scale. It does have very general concepts in place in the ontologies, but the implementation is for now focused somewhat towards specific domains, or at least towards specific types of systems. The implementation is generalized as much as possible, however, without slowing down the development process significantly. jColibri on the other hand has focused on the top level. This enables a broader reuse of both knowledge and code. jColibri's textual and web extensions are examples of things that are a bit lower in the figure. Both systems have the potential to be a complete solution over the whole board. Both systems are able to reuse their own knowledge and code, but because of the representational issues, jColibri has problems reusing things from VolveCreek. This was studied in [Sti06], and it is clear that we lose some information or knowledge in the transition. The other way around may be a bit harder to do in a general way, but we should not have the same loss of information or knowledge. jColibri's distributed architecture should make it much easier for VolveCreek to reuse its components. This will become clear in later chapters, as the key goals of this project are to do just that.
3.4 Summary
jColibri is the Java implementation of the COLIBRI system. Major components are: the core, consisting of a state, context and packages; data types such as numbers and strings; connectors and case bases; helper functions such as the similarity functions; and tasks and methods, which together configure and guide the execution of a jColibri application.
VolveCreek is the Java implementation of the Creek system. The implementation used in this project is a development snapshot from Volve AS, and is incomplete. The most important aspects of VolveCreek for this project are the ontologies, entities, relations, reasoning capabilities and the comparison controller. A comparison between the two systems shows that the main issue is representation, while several of the major components are fairly similar. jColibri has come further in its development, and some of its components do not yet exist in VolveCreek. It is likely that VolveCreek will be developed in a direction which will eventually cover most of what jColibri offers. Reuse is a huge focus in jColibri, and hopefully this will make it easier for VolveCreek to reuse its components. The other way around is harder, and some things are lost in the transition [Sti06].
Chapter 4 Construction
This chapter describes how jColibri components can be imported to VolveCreek, and how we can create an application to demonstrate our results. It will be implemented and evaluated in the next chapters. The components should be usable inside the existing VolveCreek system, so tasks will not be imported. VolveCreek would benefit from having a task hierarchy like jColibri, and this will be discussed later. The following three components are the focus in addition to the demonstrator system itself:
- Helper functions
- Data types
- Methods
From the comparison in chapter 3.3, we concluded that the systems are quite similar in many ways, but they are also built on different foundations. Using the components directly is not possible without some kind of bridge between the two systems. The bridge will have to take care of the representational issues somehow. The more general the solution is, the more useful it will be. We will start each section by identifying the requirements for the component, before constructing a possible solution. To construct a mechanism which enables VolveCreek to use jColibri components, we must look at what the components are dependent on to be able to execute successfully. Finally, we will look at the construction of the demonstrator system where these components will be used. Requirements from the VolveCreek system will be taken care of once the components are available to the system. Note that the solution attempts to
import already existing components from jColibri. Some restrictions may not necessarily be with the jColibri system itself, but with the existing components.
4.1 Helper Functions
Similarity functions are the only helper functions we can import, since there are no other helper functions in jColibri yet. The functions are implemented almost the same way in both systems, so it should be fairly easy to import them. We will need a way to represent the VolveCreek symbol and data values as jColibri individuals. This can be done by using a wrapper. This wrapper should wrap the VolveCreek component, and implement jColibri's Individual interface. If this new implementation of the Individual interface is able to access the VolveCreek values, then it should be possible to use the similarity functions directly. Each jColibri similarity function implements the SimilarityFunction interface, which means that it will have a compute method taking two individuals as parameters. At the end of its execution, it will return a double data type which indicates the similarity between the two individuals. This value is between 0 (no similarity) and 1 (equal). The implemented similarity functions are naturally implemented to compare two values of data types existing in jColibri. This means that we must make sure that the data type from VolveCreek is in the same format and that it is compatible with the similarity function. Much of this is also taken care of in the similarity function itself, by returning 0.0 similarity if the data types cannot be compared by that specific similarity function. This is only a way to make the code error-tolerant, however, and we should still make sure that we are using the right data type. To make all jColibri similarity functions available, we can implement a new similarity measure for VolveCreek. The similarity measure constructor can get the name of the jColibri similarity function as a parameter, and use it to create an instance of it (provided that it exists). jColibri similarity functions also use parameters as described in section 3.1.6. To use them, we should also be able to send a list of parameters to the similarity measure.
The parameters can then be used when creating an instance of the jColibri similarity function. A lot of this is very similar to what jColibri does when using its own similarity functions. The class CBRSimilarity has a method getSimilFunction
which does much of what we desire to do in our new VolveCreek similarity measure. This can be used as an inspiration also in the implementation phase.
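The name-based instantiation proposed above can be sketched with Java reflection. DummyEqual stands in for a real jColibri similarity function class; the class-name lookup and reflective compute call are the point being illustrated.

```java
// Sketch of the proposed VolveCreek similarity measure: it receives the
// name of a jColibri similarity function and instantiates it reflectively.
// DummyEqual is a stand-in for a real jColibri function class.
public class ReflectiveMeasure {
    public static class DummyEqual {
        public double compute(Object a, Object b) {
            return a.equals(b) ? 1.0 : 0.0;
        }
    }

    private final Object function;

    public ReflectiveMeasure(String functionClassName) throws Exception {
        // create an instance of the named function, provided that it exists
        function = Class.forName(functionClassName).getDeclaredConstructor().newInstance();
    }

    public double similarity(Object a, Object b) throws Exception {
        // invoke the function's compute method on the two values
        return (double) function.getClass()
                .getMethod("compute", Object.class, Object.class)
                .invoke(function, a, b);
    }

    public static void main(String[] args) throws Exception {
        ReflectiveMeasure m = new ReflectiveMeasure("ReflectiveMeasure$DummyEqual");
        System.out.println(m.similarity("red", "red"));
    }
}
```

A parameter list, as mentioned above, would simply be passed along to the reflective constructor lookup instead of the no-argument constructor used here.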
4.2 Data Types
We will try to import the data type Text from jColibri's textual extension. Data types are implemented as classes of their own in jColibri, and they can be accessed directly. To use jColibri's data types in VolveCreek, we have to create a new entity type, and also add it to the ontologies. The configuration files explained in section 3.1.3 cannot be used in VolveCreek. The implementation should be fairly straightforward, but must be done for each data type in a similar fashion to what is already done with the native Java data types used in VolveCreek. jColibri has a DataFilter data type which is located in the textual extension. This data type stores data in a general way using hash-tables instead of class attributes. One type of data filter is Text. It is composed of a collection of Paragraph instances. Each paragraph contains a collection of Sentence instances, and each sentence a collection of Token instances. Each of these is implemented in a separate class. The textual extension will be used several times in this project, and it is described in further detail in [RDGW05]. When creating the new entity type for Text, the most important method is matches, in addition to some constructors. This method is used to check if a given entity matches the representation of this entity type. This can be checked by making sure that a value exists, and that it is an instance of the jColibri Text data type. After constructing the entity type and defining how it should be matched, the Text data type class in jColibri can be accessed directly. One problem, related to saving the model once a text entity type is used, is that it needs to be serializable. The data types are not serializable in jColibri, but have to be if we want to save the model as a binary file, which is what VolveCreek does. A simple modification of the Text class in jColibri solves this issue. We let the class implement Serializable.
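The effect of that one-line change can be sketched with a serialization round trip. The class below is not jColibri's Text; its Paragraph/Sentence/Token nesting is reduced to a token list for brevity.

```java
import java.io.*;
import java.util.*;

// Sketch of the jColibri change discussed above: a Text-like class made
// Serializable so the model can be saved as a binary file and read back.
public class SerializableText implements Serializable {
    private static final long serialVersionUID = 1L;
    final List<String> tokens = new ArrayList<>();

    public static SerializableText roundTrip(SerializableText t) throws Exception {
        // serialize to bytes, then read the object back, as a binary save would
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(t);
        }
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (SerializableText) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SerializableText t = new SerializableText();
        t.tokens.add("red");
        System.out.println(roundTrip(t).tokens);
    }
}
```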
We do not want to change anything in jColibri since the components should be imported as they are implemented, but this is the only change to jColibri in this project and it is a trivial matter.
4.3 Problem Solving Methods
PSMs require more consideration than other components because of their centralized location within the jColibri system. See section 3.1 for more details on the methods. Their operations cooperate with several other components, and this complicates the import. We will first look at two different approaches, and then describe the construction in greater detail for one of them. The first approach focuses on a strict import of the methods, while the second focuses on making them usable within VolveCreek.
4.3.1 Import Focus
Following the kind of approach taken with other components, we can try to import methods from jColibri into VolveCreek directly. Since the methods use several other components, we also have to import those components for the methods to execute successfully. The most important component is the context, which in turn contains the equally important collections of cases. Both can be specialized by extending existing jColibri classes and interfaces to make them work with the VolveCreek system. Once we have a context, a case base and perhaps a case evaluation list, we can actually start to execute the methods. None of these components are difficult to specialize (extend) or wrap in some way. When looking at specific methods, however, a serious problem arises. All methods are implemented to use very specific aspects of the jColibri system directly during execution. Although we can provide all methods with what they need to execute, their execution will only be partial unless we continue to import almost everything from the jColibri system. Most likely, it is also necessary to modify the methods, which is not something we want to do unless it is a very small and trivial matter. This includes the individuals and other things implemented to support jColibri's representation, and it soon becomes obvious that we are running into representational issues sooner or later. These issues will have to be solved rather specifically for each type of method or application, and some cannot be solved. This is a well known issue with these two systems, and it was also the major difference in section 3.3.1. Future extension packages are also likely to be difficult to import using this approach. While this is still interesting for many applications, we basically have to import the whole jColibri system before we are done. It does not seem like
an attractive option to first wrap and specialize many components, and not end up with something that can be used. At least not when knowing that we can deal with the representation right away, and then use the jColibri components as they are. This is covered by the remaining sections of this chapter, which attempt to use the jColibri components in a small temporary jColibri environment without strictly importing them.
4.3.2 Usage Focus
The essence of this solution is that we can create a minimal native environment for the jColibri methods to execute in, and apply the result back to VolveCreek once the execution has finished. In other words, instead of importing almost everything from the jColibri system to VolveCreek, we rather move some data over to a small jColibri application which executes and returns the result. A typical method will modify parts of the context somehow, and return the updated context. Normally, since this is CBR, the cases are either modified or transformed, or we could even end up with completely new cases. After a method has finished executing, the resulting cases can be found in the context as working cases, i.e., the cases we are currently working on. These cases may be used to update the original cases that are located in the case base or a case persistency. A method may also simply retrieve a few cases based on some filter, and this collection of cases may be used for further processing. In fact, most methods use the working cases instead of the case base directly, because they assume some kind of retrieval method to filter out unnecessary cases that do not need that method's type of processing. The retrieval methods will of course use the case base directly. To get cases into the case base from the case persistency, a method using a connector will typically be executed before the retrieval method, to fill the case base with cases if they are not already there. This is normally done in the precycle, which happens before the CBR cycle (see section 3.1.8, where the precycle task list is used and accomplished by such methods). This means that we need to create an environment with a context and cases. The context and the cases should be easy to work with for the jColibri methods, while they should also reflect the current situation of the VolveCreek application. The jColibri representation should be used.
Since a VolveCreek application will be trying to use a jColibri component at
some time during execution, we need some way of having an updated context at that time. Since the cases and the state of a VolveCreek application change continuously during execution, there is no reason to initialize or keep the context updated at all times. Instead, it would be better to load the cases into the context when we need them, and use the result after the method has executed. This way, we can also minimize the number of cases we have to transfer to the case base. This can be compared to some kind of retrieval function, but it will go both ways, and we can use the jColibri Connector interface to achieve what we want. Connectors can work with any case persistency, as explained in section 3.1.5. Implementing a custom connector for VolveCreek is the best solution, and once a connector is in place, the jColibri methods can work with the normal context using the connector as a bridge between the two systems. Note that this is because the specific jColibri methods themselves use the cases. It would not be necessary to do all of this if we just wanted to invoke a jColibri method from VolveCreek in a general way and not worry about what it actually does. Inspired by jColibri's application template, we can create a class to deal with the execution of a jColibri method from the VolveCreek system. Such a class would have the following requirements:
- Have a constructor taking at least two parameters: an instance of the jColibri method we wish to execute and some kind of filter (an entity type or something else we can use as a filter);
- Use a special VolveCreek connector to retrieve the wanted cases;
- Create a context where the cases will be kept during execution, and initialize it with the cases we want to apply the method on;
- Execute the method affecting the context;
- Use the VolveCreek connector to transfer the affected cases back to the VolveCreek model.
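These requirements can be sketched as follows. All types here are simplified stand-ins for the jColibri and VolveCreek originals: a case is just a map, and the "method" in the demo merely rewrites an entry.

```java
import java.util.*;

// Sketch of the executor class whose requirements are listed above: take
// a method and a filter, fetch cases through a connector, run the method
// against a context, and write the results back.
public class MethodExecutor {
    interface Connector {
        List<Map<String, Object>> retrieve(String filter); // cases out of VolveCreek
        void store(List<Map<String, Object>> cases);       // results back in
    }

    interface Method {
        void execute(List<Map<String, Object>> context);
    }

    private final Method method;
    private final String filter;
    private final Connector connector;

    MethodExecutor(Method method, String filter, Connector connector) {
        this.method = method;
        this.filter = filter;
        this.connector = connector;
    }

    void run() {
        List<Map<String, Object>> context = connector.retrieve(filter); // initialize the context
        method.execute(context);                                        // the method affects the context
        connector.store(context);                                       // transfer the results back
    }

    static String demo() {
        List<Map<String, Object>> results = new ArrayList<>();
        Map<String, Object> c = new HashMap<>();
        c.put("text", "writing");
        List<Map<String, Object>> model = List.of(c);
        Connector conn = new Connector() {
            public List<Map<String, Object>> retrieve(String filter) { return new ArrayList<>(model); }
            public void store(List<Map<String, Object>> cases) { results.addAll(cases); }
        };
        // a stand-in "method" that rewrites the text entry of every case
        new MethodExecutor(cs -> cs.forEach(m -> m.put("text", "write")), "Text", conn).run();
        return (String) results.get(0).get("text");
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```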
4.3.3
First we must create the custom connector which can import VolveCreek cases into the jColibri context. This new connector should implement jColibri's Connector interface. The most important methods of this new connector class are the ones fetching cases from VolveCreek and translating them to
jColibri cases, and the ones taking jColibri cases and storing them in a VolveCreek format. The first will be used before the method execution, and the second will be used after the execution to let VolveCreek know about the results. To lighten this operation we only send a minimal number of cases. Most of the time the system designer should be able to limit the number of cases quite a lot. This filter may, e.g., be an entity type, which is frequently used as a filter in VolveCreek. For each VolveCreek case, we can construct a new jColibri case with an identification equal to the case's name. We may then continue by iterating through all entries and sections of the case, and transform them into jColibri attributes (described in section 3.1.4). This can be done using a recursive method, since both systems' case structures are basically the same (trees). The case attributes may have two different types of values: a symbol value (from the model) or a data value (any data type). We must deal with the two types a bit differently, but it is unlikely that this will be a significant problem. VolveCreek stores cases in XML structures, but they are also available directly from the application during execution. Where the connector fetches the cases does not really matter, and we can create several connectors to suit different requirements. Once the cases are in the context, we can execute the method instance and give it the context as a parameter. The method should now execute successfully and affect the cases in our context. The selected method which will be used in this project is called StemmerMethod. The stemmer method takes a Text data type, and transforms each
word (token) to its stem (base/root form). The stemmer is actually another project called SnowBall, which jColibri uses through its textual extension. A definition of SnowBall from its website reads: Snowball is a small string-handling language, and its name was chosen as a tribute to SNOBOL, with which it shares the concept of string patterns delivering signals that are used to control the flow of the program.
Practically for this project, we will, e.g., see words like "writing" be stemmed to "write", and the same will happen to "writes" and other variations. To do this, we can use the new data type imported in the previous section. A case can be given an entry with a text value, and later go through the stemmer method. Here we also have the possibility to filter cases based on their entries, as we obviously do not have to transfer cases without a
text entry if we want to use the stemmer.
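The stemming effect can be illustrated with a toy suffix-stripper. This is not Snowball; the rules below are crude assumptions that happen to cover the examples in the text, while a real application would call the Snowball stemmer through jColibri's textual extension.

```java
// Toy illustration of the stemming effect described above: "writing" and
// "writes" both reduce to "write". These rules are deliberately crude;
// real stemming (Snowball) is far more careful about which suffixes to remove.
public class ToyStemmer {
    public static String stem(String word) {
        if (word.endsWith("ing")) return word.substring(0, word.length() - 3) + "e";
        if (word.endsWith("es")) return word.substring(0, word.length() - 1);
        if (word.endsWith("s")) return word.substring(0, word.length() - 1);
        return word;
    }

    public static void main(String[] args) {
        System.out.println(stem("writing") + " " + stem("writes") + " " + stem("write"));
    }
}
```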
After the method has been executed, the cases in our context have been changed. We will need a method to update our VolveCreek knowledge. This method should take the values from the jColibri individuals, and place them in VolveCreek components. It is likely that we will need many variations of this method. It really depends on the method used to affect the cases. An important issue which surfaces when we are about to transfer the updated values back to VolveCreek is to make sure that data types and symbol types are created appropriately. There are many things to worry about here, but this project will not solve everything. The most important thing in this project is to see if we can do this at all, and these issues are mostly practical and their solutions are fairly obvious. It is also a question whether or not we want the changes in our model, or if we just want to see the result as a temporary calculation. Some custom work is likely to be necessary for each new type of application, but it should not be extensive.
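The recursive entry/section translation described earlier in this section can be sketched as below. Map-based trees stand in for both systems' real case classes; only the recursion over the shared tree shape is the point.

```java
import java.util.*;

// Sketch of the recursive case translation: VolveCreek sections become
// internal nodes and entries become attribute leaves. Both case
// structures are trees, so one recursive method covers the whole case.
public class CaseTranslator {
    @SuppressWarnings("unchecked")
    public static Map<String, Object> translate(Map<String, Object> node) {
        Map<String, Object> attributes = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : node.entrySet()) {
            if (e.getValue() instanceof Map) {
                // a section: recurse into the subtree
                attributes.put(e.getKey(), translate((Map<String, Object>) e.getValue()));
            } else {
                // an entry: copy the symbol or data value across
                attributes.put(e.getKey(), e.getValue());
            }
        }
        return attributes;
    }

    public static void main(String[] args) {
        Map<String, Object> section = new LinkedHashMap<>();
        section.put("colour", "red");
        Map<String, Object> volveCase = new LinkedHashMap<>();
        volveCase.put("description", section);
        System.out.println(translate(volveCase));
    }
}
```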
4.4 Demonstrator System
The demonstration is based on the example application which is included in VolveCreek. This example has been used throughout this project and [Sti06], and section 3.2.7 provides a description of how it is built. We will be extending this application with our new components: a method called StemmerMethod; similarity functions called Equal, Interval and TokensContained; and a data type called Text. Since the example application already has some cars with attributes, we can extend these cases with more attributes. First off we need to apply the similarity functions on some values of the correct data types. The Equal similarity function simply checks if two individuals are equal or not, and returns either 1 (equal) or 0 (not equal). The function uses the method java.lang.Object.equals to do this, unless a value is a StringEnum, in which case java.lang.String.equals will be used instead. We can test any data type using this similarity function, so we do not have to add any new attributes. The Interval similarity function works with numbers, and the example application already has an attribute for that as well. The age attribute
4.5. Summary
49
y | can be used. Interval computes 1 IN T|x , where x and y are the two ERV AL numbers being compared, and INTERVAL is a number dening how large the interval in which they are compared is. Some dierence between x and y may not mean a whole lot if INTERVAL is large.
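As an illustration of the formula just described (a toy sketch of the computation only, not jColibri's actual Interval class), the interval-scaled similarity can be written as:

```java
// Toy sketch of the Interval similarity formula:
//   sim(x, y) = 1 - |x - y| / INTERVAL
// Illustration only; not the real jcolibri.similarity.local.Interval class.
public class IntervalSketch {
    static double similarity(double x, double y, double interval) {
        return 1.0 - Math.abs(x - y) / interval;
    }

    public static void main(String[] args) {
        // With INTERVAL = 15, comparing the ages 16 and 3 gives
        // 1 - 13/15, roughly 0.133 (the value used later in the testing chapter).
        System.out.println(similarity(16, 3, 15));
    }
}
```

With a large INTERVAL, the fraction |x - y|/INTERVAL stays small, so moderate differences barely reduce the similarity.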
TokensContained should be tested on a String value having several words (tokens). Since we want to check how many words two attributes have in common, we must make sure that at least one word can be found in all attributes of this kind. The similarity value is computed by checking how many tokens the two attributes have in common, and dividing that number by the total number of tokens in the attribute being compared against.
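A toy sketch of this token-overlap idea (mirroring the formula described above, not jColibri's actual implementation) could look as follows:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy sketch of the TokensContained idea: count how many of the second
// string's tokens appear in the first string, then divide by the second
// string's token count. Illustration only; not jColibri's actual class.
public class TokensContainedSketch {
    static double similarity(String a, String b) {
        Set<String> tokensA = new HashSet<>(Arrays.asList(a.split("\\s+")));
        String[] tokensB = b.split("\\s+");
        int shared = 0;
        for (String token : tokensB) {
            if (tokensA.contains(token)) shared++;
        }
        return (double) shared / tokensB.length;
    }

    public static void main(String[] args) {
        // "five six two four" contains 3 of the 5 tokens in
        // "one two three four five", so the similarity is 3/5 = 0.6.
        System.out.println(similarity("five six two four", "one two three four five"));
    }
}
```

The example values match the ones used in the demonstrator system, where this comparison yields 0.6.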
One attribute should also use the new Text data type. This is needed both to test the stemmer and the data type itself. A stemmer can potentially improve the case matching by making two words be recognized as the same word even though they are written in different tenses or in plural. To test this we can run a similarity function before the stemming, and then another after the stemming. If we choose words for this attribute that are not the same before stemming, but become the same after the stemming, we have proven a point. If this can be done, we have also shown that the Text data type itself is working in Creek. By running the resulting application, we will be able to evaluate whether the imported components are working as they should or not.
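To make the before/after-stemming comparison concrete, here is a deliberately naive suffix-stripping stemmer. It is only a stand-in for illustration, not the stemmer jColibri actually uses; it just happens to map the demonstration values "run", "runs" and "running" to the same stem:

```java
// Deliberately naive suffix-stripping stemmer, shown only to illustrate
// how stemming can make related word forms identical. NOT the stemmer
// used by jColibri.
public class ToyStemmer {
    static String stem(String word) {
        // Strip the longest matching suffix first.
        if (word.endsWith("ning")) return word.substring(0, word.length() - 4);
        if (word.endsWith("ing"))  return word.substring(0, word.length() - 3);
        if (word.endsWith("s"))    return word.substring(0, word.length() - 1);
        return word;
    }

    public static void main(String[] args) {
        for (String word : new String[] {"run", "runs", "running"}) {
            System.out.println(word + " -> " + stem(word));
        }
    }
}
```

After stemming, an exact-match similarity function like Equal would judge all three values identical, which is exactly the effect the demonstration sets out to show.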
4.5
Summary
We look at the construction of three components: helper functions, data types and methods. Tasks are not included because we want to use the components in existing VolveCreek applications, and we are not able to use tasks there. The only helper functions we can import are the similarity functions. We are able to construct a general solution which enables us to use any similarity function of jColibri through one similarity measure implemented in VolveCreek. A wrapper is used to represent case entries as individuals before they are compared by the similarity function. There is no general solution for data types, so we must implement one at a time. The jColibri data type classes can be used directly, but we must add a new entity type in VolveCreek. We must also add the new data type to the ontologies and edit some other VolveCreek components.
Erik Stiklestad
The methods require more work than the other components because of their centralized position within the jColibri system. We look at two ways of solving it. First, we have a solution which focuses on the import. This is possible, but when looking at specific methods it is clear that they are not very useful after having been imported. Instead, we go for the second solution, which has a usage focus. This solution creates a minimal jColibri environment in which the method can be executed, before applying the changes back to VolveCreek. To achieve this, we create a connector to transfer cases, and a class inspired by the jColibri application template to create the environment. A demonstrator system will be based on the VolveCreek example. Two similarity functions can be tested on existing attributes, while the rest requires that we extend the application a little bit. The new data type and method also require some work, but can be tested fairly easily.
Chapter 5
Implementation
The implementation chapter describes how the steps outlined in the construction chapter were implemented to create a demonstration. Code snippets, introduced by explanations, are provided.
5.1
Similarity Functions
The construction phase describes a very general solution to import all jColibri similarity functions through one VolveCreek similarity measure. The class implemented to take care of this implements VolveCreek's SimilarityMeasure interface. First of all, we will look at the constructor and the class variables. In the code snippet below, we assume that the name is equal to a Java class name with full path. If it does not exist, it will be caught in an exception later. The parameters are also assumed valid, and of the type CBRSimilarityParam, which is implemented in jColibri. This is all we need, and the constructor is shown below.

protected String name;
protected List<CBRSimilarityParam> parameters;

public JColibriSimilarityMeasure(String name, ArrayList params) {
    this.name = name;
    this.parameters = params;
}
Each similarity measure has a similarity method, and it is here that we have to be careful. This is where the actual similarity is being computed. This method must get two case entries and an attribute as parameters. The case entries have the values we want to give to the jColibri similarity
functions, so we need to put them in Individual objects. This is done in a class called CaseEntryIndividual, which implements the jColibri Individual interface and wraps a VolveCreek CaseEntry. The implementation of the wrapper is not complicated. It is very similar to SimpleIndividual, which is implemented in jColibri. The main difference is the constructors, which are specialized to deal with VolveCreek case entries. The value of the individual is set to either the symbol value or the data value of the case entry, depending on which of the two is provided. Only one will be provided for each entry. The rest of this new individual is the same as the original jColibri SimpleIndividual.

Now that we have two individuals, we are almost ready to invoke a jColibri similarity function. Before we do that, we must create an instance of the wanted similarity function, and we must give it the list of parameters. The code below takes care of this. Note that this approach is very similar to that of jColibri itself, and the code below is similar to a method in jColibri's CBRSimilarity, which has the same functionality.

public SimilarityFunction getSimilarityFuncion() {
    Class cl;
    SimilarityFunction similFunc;
    Iterator it;
    HashMap<String, Object> map;
    CBRSimilarityParam param;
    try {
        cl = Class.forName(this.name);
        similFunc = (SimilarityFunction) cl.newInstance();
        if (parameters != null) {
            it = parameters.iterator();
            map = new HashMap<String, Object>();
            while (it.hasNext()) {
                param = (CBRSimilarityParam) it.next();
                map.put(param.getName(), param.getValue());
            }
            similFunc.setParameters(map);
        }
        return similFunc;
    } catch (java.lang.ClassNotFoundException cnfe) {
    } catch (java.lang.InstantiationException ine) {
    } catch (java.lang.IllegalAccessException ile) {
    }
    return null;
}
The above method creates a new instance of the similarity function, adds the wanted parameters, and catches all exceptions which may be thrown. The catch blocks could be used to write to a log, for example. Now that we have a similarity function and the two individuals that are needed, the similarity method can be completed. The method is shown below.
public double similarity(Entity attribute, CaseEntry entryA, CaseEntry entryB) {
    if (entryA == null || entryB == null)
        return 0.0;
    CaseEntryIndividual a = new CaseEntryIndividual(entryA, attribute);
    CaseEntryIndividual b = new CaseEntryIndividual(entryB, attribute);
    SimilarityFunction simFunc = getSimilarityFuncion();
    double eval = 0.0;
    if (simFunc != null) {
        eval = simFunc.compute(a, b);
    }
    return eval;
}
If all goes well, a double value representing the similarity between the two entries is returned. If not, the similarity is assumed to be zero (irrelevant).
5.2
The Text Data Type
Following are code snippets and explanations which cover the import of the new data type Text from jColibri to VolveCreek. A new entity type is implemented, and it needs a constructor. The constructor takes the knowledge model, the text value and a description as parameters. It creates the new entity and adds it to the knowledge model. The entity is given the identification "TextEntity#" followed by a unique number. The description is attached to the entity. In addition, it is associated with the Text object as its entity data.

public TextEntity(KnowledgeModel model, Text text, String description) {
    super(makeEntity(model, description), true);
    setEntityObject(text);
}

private static Entity makeEntity(KnowledgeModel model, String description) {
    Entity entity = null;
    int i = model.entitySize() + 1;
    while (entity == null) {
        try {
            entity = new Entity(model, "TextEntity#" + i, description);
        } catch (NameAlreadyExistException e) {
            i++;
        }
    }
    return entity;
}
We also need a constructor to be used when we want to create a new entity value of an already existing entity type. Parameters are the knowledge model, the text object and the entity type it will belong to. An instance-of relation is used in the model to associate the entity with its type.

public TextEntity(KnowledgeModel model, Text text, Entity type) {
    this(model, text, "");
    try {
        addRelation(BasicModel.INSTANCE_OF, type);
    } catch (NoSuchRelationTypeException e) {
        e.printStackTrace();
    }
}
Although the code is short, the following method is very important. It defines which entities match, and are thus treated as, the text entity type. We simply make sure that the value exists and that it is an instance of the Text class.

public static boolean matches(Entity ent) {
    return (ent.getEntityObject() != null)
        && (ent.getEntityObject() instanceof Text);
}
In order to make the VolveCreek system display the new entity type in the GUI elements, both in results after execution and in the CKE (VolveCreek Knowledge Editor), we need to add supporting code in several places. It is not very flexible that we have to add this directly, and there should be some kind of configuration files in XML for this, but that is likely to improve as the VolveCreek software matures. The case writer and parser also need to recognize the new data type. Following is a list of places where changes were made during implementation. The changes are minor, and some are just for convenience. We will not go into further detail regarding these changes in the report.

- The case model and associated constants
- The case parser
- The case writer
- A new constructor for case entry to take the new data type as a parameter

The new data type will be tested in the next chapter.
5.3
Methods
We will start by creating the environment in which our methods will be executed. It is called JColibriApp, and it is much like a jColibri application. The difference is that it does not use tasks, and hence we are not using a core object either. The implementation is not fully developed, but it illustrates its points. A few simplifications have been made compared to the solution explained in the construction phase.

First off, we create the constructor. It takes two parameters: an instance of the jColibri method and a case. The case represents the filter, as we can use this case to only fetch cases of its type. This also ensures that they are possible to compare, which may often be a very important point. The constructor then initializes the three main variables we will be using: context, knowledge model and the connector. The cases are retrieved by the connector in retrieveAllCases (also shown in the code below), and then put into the context by setCases. The method is then executed.

public JColibriApp(CBRMethod method, SeparatedCase case1) {
    this.context = new CBRContext();
    this.connector = new VolveCreekConnector();
    this.km = case1.getKnowledgeModel();
    SeparatedCase[] cases = this.km.getCases();
    try {
        this.connector.init2(cases);
        this.context.setCases((List<CBRCase>) this.connector.retrieveAllCases());
    } catch (InitializingException e1) {
        e1.printStackTrace();
    }
    executeMethod(method);
}

public Collection<CBRCase> retrieveAllCases() {
    ArrayList<CBRCase> list = new ArrayList();
    for (int i = 0; i < cases.length; i++) {
        CBRCaseRecord cbrcase = new CBRCaseRecord(cases[i].getName());
        CaseEntry[] entries = cases[i].getEntries();
        for (int j = 0; j < entries.length; j++) {
            // if symbol value
            if (entries[j].getSymbolValueAttribute() != null) {
                cbrcase.addAttribute("" + entries[j].getID(),
                        entries[j].getSymbolValueAttribute(),
                        entries[j].getStatisticalWeight(), null);
            }
            // if data value
            if (entries[j].getDataValue() != null) {
                Object value = entries[j].getDataValue();
                cbrcase.addAttribute("" + entries[j].getValueType().getName(),
                        value, entries[j].getStatisticalWeight(), null);
            }
        }
        list.add(cbrcase);
    }
    return list;
}
Before we execute the method, we need to set some parameters. This implementation will only set the necessary parameters. It is possible to go further and let each method define its own parameters, but this was not given any attention in this project. It will always process the cases. If a specific method should be given some other parameters from the user, it would not be a lot of work to create a way to let the system designer send an array with values and have them added to the parameter hash map.

public void executeMethod(CBRMethod method) {
    HashMap parameters = new HashMap();
    parameters.put("Process Cases", true);
    parameters.put("Process Query", false);
    method.setParameters(parameters);
    try {
        method.execute(this.context);
    } catch (ExecutionException e) {
        e.printStackTrace();
    }
}
Once the method has executed, we can use the result to update our cases. As mentioned earlier, time did not allow for implementing the transfer in both directions here.
5.4
Demonstrator System
We will look at one component at a time as we place them in the demonstrator system.
5.4.1
Using the Text Data Type

To use the new data type, we add an attribute desc, just like we added colour in section 3.2.7. This is an abbreviation for description, but we will just use it to attach some text to each car. The text will contain words that can be stemmed by the StemmerMethod later and become identical. It does not really matter what this text is, as we are just testing it. The code below shows that desc will be an instance of Text, which was added to the case model in section 3.2.7, and that it is an attribute.

...
Entity text = new Entity(km, "desc", "the description of the car");
text.addRelation(SeparatedCaseModel.INSTANCE_OF, SeparatedCaseModel.TEXT);
text.addRelation(SeparatedCaseModel.SUBCLASS_OF, attribute);
...
The first car is given the text "run", the second car is given "runs" and the third car is given "running". These text values are added to desc entries, as shown below. The text variable contains "run" since this is Car Case 1, while "Human Observation" is a value describing how the value was obtained. The latter is not important here.

...
case1.addEntry(km.getEntity("desc"), "Human Observation", text);
...
We will not add this attribute to the causal model. If it had been added, however, it would not have affected this demonstration in any way.
5.4.2
Using the Similarity Functions
To use our new similarity functions, we implement a new comparison controller. This is a very convenient way to make sure that our attributes will be compared using a jColibri similarity function. The new comparison controller will be called DemoComparisonController, and the only thing we will change from the default controller is the getSimilarityMeasure method. We explicitly find attributes of the car, and assign a similarity function to them. Below is one example, which assigns the jColibri Interval similarity function to the age attribute. Notice that we are also sending this function a parameter.

...
if (attribute.getName().equals("age")) {
    CBRSimilarityParam param = new CBRSimilarityParam("INTERVAL", "15");
    ArrayList arrayList = new ArrayList();
    arrayList.add(param);
    // This function computes: sim(x,y) = 1 - (|x-y|/interval)
    simMeasure = new JColibriSimilarityMeasure(
            "jcolibri.similarity.local.Interval", arrayList);
}
...

Matching the name of an attribute directly is OK in this demonstration, but normally it would be smarter to match something a bit more general, like an entity type. We can apply any similarity function we want, although we should of course consider the data type of an attribute before we apply it. If the data type is not right, then we will typically just end up with a similarity of 0.0. Following is a list of case entries and which similarity function they have been assigned in this demonstration. All similarity functions are implemented in jColibri, but now used by VolveCreek.
- age is a number, and has the Interval similarity function assigned;
- words is a collection of words, and has the TokensContained similarity measure assigned;
- desc, and all other attributes already in the application, will be using the Equal similarity measure.
5.4.3
Invoking a Method
To invoke the method, we only have to create a new JColibriApp (see section 5.3), and give it an instance of StemmerMethod and a case. The case was only chosen as a way to filter which cases we will be transferring through the connector.

...
new JColibriApp(new StemmerMethod(), km.getCase("Car Case 3"));
...
As discussed earlier, the implementation is missing a way to affect the VolveCreek cases after the jColibri method has been executed. It is not necessarily difficult to implement, but the implementation phase ran out of time. To cover this gap, we create a method printing the results instead. The method printing the results will be invoked right after the stemming is completed. Code for this method follows.

public void outputResults() {
    ArrayList<CBRCase> cases = (ArrayList<CBRCase>) context.getCases();
    Iterator iter = cases.iterator();
    while (iter.hasNext()) {
        CBRCase case1 = (CBRCase) iter.next();
        Text val = (Text) case1.getDescription().getRelation("Text")
                .getTarget().getValue();
        Collection tokens = val.getTokensList();
        Iterator itern = tokens.iterator();
        while (itern.hasNext()) {
            Token token = (Token) itern.next();
            System.out.println("The original token was: "
                    + token.getData(Token.COMPLETEWORD));
            System.out.println("The stemmed token is: "
                    + token.getData(Token.STEMMEDWORD));
        }
    }
}
The code simply loops through the list of cases, and then through the token list for each case. For each token it prints both the complete (original) word and the stemmed word. The idea is that the stemmed words should be identical when the original words share the same stem.
5.5
Summary
Following the construction phase, we implement the import of the three components we are interested in: helper functions, data types and methods. Helper functions are implemented as planned. The Text data type is also implemented as planned, with some unforeseen work in the case writer and parser. The method is also imported, but with some simplifications. The connector only works one way, and the evaluation phase will have to use a rather basic method which prints the results instead of applying them back to the VolveCreek application. In addition we implement the demonstrator system, which is the VolveCreek example application extended to use the imported components.
Chapter 6
Testing
The code produced during the implementation phase will now be evaluated by running the demonstrator system. This chapter presents and evaluates its results. A further discussion about the solution in general is provided in the next chapter. First off are the results from the comparisons performed by the similarity functions.
6.1
Similarities
Figure 6.1 shows the similarities computed by the similarity functions. words is the first attribute in the list, and it is given a similarity value of 0.6 using the TokensContained similarity function. The description of the first case (called A in the figure) is "five six two four", while the second case (called B in the figure) has "one two three four five". Of the five tokens in the second case's attribute, the first case contains three. Since 3/5 = 0.6, we can conclude that the similarity function is working as it should.

There are several attributes using the Equal similarity function: solution, battery status, colour, starter engine won't turn, engine status, desc and starter engine turns slowly. These need to be identical for the similarity function to give a similarity value of 1, or else they will be given 0. We could call this a boolean function, but the return value is a double. As we can see in Figure 6.1, some of them are equal, and some are not. The results are correct.

A special case is desc, which is using our new Text data type. The values should have been stemmed and the results applied back to the cases, but
Figure 6.1: Results from the similarity functions

as we already know, the implementation phase ran out of time and did not complete this. Since a method was created to at least print the results, we will evaluate what the results would have been in section 6.2, where the method is tested and evaluated.

Finally, we have the age attribute. Its similarity function is Interval, and the values are 16 and 3. This function also uses an INTERVAL number, which was set to 15, as can be seen in section 5.4.2. Placing these numbers in the formula, we get 1 - |16 - 3|/15 = 0.133... This is the same as shown in Figure 6.1, so we now know that all similarity functions have produced the desired results.
6.2
The Stemmer Method
The StemmerMethod was selected as our method component, and it is supposed to stem words stored in the Text data type. We created a case attribute called desc to store such a value, and although the implementation is only partial, the method printing the results produces the following output:
INFO: StemmerMethod BEGIN
INFO: StemmerMethod END
The original token was: run
The stemmed token is: run
The original token was: runs
The stemmed token is: run
The original token was: running
The stemmed token is: run

The code printing this is presented in section 5.4.3. Above, we can see that the method begins stemming, and then ends before the results are printed. The first case had the value "run", which was already stemmed. The second
Figure 6.2: Screen shot from the VolveCreek Knowledge Editor with the demonstrator system loaded

case had "runs", and we can see that it was stemmed to "run". The same happened to the third case's "running" value. All stemmed tokens are identical, which is what we wanted. If we had used the Equal similarity function on the stemmed values of desc, it would have returned 1 instead of 0. The VolveCreek GUI could still have shown the original tokens, as each Token has both an original and a stemmed version; assuming some work on the GUI, the line showing the desc attribute in figure 6.1 could then be as follows:
desc
This would mean a match between "running" and "run", both stemmed to "run", for the attribute desc. Since the values were added in VolveCreek and treated by the code of both systems without incident, the Text data type must also be working properly. The model is also saved successfully because of the serialization fix applied in section 4.2. If we open the model generated and saved during the execution of this demonstration, we can, e.g., view Car Case 1 as in Figure 6.2. The Creek Knowledge Editor is used to view the model. The columns covered by the "desc Frame View" are not set in the demonstration.
6.3
Summary
We expand the example application with our three new components to form a demonstration. The new components are the data type Text, the method StemmerMethod and the similarity functions TokensContained, Interval and Equal. To test them, we add a new entry to the car cases with a Text value. In addition, some other entries are added to test the similarity functions. The similarity functions compute the right similarities on our old and new case entries. The new method and data type are also working, as the Text values are stemmed by the method and give the correct output.
Chapter 7
Discussion
7.1
Importing jColibri Components into VolveCreek
In this project it has been shown that it is possible to import several of the major components of jColibri into VolveCreek. The outstanding and essential issue is representation, but this was not part of the goal of this project, although it has been discussed throughout this report and [Sti06]. A perfect solution to the representation problem is unlikely to exist. We can argue that the representational issues are addressed at some level, but a complete solution was not attempted. Perhaps it is not even necessary or wanted to merge the representations. Maybe we are better off having one system with both representations, giving us the best of both worlds. The second part of this chapter will to some degree end up with such a hybrid system, after the VolveCreek system is placed as an extension in jColibri and brings its own representation. More about this in section 7.2.

The connector in this project is translating from one representation to the other, and we are also using some wrappers. In [Sti06], a simplified model was exported as OWL, and the export of methods was also explained (some
of that work influenced section 7.2). In both cases we have the situation where not everything is necessarily possible to translate, wrap or export. However, the solutions are still good enough to be very practical in several areas. An obvious example is that we can use this approach to test various aspects of jColibri from VolveCreek in a very cheap way, which is interesting in itself because it gives the developers information that may be very valuable and expensive to obtain otherwise. We will now discuss the import of each component.
7.1.1
Similarity Functions
The similarity functions are probably the most successfully imported components in this project, and they were also a very high priority. It is possible to improve the solution quite a bit, but the demonstration shows the important points. Since these functions use values directly in both systems, they are much easier to work with compared to other components. A similarity function is rarely concerned with anything but the value itself, although it can sometimes be interesting to scale the results or otherwise influence the values based on the application in general or its state. With jColibri similarity functions you can send parameters to affect the similarity measures, while in VolveCreek the comparison between two values will always be the same. An example could be seen in the Interval similarity function imported from jColibri in this project, where a variable was used to scale the similarities. It would be a good idea for VolveCreek to implement something equivalent.

The comparison controller plays an important role in VolveCreek, and as mentioned earlier it appears to be a good idea to have such a component. This component may have a lot of potential, and could work as an important layer between the case components and the similarity functions. Without it, we are more dependent on the similarity functions than if we had a layer to control them. With jColibri defining which similarity functions should be used in the case structures, it does appear to be less flexible. jColibri does gain a lot of flexibility by having implemented classes to take care of parameters, and although it has a very thorough distributed architecture, it could potentially become better by adopting some of VolveCreek's recent development.
7.1.2
Data Types
Data types are implemented in a very straightforward way in both systems. We showed that importing the Text data type was possible, and that we had to implement a new entity type before using it. When the entity type was implemented, the new data type could be used directly. It would be preferable to have a general way to import all data types, and it may actually be possible. It really depends on the data types, as some aspects like serialization might become problematic. We can conclude that it is not a lot of work to import them, however.

The VolveCreek system is not well organized when it comes to some components, and the data types are perhaps the most obvious case. VolveCreek would benefit from organizing components in different packages and making them available to the rest of the system through some kind of interface. Right now, adding new data types requires editing the source in several places. This is something which is normally solved as the software matures, however, and it is not important in the beginning.
7.1.3
Methods
The methods were harder to import than other components because of their centralized position in the software. The solution chosen, which had a focus on usage, has both positive and negative sides. A method may do anything to any part of a system, and hence it is hard to create a general solution to import them. The solution implemented in this project works well with many methods, but not all of them. Those that cannot be used with this kind of approach are not interesting for VolveCreek at this point, however. Methods assigned to tasks just to configure the execution while doing nothing to the context are not interesting, as they only exist to guide the application execution in jColibri. Methods using helper functions may be interesting, but since the helper functions can be used directly, they are not really necessary, although often convenient. Methods accomplishing various tasks in specific extensions are very interesting, on the other hand. The stemmer method imported in this project is a good example of such a method, and as we saw, we often have to import other components like data types and similarity measures to really make use of it. It is not necessarily just methods created for jColibri that are interesting for
VolveCreek, but methods that enable jColibri to use external projects. The stemmer is yet again a good example. jColibri is able to use an external stemmer in its system, and VolveCreek would like to do that as well. By reusing jColibri's code, VolveCreek can use the stemmer without having to implement everything. It is of course an advantage to be able to use external projects directly instead of going through jColibri, but at least for testing purposes and comparison of solutions, this is a very cost-effective solution. By enabling reuse of jColibri components, VolveCreek gains access not only to jColibri components but also to all other projects which jColibri can work with. Because of jColibri's focus on standardization and close relation to efforts such as the Semantic Web, we may be looking at quite a few projects as development continues. This is a very important motivation.

Finally, there has been some progress during the last period of this project, which did not make it into this report nor the implementation. It appears that it would in fact be possible to import methods in a more direct way, and that it would not have caused us to modify existing methods directly. This has not been implemented or tested, but the conceptual idea does seem to work. That said, the solution used in this project is not bad, and it does serve its purpose as a possible solution. The conclusion that a strict import was not usable, however, may not have been completely accurate. The fact that another solution would require more work was accurate; however, the amount of additional effort may not be as big as previously estimated. Time did unfortunately not allow for a thorough investigation, and further details are left to future projects.
7.2
VolveCreek as a jColibri Extension

We will now look at an alternative solution, which is to move VolveCreek into jColibri as an extension. Since jColibri has come further in its development, this seems like an attractive alternative in a possible integration or cooperation between the two systems. This approach seems very logical in many ways because of jColibri's focus. Because of the earlier project [Sti06] and the way jColibri was explored and studied in preparation for this project, some of the ideas regarding having VolveCreek as a jColibri extension are partly tested with experimental code. This section will attempt to explain this approach by describing how a possible implementation could be done. It is based on conceptual ideas, but explained by going through an approach that was attempted.
The efforts related to the models are strictly representational, and this was worked on in [Sti06]. The solution presented was good, but not complete, since the model was slightly simplified. Since the representation is unlikely to be completely solved, as mentioned in the beginning of this section, it is important to see if the two can somehow live side by side. The jColibri context could potentially have a VolveCreek knowledge model, and the VolveCreek extension could use that model for its strengths. A specialization of CBRContext would be necessary, and some translation between the two representations would also become necessary. This is similar functionality to the connector implemented in the previous solution.
7.2.2
We will check how well VolveCreek can function as an extension of jColibri by looking at one component at a time. There are not many of each component in VolveCreek, so possible changes to them would not be a huge effort, but we will go through them to see what is needed.
Similarity Measures

The VolveCreek similarity measures will have to become helper functions, and more specifically they must implement the SimilarityFunction interface. In section 3.3.4 we found out that jColibri and VolveCreek are very similar when it comes to these functions, so all we need is to merge the two interfaces, SimilarityFunction and SimilarityMeasure, which is not a major operation. A good solution would be to have both compute and similarity methods in each, so they could be called both with VolveCreek entries from the knowledge model now in the context, and with jColibri individuals. Some other comparison components in VolveCreek could still be used like they are today, but they would have to be moved into methods. This is explained later.

Erik Stiklestad
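The merged interface could look roughly like the following sketch. Only the idea of exposing both jColibri's compute and VolveCreek's similarity entry points is from the text; the signatures, the unified interface name, and the trivial equality measure are assumptions for illustration, and the real jColibri and VolveCreek signatures may differ.

```java
// Hypothetical sketch of merging the two similarity interfaces.
// Signatures are assumed; the real interfaces are richer.
interface UnifiedSimilarity {
    double compute(Object jcolibriIndividualA, Object jcolibriIndividualB);
    double similarity(Object creekEntryA, Object creekEntryB);
}

// A trivial equality measure implementing both views of one function.
class EqualityMeasure implements UnifiedSimilarity {
    public double compute(Object a, Object b) {
        return (a == null ? b == null : a.equals(b)) ? 1.0 : 0.0;
    }
    public double similarity(Object a, Object b) {
        return compute(a, b); // one underlying measure, two entry points
    }
}
```

The point of the double entry point is that the same measure can be invoked with jColibri individuals or with Creek knowledge-model entries without wrapping.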
Data Types
Data types would have to be configured like other data types in jColibri, and they also need an editor. VolveCreek does not have any data types that are custom to the system, so this is not an issue. E.g. we could have configured the URL data type with XML code located in /config/creek/datatypes.xml:
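The XML itself did not survive extraction. The following fragment is a guess at what such a data type entry could look like, modeled on jColibri's declarative configuration style; every element and attribute name here is an assumption, not the actual jColibri schema.

```xml
<!-- Hypothetical sketch of /config/creek/datatypes.xml; element and
     attribute names are illustrative, not the real jColibri schema. -->
<DataTypes>
  <DataType name="URL"
            class="jcolibri.extensions.creek.datatypes.URLType"
            editor="jcolibri.extensions.creek.gui.URLEditor"/>
</DataTypes>
```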
Methods

We would like to have the VolveCreek methods defined through CBROnto, and we would like to give them competencies to solve certain tasks. The definition is not a problem, but we must change the implementation. Each method must implement the CBRMethod interface, which basically means that it must have an execute method. VolveCreek does not have a lot of methods either, so we do not have any serious issues with many existing components. The next section will exemplify how it can be done and also defines the methods through CBROnto. All in all, we do not have any major issues with components, as they are few and can be changed slightly to work with jColibri. The challenge is to make the idea behind the VolveCreek system work in jColibri, not its system components.
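As a rough illustration, a VolveCreek method wrapped for jColibri might look like the sketch below. The CBRMethod and CBRContext types are minimal stand-ins defined here so the example is self-contained; the real jColibri interfaces are richer, so treat this only as the shape of the change.

```java
// Stand-in types for illustration; the real jColibri CBRMethod and
// CBRContext interfaces carry much more than this.
interface CBRContext { }

interface CBRMethod {
    CBRContext execute(CBRContext context) throws Exception;
}

// A do-nothing pre-cycle method in the style described in the text:
// it simply returns the context unchanged.
class CreekPreCycleMethod implements CBRMethod {
    public CBRContext execute(CBRContext context) {
        return context;
    }
}
```

A real method would read from and write back to the context inside execute; the interface is what lets jColibri schedule it against a task.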
7.2.3
Example Application
This example does exactly the same as the original VolveCreek example. The first thing we have to do is to place the entire VolveCreek source in /src/jcolibri/extensions/creek, refactor all paths and create a configuration file for the extension. The configuration follows:
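The actual configuration file was lost in extraction. A fragment like the following is one guess at its shape, again modeled loosely on jColibri's XML configuration conventions; all names here are assumptions.

```xml
<!-- Hypothetical extension descriptor; names are illustrative only. -->
<Extension name="Creek"
           package="jcolibri.extensions.creek"
           config="/config/creek/">
  <Description>VolveCreek as a jColibri extension</Description>
</Extension>
```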
When this is present, we will get an option to enable the extension when we open the jColibri GUI and start making a new system. We will separate the code from the VolveCreek example application and put it into methods implementing CBRMethod, to illustrate how this can be done since we do not have any such methods in VolveCreek. There are others we could have used, but it would have been the same operation. The code for the methods is copied directly from the VolveCreek example file and placed in execute. This is necessary, and it is the main effort that needs to be completed outside the definitions. Following is a list of the methods and a description of what they do:

- CreekPreCycleMethod does nothing, returning the unchanged context.
- AddAttributesMethods adds the attributes to our model.
- AddCausalModelMethod adds the causal model as defined by the VolveCreek example.
- AddCasesMethod adds the car cases.
- SolveCreekCBRMethod executes the RetrievelResults reasoning step implemented for VolveCreek.
- CreekPostCycleMethod does nothing, returning the unchanged context.

Since we have now implemented several methods, we want to define them through CBROnto and give them competencies, and hence we also need some tasks which they can solve. Following is one task, followed by a method which has the competence to solve it. This is defined in /config/creek/tasks.xml and /config/creek/methods.xml.
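The task and method definitions themselves were lost in extraction; a sketch of what such entries might look like follows. It mimics the task/method pairing with competencies described in the text, but every element and attribute name is an assumption.

```xml
<!-- Hypothetical sketch of a task (/config/creek/tasks.xml) and a method
     with the competence to solve it (/config/creek/methods.xml).
     Element and attribute names are illustrative only. -->
<Task name="CreekPreCycleTask"
      description="Prepare the Creek context before the CBR cycle"/>

<Method name="CreekPreCycleMethod"
        class="jcolibri.extensions.creek.method.CreekPreCycleMethod"
        competence="CreekPreCycleTask"/>
```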
Figure 7.1: Configuring the CreekExample in jColibri

... other, and then back again if we want to apply the results. Although this attempt seemed to work very nicely, it is clear that it has a lot to do with the VolveCreek system not having a lot of components ready. The architecture is there, but since the components can also be defined in jColibri without a lot of work, we are not looking at any major issues. jColibri's architecture is written in such a general way that it actually comes close to formalizing CBR, if our experiences with the system are to be believed. For VolveCreek as a commercial product, this may not be quite as interesting as the former solution, and that is why the former solution was given more attention in this project. For academic purposes, it seems like a good idea to start using the jColibri framework.
8.1
Further Work
There are many issues touched by this project that should be further investigated. They can be logically divided into a VolveCreek view and a jColibri view. The VolveCreek view contains work that keeps VolveCreek in focus, assuming continued development of VolveCreek in its current direction. The jColibri view contains work where the focus is to make VolveCreek a part of jColibri, as discussed in section 7.1. These two views are addressed in the following subsections.
8.1.1
VolveCreek View
It is clear that a lot of work is yet to be done, and this project is merely a start. The goals of this project were not to finalize a solution, but to identify efforts and exemplify some of the things that are possible. This means that the code produced in this project is not necessarily meant to be used for anything other than exemplification, although generally good solutions have of course been attempted. Some of the implementation done in this project is rather specific to the demonstrator system. The systems are large and the components many, so a project like this does not have enough time to work thoroughly through everything. It is perhaps not best to continue development on top of this project, but rather to use it to get started quickly should the results of this project be interesting for further development. Each component can be improved quite a lot, and perhaps most of all the methods. Very important in this regard is the task component. A considerable amount of time should be used to analyze how this component can become a part of VolveCreek. Even if that is not possible, VolveCreek should still consider implementing something with similar functionality. VolveCreek does have the CBRReasoningStep abstract class, with variables such as state which can be compared to some things found in jColibri, but it does not provide enough to compensate for the tasks, which are more general. A task and method hierarchy with competencies is a very powerful approach. If they are also defined by ontologies in the model, then that is even better. jColibri has developed a very good solution here, which VolveCreek could use as an inspiration. Other components, namely data types and similarity functions, are imported rather well in this project. The solutions should get better error handling and other things making them easier to use, but other than that we have shown that they can be imported without a lot of problems. Regarding the representation, several ideas have been suggested both in this project and in other projects. None of these projects have had the representation as their main focus, so perhaps that is what needs to be done. Such a representation-focused project should look at the problem from several different angles, including some kind of integration between the representations, hybrid solutions, or some way of using them all together with an interface to translate between them. The representation has become increasingly advanced lately because of the focus on specific knowledge, in addition to general knowledge which has been a focus for a long time. This representation issue should be given a thorough investigation.
8.1.2
jColibri View
The solution where VolveCreek tries to become an extension of jColibri seems to be something that is very possible with what we have today, but it does require a lot of work if we want a flawless integration. First of all, the VolveCreek system must be slightly changed to fit more easily within the jColibri framework, but this is not going to cost VolveCreek any functionality. This is strictly a code design issue, and can be worked through fairly easily since VolveCreek has not been finished and is not being used extensively in a lot of projects. In fact, now is a good time to work on it, if we want to do it.
It is very likely that the developers of VolveCreek have plans that go beyond what this project has discovered, and that this complicates things quite a lot. This solution is mostly interesting for an academic version of Creek.
8.2 Conclusion
This project has analyzed jColibri and VolveCreek, and compared them to find similarities and differences. Based on the comparison, it was shown that it is possible to reuse external library components in the VolveCreek CBR system. The import of three different kinds of jColibri components was constructed, implemented and evaluated: one data type, one PSM and three similarity functions. The construction shows that it is possible to create fairly general solutions for importing components, and that they will work with the existing VolveCreek system. Later chapters, which slightly simplify what was outlined in the construction, show that the implementation is not extensive, and that the components can be imported and tested, which can give VolveCreek developers valuable information. The data types and similarity functions are imported in a clean way, while the methods are more painful. The methods would be easier to import had their implementation not been so centralized in jColibri. It was later realized, however, that the best solution for the methods was perhaps not used in this project. The evaluation shows that the project goals were accomplished, and that the demonstrator system brings the expected results. In the demonstration, the new data type Text is used when adding a new entry to several cases, and the data is later stemmed by an imported method. Several jColibri similarity functions were applied to VolveCreek case entries, and they returned the correct similarity values. The final goal of the project was accomplished in section 7.1, where the alternative approach was discussed and partly tested. It was shown that such a solution is possible, and that the cost is not very high. In fact, from an academic point of view, it may even be preferable to use this approach rather than the one that was the focus of this report. The approach used in this project is more interesting for VolveCreek as a commercial product, however.
Bibliography

[Aam94Nov] Agnar Aamodt. Proceedings from IEEE TAI-94, International Conference on Tools with Artificial Intelligence. New Orleans, November 5-12, 1994. 4 pages.

[Aam04] Agnar Aamodt. ECCBR 2004. LNAI 3155, Springer, 2004. pp. 1-16.

[AP94] A. Aamodt and E. Plaza.

[Bra04] Stein Erlend Brandser. The jCreek Programmer's Guide. URL:

[BSAB04] Tore Brede, Frode Sørmo, Agnar Aamodt, Ketil Bø. A Knowledge URL:

[Cha90] Chandrasekaran, B.

[DGGG05] Belén Díaz-Agudo, Pedro A. González-Calero, Pedro Pablo Gómez-Martín and Marco Antonio Gómez-Martín. Proceedings of Workshop OWL: Experiences and Directions, at International Conference on Rules and Rule Markup Languages for the Semantic Web, 2005.

Proceedings of the 5th European Workshop on Advances in Case-Based Reasoning, 2000.

[Dia02] Belén Díaz-Agudo and Pedro A. González-Calero. In S. Haller and G. Simmons, editors, Proc. of the 15th International FLAIRS'02 Conference. AAAI Press, 2002.

[DINS96] Donini, F. M., Lenzerini, M., Nardi, D., and Schaerf, A. Pages 191-236. URL: reasoning-in-DL.ps.gz.

[GGDF99] Gómez-Albarrán, M., González-Calero, P. A., Díaz-Agudo, B., and Fernández-Conde, C. URL: mercedes.pdf.

[Gru93] T. R. Gruber. Knowledge Acquisition, 1993, Vol. 5, No. 2, pp. 199-220.

[jColibri] Homepage of jColibri.

[RDGW05] Juan Antonio Recio, Belén Díaz-Agudo, Marco Antonio Gómez-Martín and Nirmalie Wiratunga. Proceedings of Case-Based Reasoning Research and Development, 6th International Conference on Case-Based Reasoning, ICCBR 2005, pages 421-435.

[RSDG05] Juan A. Recio-García, Antonio Sánchez, Belén Díaz-Agudo, and Pedro A. González-Calero. In M. Petridis, editor, Proceedings of the 10th UK Workshop on Case Based Reasoning, pages 20-28. CMS Press, University of Greenwich, 2005.

[SRDG05] Antonio Sánchez, Juan A. Recio, Belén Díaz-Agudo, and Pedro González-Calero. Best Poster Award. Twenty-fifth SGAI Int. Conf. on Innovative Techniques and Applications of Artificial Intelligence, AI 2005. Cambridge, UK.

[Ste90] Luc Steels. Components of expertise.

[Sti06] TDT4745 Knowledge Based Systems, Autumn 2006.

[SPGKY07] Evren Sirin, Bijan Parsia, Bernardo Cuenca Grau, Aditya Kalyanpur and Yarden K.
While I still believe Microsoft's decision to include a browser in the OS is a bad one, it does open up an interesting use case: using IE as the user interface for Python. Like a lot of MS applications, IE can be controlled through a COM interface, so with a few lines we can start IE and point it at whatever URL we need.
import win32com.client  # pywin32; Dispatch lives in win32com.client, not win32com

ie = win32com.client.Dispatch("InternetExplorer.Application")
ie.Navigate("")
ie.AddressBar = ie.MenuBar = ie.StatusBar = ie.ToolBar = False
ie.Visible = True
As wonderful as this website is, it isn't a user interface. Using a micro web framework you can create a local web server and then navigate IE to it. Use a template language to make life easier and suddenly you've got a flexible UI for a few minutes' work.
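The "micro framework plus local server" idea can be sketched with nothing but the standard library; Bottle or Flask would play the same role with templating on top. Everything here (page content, handler name, the ephemeral port) is illustrative.

```python
# Minimal sketch: serve a local UI page that IE (or any browser) can render.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body><h1>Hello from Python</h1></body></html>"

class UIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        # Keep the console quiet; this is a UI, not a web server log.
        pass

def start_ui_server(port=0):
    # port=0 asks the OS for any free port; the bound port is in
    # server.server_address[1].
    server = HTTPServer(("127.0.0.1", port), UIHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# With the IE snippet above, you would then do something like:
#   ie.Navigate("http://127.0.0.1:%d/" % server.server_address[1])
```

The daemon thread keeps the server alive for as long as the Python process (and therefore the "application") runs.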
I recently had a discussion with a colleague about the capabilities of WPF for application development. We discussed various new features in WPF, including the capabilities of the WPF DataGrid control. The WPF DataGrid has lots of features for data representation and manipulation. One of its nicest features is that we can change a column's position using drag-and-drop. When my colleague asked me if the same effect was possible for the DataGridRow too, I was clueless. But the question gave me enough motivation to try my hand at implementing drag-and-drop for the DataGridRow. This article demonstrates how to do so.
Step 1: Open VS2010 and create a WPF windows application. Name it as ‘WPF40_DataGrid_Row_Drag_Drop’.
Step 2: To this project, add a new class file and name it as ‘DataAccess.cs’. Write the following code in it:
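The original listing did not survive extraction. A minimal sketch of what DataAccess.cs plausibly contained follows; the article only tells us there is an Employee entity and an EmployeeCollection, so the specific property names and seed data are assumptions.

```csharp
using System.Collections.ObjectModel;

namespace WPF40_DataGrid_Row_Drag_Drop
{
    // Employee entity; the specific properties are assumed for illustration.
    public class Employee
    {
        public int EmpNo { get; set; }
        public string EmpName { get; set; }
    }

    // Collection bound to the DataGrid; ObservableCollection keeps the
    // grid in sync when rows are removed and re-inserted during drag-and-drop.
    public class EmployeeCollection : ObservableCollection<Employee>
    {
        public EmployeeCollection()
        {
            Add(new Employee { EmpNo = 1, EmpName = "Employee 1" });
            Add(new Employee { EmpNo = 2, EmpName = "Employee 2" });
            Add(new Employee { EmpNo = 3, EmpName = "Employee 3" });
            Add(new Employee { EmpNo = 4, EmpName = "Employee 4" });
        }
    }
}
```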
The above class file defines classes for the Employee entity and the EmployeeCollection, which stores Employee records.
Step 3: Open MainWindow.xaml and define an instance of the 'EmployeeCollection' class in Window.Resources. Also define the DataGrid columns and set the AllowDrop property of the DataGrid to 'true'. This will enable drop operations on the DataGrid control.
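The XAML listing was also stripped; something along these lines matches the description. The grid name dgEmployee, the resource key EmpCollection, and the column definitions are assumptions.

```xml
<!-- Hypothetical sketch of the relevant MainWindow.xaml fragments;
     names and bindings are illustrative only. -->
<Window.Resources>
    <local:EmployeeCollection x:Key="EmpCollection"/>
</Window.Resources>

<DataGrid x:Name="dgEmployee"
          ItemsSource="{Binding Source={StaticResource EmpCollection}}"
          AutoGenerateColumns="False"
          AllowDrop="True">
    <DataGrid.Columns>
        <DataGridTextColumn Header="EmpNo" Binding="{Binding EmpNo}"/>
        <DataGridTextColumn Header="EmpName" Binding="{Binding EmpName}"/>
    </DataGrid.Columns>
</DataGrid>
```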
Step 4: Now we need to write some code which will provide the DataGridRow drag-and-drop functionality. To do so, open MainWindow.xaml.cs and define a delegate. This delegate returns the position of the mouse button event or drag-drop event, and accepts an 'IInputElement' object, the interface that establishes the common events and event-related properties and methods for WPF element input processing. Here the input is sent using a mouse button. The delegate is declared at the namespace level.
After declaring the delegate, we need to check whether the mouse is over the DataGridRow targeted by the drag-and-drop operation. To do this, a method is written which returns a Boolean and accepts two parameters: the first is a 'Visual' object, which provides rendering in the WPF application, and the second is the 'GetDragDropPosition' delegate. The method captures the rectangle ('Rect') of the target visual and tests it against the 'Point' supplied by the delegate. If the 'Rect' contains the point, the method returns true.
Now we need a method that returns the DataGridRow being dragged.
Now it’s time for us to define the logic for getting the Drag-Drop index for the DataGridRow. To do so, we need to iterate through the ‘Items’ property of the DataGrid, then retrieve the DataGridRow for the specific index and check whether it is on the Mouse Target with its Rect position, which we defined earlier using ‘IsTheMouseOnTarget’ method
Now declare a class-level variable for keeping track of the DataGridRow index:
int prevRowIndex = -1;
After following the above steps, we now need to implement the 'PreviewMouseLeftButtonDown' event of the DataGrid, which gets the Employee object at the selected index and starts the drag operation.
Step 5: Let us now implement the Drop event of the DataGrid. This tracks the drop index; once the drop operation is completed, the row is removed from its previous index and inserted at the new index.
Finally, hook these events up in the window's constructor.
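The handler bodies and the constructor hookup were likewise stripped; a sketch consistent with the surrounding description follows. prevRowIndex is the field from the article; GetCurrentRowIndex, dgEmployee, and the resource key EmpCollection are assumed names, and Employee/EmployeeCollection are the Step 2 classes.

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

public partial class MainWindow : Window
{
    void dgEmployee_PreviewMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
    {
        prevRowIndex = GetCurrentRowIndex(e.GetPosition);
        if (prevRowIndex < 0)
            return;
        dgEmployee.SelectedIndex = prevRowIndex;
        Employee selected = dgEmployee.Items[prevRowIndex] as Employee;
        if (selected == null)
            return;
        // Begin the drag; the row is actually moved in the Drop handler.
        DragDrop.DoDragDrop(dgEmployee, selected, DragDropEffects.Move);
    }

    void dgEmployee_Drop(object sender, DragEventArgs e)
    {
        if (prevRowIndex < 0)
            return;
        int index = GetCurrentRowIndex(e.GetPosition);
        if (index < 0 || index == prevRowIndex)
            return;
        if (index == dgEmployee.Items.Count - 1)
        {
            // The last row is the grid's insert row; dropping there is refused.
            MessageBox.Show("This row-index cannot be used for drop operations.");
            return;
        }
        var collection = (EmployeeCollection)Resources["EmpCollection"];
        Employee moved = collection[prevRowIndex];
        collection.RemoveAt(prevRowIndex);
        collection.Insert(index, moved);
    }

    // In the constructor, after InitializeComponent():
    //   dgEmployee.PreviewMouseLeftButtonDown += dgEmployee_PreviewMouseLeftButtonDown;
    //   dgEmployee.Drop += dgEmployee_Drop;
}
```

Passing e.GetPosition as the delegate argument works because both event-args types expose a GetPosition(IInputElement) method matching the GetDragDropPosition signature described in Step 4.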
Step 6: That’s it! Run the application, drag a Row from the DataGrid and drop it at a new index. Note: WPF DataGrid always shows the last row which is empty. This row is used for Insert operation in the DataGrid, so the Drag-Drop is not applicable there and we have checked this is in the Drop event. Output, when the screen loads:
After Drag-Drop: Employee No. 4 is dropped to position 2.
and here’s the error message if you try to drop a row in the last row:
Conclusion: In this article, we saw how to implement Drag-Drop effect for the WPF DataGridRow. The entire source code of this article can be downloaded over here
news.digitalmars.com - digitalmars.D

Dec 31 2009 output ranges: by ref or by value? (35)
Dec 31 2009 Inconsistent error/warning messages (3)
Dec 30 2009 Does functional programming work? (35)
Dec 30 2009 algorithms that take ranges by reference (6)
Dec 30 2009 one step towards unification of std.algorithm and std.string (12)
Dec 30 2009 safe quiz (7)
Dec 29 2009 is(ulong : long) evaluates to true, is(int[] S) does not always compile (1)
Dec 29 2009 Re: Comma expression as tuple operator [was Tuples, C#, Java, language (5)
Dec 29 2009 D+Ubuntu+SDL_image said undefined reference (8)
Dec 29 2009 GDC for Windows 2.014 (3)
Dec 28 2009 GDC and Tango problem (4)
Dec 28 2009 Tuples, C#, Java, language design (19)
Dec 27 2009 the const correctness of the this pointer (12)
Dec 27 2009 Concurrency architecture for D2 (48)
Dec 27 2009 Is the automatic opAssign exception-safe? (1)
Dec 26 2009 What is the structure of D arrays? (12)
Dec 26 2009 opAssign(int) necessitates this(this) for automatic opAssign to work (5)
Dec 26 2009 Re: Phobos, std.typecons: arrays of Rebindable - with DOS file (1)
Dec 26 2009 Re: Phobos, std.typecons: arrays of Rebindable - with file (1)
Dec 26 2009 Phobos, std.typecons: arrays of Rebindable (3)
Dec 26 2009 Proposal: generic extensible storage qualifiers (1)
Dec 25 2009 Function meta information (7)
Dec 24 2009 What's C's biggest mistake? (59)
Dec 23 2009 Void parameter type for generic code? (1)
Dec 22 2009 Local variable inside delegate literal (5)
Dec 22 2009 super and inner (1)
Dec 22 2009 question/suggestion about protection attributes (4)
Dec 22 2009 Strange exit code (1)
Dec 21 2009 mixin not overloading other mixins, Bug or feature? (6)
Dec 21 2009 one suggestion for improving the performance of gc and memroy management (4)
Dec 21 2009 Enhanced array appending (24)
Dec 21 2009 writeln & cie. not reliable on Mac OS X (2)
Dec 21 2009 dmd-x64 (34)
Dec 20 2009 Problems with Socket on linux (3)
Dec 20 2009 QSort in D: is this best? (36)
Dec 19 2009 isRef, isLazy, isOut (9)
Dec 18 2009 High quality example programs and commercial software (4)
Dec 17 2009 containers/collections for phobos (6)
Dec 17 2009 disable all member function calls for rvalues? (25)
Dec 17 2009 What's wrong with D's templates? (78)
Dec 17 2009 D2 struct as value in Associative Array (issue 1886) still not (1)
Dec 17 2009 Is std.typecons.Rebindable ever going to work for this? (5)
Dec 17 2009 Ranges in std.range vs foreach ranges (3)
Dec 16 2009 More on Multithreading Performance (1)
Dec 16 2009 Another little test (1)
Dec 16 2009 TDPL goes out for preliminary review (23)
Dec 15 2009 auto ref (32)
Dec 15 2009 D growing pains (was Re: The Demise of Dynamic Arrays?!) (8)
Dec 15 2009 transporting qualifier from parameter to the return value (45)
Dec 15 2009 Binary operation on typedefs (3)
Dec 15 2009 Can -profile switch be used with threaded aplication on D1? (4)
Dec 14 2009 The Demise of Dynamic Arrays?! (13)
Dec 14 2009 Windows multi-threading performance issues on multi-core systems only (20)
Dec 14 2009 Go rant (95)
Dec 14 2009 const ref rvalues (11)
Dec 14 2009 Detecting inadvertent use of integer division (39)
Dec 13 2009 Reference Counting Template (9)
Dec 12 2009 opEquals(const ref yadaYada) (16)
Dec 12 2009 D gets a mention by Verity Stob (7)
Dec 12 2009 Access violation in gcx.Gcx.mark (2)
Dec 11 2009 D2 GUI Libs (69)
Dec 11 2009 File size problem (4)
Dec 11 2009 No D in Great Computer Language Shootout? (34)
Dec 11 2009 D1 garbage collector + threads + malloc = garbage? (17)
Dec 10 2009 "Almost there" version of TDPL updated on Safari Rough Cuts (13)
Dec 09 2009 Evaluation order (1)
Dec 09 2009 Proof of Concept: Binding to and extending C++ objects via a metric (18)
Dec 09 2009 Unique Objects (1)
Dec 09 2009 Kitchener Wants You...r programmer friends who don't know D (6)
Dec 09 2009 About 0^^0 (3)
Dec 09 2009 dsource is unusable (5)
Dec 09 2009 A history lesson for D (2)
Dec 08 2009 dfdfdf (2)
Dec 08 2009 enhancing enums (16)
Dec 08 2009 deprecating the body keyword (6)
Dec 08 2009 Static member functions (8)
Dec 08 2009 Semantics of ^^ (51)
Dec 07 2009 Various shared bugs (3)
Dec 07 2009 Go versus PL "Brand X" (1)
Dec 07 2009 When will the complex types be dumped? (5)
Dec 07 2009 More on semantics of opPow: return type (42)
Dec 06 2009 yank unary '+'? (57)
Dec 06 2009 yank '>>>'? (33)
Dec 05 2009 lazy redux (26)
Dec 05 2009 Why D failed (4)
Dec 05 2009 [OT] Broken newsgroup threads in Thunderbird 3? (15)
Dec 05 2009 switch case for constants-only? (45)
Dec 05 2009 Should ^^ (opPow) be left- or right-associative? (8)
Dec 05 2009 Overload on opDispatch ? (2)
Dec 04 2009 the Python moratorium (2)
Dec 04 2009 Traits problem (6)
Dec 04 2009 Building DMD from sources (14)
Dec 03 2009 private (aka no-inherit) implementations for public methods (3)
Dec 03 2009 Dump special floating point operators (14)
Dec 03 2009 Last Dobb's Code Talk (5)
Dec 02 2009 should postconditions be evaluated even if Exception is thrown? (53)
Dec 02 2009 Intel Threading building blocks for C++ (3)
Dec 02 2009 D piggyback style - is popularity really what D wants? If so... (8)
Dec 02 2009 AMD Performance Profiling Without the Overhead - perhaps of interest (1)
Dec 02 2009 Breaking compatibilyt hurts (12)
Dec 02 2009 Input ranges do not compose (3)
Dec 02 2009 Struct opCall (1)
Dec 02 2009 TDPL draft updated on Safari Rough Cuts (3)
Dec 02 2009 Unification (6)
Dec 01 2009 Tagging (12)
Dec 01 2009 shortcut for dynamic dispatch and operators (14)
Dec 01 2009 Thanks for the improved commit messages! (4)
Dec 01 2009 program for building the program (12)
Nov 30 2009 Quebec City (2)
Nov 30 2009 Unofficial wish list status.(Dec 2009) (1)
Nov 30 2009 Converting Optlink from Assembler to C - Reboot (3)
Nov 30 2009 useSameLockAs (3)
Nov 30 2009 Converting Optlink from Assembler to C (4)
Nov 30 2009 Asserts and side effects (7)
Nov 30 2009 Microsoft's top developers prefer old-school coding methods (3)
Nov 29 2009 The value of asserts (1)
Nov 29 2009 inheriting constructos (24)
Nov 29 2009 Phobos packages a bit confusing (52)
Nov 29 2009 Custom search engine for D resources on the web (4)
Nov 29 2009 owner pointers (2)
Nov 29 2009 dmd optimizer bug under linux (3)
Nov 29 2009 Why not? (8)
Nov 28 2009 1.044 download does not work (3)
Nov 28 2009 Is there any fancy "R.I.P. C++" D banner out there? (2)
Nov 28 2009 new version (30)
Nov 27 2009 Should operator overload methods be virtual? (23)
Nov 27 2009 dynamic classes and duck typing (252)
Nov 27 2009 Reference counting for resource management (6)
Nov 27 2009 Humble revamp of trust, safe and unsafe (3)
Nov 27 2009 Wibble, Wibble (1)
Nov 26 2009 Message to Walter (1)
Nov 26 2009 It's Walter's Dead Horse? (1)
Nov 26 2009 The politics of D (1)
Nov 26 2009 OT Need help in VS 2008 C# project (1)
Nov 26 2009 Should pure nothrow ---> pure nothrow ? (27)
Nov 26 2009 Ongoing problems with shared and immutable (2)
Nov 25 2009 D1: Member function delegate issues (1)
Nov 25 2009 Sort enum values, please (5)
Nov 25 2009 D compiler layers (4)
Nov 24 2009 Non-enum manifest constants: Pie in the sky? (10)
Nov 24 2009 News/info on Go and Java (7)
Nov 24 2009 Design by Contract - most requested Java feature! (5)
Nov 24 2009 Small code example (1)
Nov 24 2009 16bit half floating point type as defined in IEEE-754r in D? (2)
Nov 24 2009 GDC on ARM CPUs? (4)
Nov 24 2009 The great slice debate -- should slices be separated from arrays? (17)
Nov 24 2009 is this a dmd bug ? (9)
Nov 24 2009 Can we have this Syntactic sugar. (1)
Nov 23 2009 Should masked exceptions be an error? (16)
Nov 22 2009 Pure, Nothrow in Generic Programming (29)
Nov 21 2009 Walter on the Concept of Sandcastles (5)
Nov 21 2009 Itcy-BitC closures and curries (5)
Nov 20 2009 RPC and Dynamic function call (7)
Nov 20 2009 Class/Interface Modeling of Ranges (5)
Nov 20 2009 Switch-case made less buggy, now with PATCH! (34)
Nov 20 2009 Deprecate static opCall for structs? (3)
Nov 20 2009 removal of cruft from D (82)
Nov 20 2009 And what will we do about package? (7)
Nov 20 2009 There should be an IFTI spec (1)
Nov 19 2009 Can we drop static struct initializers? (57)
Nov 19 2009 Benchmarks and anonymous delegates (2)
Nov 19 2009 Static arrays size limit, int (9)
Nov 19 2009 Conspiracy Theory #1 (66)
Nov 18 2009 string concatenation (12)
Nov 18 2009 The VELOX research project (1)
Nov 18 2009 static foreach is deferred (7)
Nov 18 2009 Chaining exceptions (13)
Nov 18 2009 Short list with things to finish for D2 (204)
Nov 18 2009 Compile-time DSL to D compilation? (3)
Nov 18 2009 Quicker GC group allocations (2)
Nov 18 2009 :? in templates (8)
Nov 18 2009 Should we make DMD1.051 the recommended stable version? (10)
Nov 17 2009 Developing a browser (Firefox) extension with D (8)
Nov 17 2009 version() abuse! Note of library writers. (15)
Nov 17 2009 OSS memory management (4)
Nov 17 2009 alignment on stack-allocated arrays/structs (11)
Nov 17 2009 lexertl (3)
Nov 17 2009 Going from CTFE-land to Template-land (6)
Nov 16 2009 String Mixins (6)
Nov 16 2009 XMLP (9)
Nov 16 2009 Should the comma operator be removed in D2? (102)
Nov 16 2009 static interface (28)
Nov 16 2009 struct mixins (12)
Nov 16 2009 updated OpenCL headers (11)
Nov 16 2009 Making alloca more safe (55)
Nov 16 2009 Ansi vs Unicode API (29)
Nov 15 2009 D array expansion and non-deterministic re-allocation (121)
Nov 15 2009 opApply Vs. Ranges: What should take precedence? (11)
Nov 15 2009 About switch case statements... (91)
Nov 15 2009 Walter gets no credit? (3)
Nov 15 2009 We should deprecate C-style declarations (11)
Nov 15 2009 Re: Help Wanted: Web-developers (1)
Nov 15 2009 "Mine too" (4)
Nov 15 2009 Aldacron's thoughts on D's future (1)
Nov 14 2009 D: at Borders soon? (13)
Nov 14 2009 Designing Safe Software Systems Part 2 (2)
Nov 14 2009 Go shootout (4)
Nov 14 2009 Go compilation model (5)
Nov 13 2009 Back from the dead - How is D going? (8)
Nov 13 2009 Windows Installer Completely Whacks the PATH (4)
Nov 13 2009 std.metastrings (8)
Nov 13 2009 Std. Lib and Binary Attribution (5)
Nov 13 2009 GDC on Snow Leopard (1)
Nov 12 2009 Are out-of-class-declaration method definitions allowed? (5)
Nov 12 2009 How about Go's... error on unused imports? (75)
Nov 12 2009 Named return type > Out parameters (10)
Nov 12 2009 Serialization + semantics of toString (14)
Nov 12 2009 array literal element types (23)
Nov 12 2009 D library projects (77)
Nov 12 2009 How about Go's &Struct instead of new? (5)
Nov 12 2009 Simple program fails to link - what am I doing wrong? (4)
Nov 12 2009 typedef redux (12)
Nov 12 2009 Getting the error from __traits(compiles, ...) (23)
Nov 12 2009 Array literals REALLY should be immutable (19)
Nov 12 2009 Fixed-size arrays and 'null' (6)
Nov 11 2009 [Bikeshed] getter/readonly mixin util with private/protected backing (4)
Nov 11 2009 Re: Go +/- Unladen Swallow? (2)
Nov 11 2009 D Kernel Development (11)
Nov 11 2009 safe leak fix? (40)
Nov 11 2009 What is an attribute? (4)
Nov 10 2009 typedef: what's it good for? (25)
Nov 10 2009 Go: A new system programing language (96)
Nov 10 2009 Do we really need unsafe? (10)
Nov 10 2009 CPAN for D (13)
Nov 10 2009 Vectorized Laziness (4)
Nov 10 2009 static static (16)
Nov 09 2009 On Iteration (14)
Nov 09 2009 ICTI and ISTI (2)
Nov 09 2009 thank's ddmd ! (22)
Nov 08 2009 objdump src (4)
Nov 08 2009 foreach syntax, std.mixin (15)
Nov 08 2009 scope(exit) considered harmful (27)
Nov 07 2009 So bat, DMD crashed! (2)
Nov 07 2009 opPow, opDollar (46)
Nov 06 2009 Primary D Website Dilapidated? (4)
Nov 06 2009 D's slices have "discretionary sharing semantics" (1)
Nov 06 2009 Personal thoughts about D2 release, D, the Universe and everything (10)
Nov 06 2009 D polymorhic class objects (2)
Nov 06 2009 SIMD/intrinsincs questions (32)
Nov 06 2009 Problems compiling some D libraries. (1)
Nov 06 2009 Please join me... (6)
Nov 06 2009 Char literals (6)
Nov 06 2009 std.metastrings and Variadic CTFE funcs (5)
Nov 06 2009 dmd Changeset [241]: beta ? (2)
Nov 05 2009 "invariant" vs. "const" (3)
Nov 05 2009 Arrays passed by almost reference? (20)
Nov 05 2009 Safety, undefined behavior, safe, trusted (102)
Nov 05 2009 memory-safe implementation of ANSI C (1)
Nov 05 2009 synchronized performance (1)
Nov 05 2009 Semantics of toString (106)
Nov 05 2009 D loosing the battle (19)
Nov 05 2009 Regarding compiler switches (14)
Nov 04 2009 Binding request: mongodb (1)
Nov 04 2009 Introducing Myself (7)
Nov 04 2009 Template Base Classes, Refering to typeof(this) (7)
Nov 04 2009 An interesting consequence of safety requirements (16)
Nov 04 2009 Operator overloading and loop fusion (4)
Nov 04 2009 Re: (Phobos - SocketStream) Am I doing something wrong or is this a (5)
Nov 03 2009 (Phobos - SocketStream) Am I doing something wrong or is this a bug? (1)
Nov 03 2009 safety model in D (82)
Nov 03 2009 Lua JIT 2.0b (4)
Nov 03 2009 (Phobos - SocketStream) Am I doing something wrong or is this a bug? (8)
Nov 02 2009 Memory Management in D: Request for Comment (18)
Nov 02 2009 DIP8: Templating ClassInfo and TypeInfo (4)
Nov 02 2009 Does DMD produce working programs on Snow Leopard? (5)
Nov 02 2009 Generating headers with -H (6)
Nov 02 2009 Proposal: Replace __traits and is(typeof(XXX)) with a 'magic namespace'. (33)
Nov 02 2009 Any version of any D compiler that works properly on Mac OS X 10.6? (6)
Nov 01 2009 XML parser for D1 Phobos and Tango and D2 published (2)
Nov 01 2009 Grokking ranges: some new algorithms and ranges (13)
Nov 01 2009 Can classinfo and typeinfo_struct be templates? (1)
Nov 01 2009 Size does matter: TDPL reaches the size of Modern C++ Design (7)
Oct 31 2009 Add a clean way to exit a process (3)
Oct 31 2009 module hijacking (10)
Oct 31 2009 blast from the (recent) past (4)
Oct 31 2009 Another thread on Jarrett's blog post (5)
Oct 31 2009 Unofficial wish list status.(Nov 2009) (2)
Oct 31 2009 D2 closure and loop variables (3)
Oct 31 2009 C'tors from templates (11)
Oct 31 2009 Genie language updated (1)
Oct 31 2009 importing modules with non-identifier names (11)
Oct 31 2009 Hello World crashes on OS X 10.6.1 (3)
Oct 30 2009 "The Case for D" on Ycombinator (9)
Oct 30 2009 Getting class member offsets at compile time (1)
Oct 29 2009 Safe Systems from Unreliable Parts (7)
Oct 29 2009 Success! (Precisely) (21)
Oct 29 2009 Followup Poll: Why tango trunk instead of 0.99.8? (4)
Oct 29 2009 Permitted locations of a version condition (12)
Oct 29 2009 Is it possible that the Karmic upgrade interferes with dmd? (5)
Oct 29 2009 another stack frame optimization issue (1)
Oct 29 2009 More PC Precision Stuff (8)
Oct 28 2009 The Thermopylae excerpt of TDPL available online (92)
Oct 28 2009 class .sizeof (3)
Oct 28 2009 ICE: template.c:806: failed assertion `i < parameters->dim' (8)
Oct 28 2009 Need some help with this... (4)
Oct 28 2009 What is the air speed velocity of an unladen swallow? (1)
Oct 28 2009 associative arrays: iteration is finally here (59)
Oct 27 2009 Shared Hell (20)
Oct 27 2009 GC Sentinel (7)
Oct 26 2009 Thread-local storage and Performance (5)
Oct 26 2009 GC Precision (16)
Oct 26 2009 The bizarre world of typeof() (14)
Oct 25 2009 Locally Instantiated Templates (2)
Oct 25 2009 TDPL reaches Thermopylae level (35)
Oct 25 2009 langref.org: cookbook/programming examples (2)
Oct 25 2009 Disallow catch without parameter ("LastCatch") (14)
Oct 25 2009 Restricting ++ and -- (17)
Oct 25 2009 TDPL at its 100,000 words anniversary! (5)
Oct 24 2009 Thread safety of alloca (3)
Oct 24 2009 Getting All Instantiations of a Template (2)
Oct 24 2009 LLVM 2.6 Release! (16)
Oct 23 2009 Private enum members (21)
Oct 23 2009 Dmd2 on Mac OS X 10.6 (1)
Oct 23 2009 [OT] What should be in a programming language? (21)
Oct 23 2009 [BUG] Linker produces no output but returns 0 (1)
Oct 23 2009 Mini proposal: rename float.min to float.min_normal (11)
Oct 22 2009 IDE for D? Recommendations? (12)
Oct 22 2009 creal.re and creal.im are not lvalues: is that intentional? (2)
Oct 22 2009 C++ to D Utility? (4)
Oct 22 2009 Illegal instruction in GetOpenFileName() (2)
Oct 22 2009 OT: Hats... Mostly unnecessary? (9)
Oct 22 2009 What Does Haskell Have to Do with C++? (11)
Oct 22 2009 Struct Comparison (6)
Oct 22 2009 Who's using structs nested in functions? (11)
Oct 22 2009 Small performance problem (1)
Oct 21 2009 Targeting C (37)
Oct 21 2009 Semicolons: mostly unnecessary? (128)
Oct 21 2009 No header files? (108)
Oct 21 2009 int always 32 bits on all platforms? (35)
Oct 21 2009 d optimization: delegates vs. mixin (4)
Oct 21 2009 this() not executing code on structs (26)
Oct 21 2009 d2 stability ? (1)
Oct 20 2009 Condition Mutexes (12)
Oct 20 2009 stack frame optimization problem (9)
Oct 20 2009 (Another) XML Module Candidate (1)
Oct 20 2009 Access Violation after declaration second objet of the same type (1)
Oct 19 2009 static arrays becoming value types (57)
Oct 19 2009 Eliminate "new" for class object creation? (31)
Oct 19 2009 d3 ? (8)
Oct 19 2009 LRU cache for ~= (57)
Oct 19 2009 Array, AA Implementations (21)
Oct 19 2009 Scintilla_DFL_Control update ? (1)
Oct 19 2009 Access violation after inheriting. (3)
Oct 18 2009 Proposed D2 Feature: => for anonymous delegates (14)
Oct 18 2009 The demise of T[new] (95)
Oct 18 2009 "Error: long has no effect in expression (0)" (1)
Oct 18 2009 execute file size is much big from dmd1041. (2)
Oct 17 2009 Revamping associative arrays (55)
Oct 17 2009 bug fix is slower (12)
Oct 17 2009 64-bit (33)
Oct 16 2009 Who is Walter Bright? (8)
Oct 16 2009 Working with files over 2GB in D2 (12)
Oct 16 2009 std.stream bugs (1)
Oct 16 2009 dmd support for IDEs and the D tool chain (59)
Oct 15 2009 T[new] misgivings (50)
Oct 15 2009 Aliasing, and more (1)
Oct 15 2009 Error: /PAGESIZE:512 is too small (1)
Oct 15 2009 I feel - (6)
Oct 15 2009 OT Renting a dedicated Server in the US (5)
Oct 15 2009 I feel outraged - (16)
Oct 15 2009 A time to turn - (3)
Oct 15 2009 The D Manifesto (9)
Oct 14 2009 MathExp: KISS or All-Out? (10)
Oct 14 2009 Communicating between in and out contracts (51)
Oct 14 2009 New XML parser written for D1 and D2. (15)
Oct 14 2009 New XML parser written for D1 and D2. - XMLP_01.zip (0/1) (1)
Oct 14 2009 DIP6 (10)
Oct 14 2009 So many years I was following D... (5)
Oct 13 2009 Get name of alias parameter at compile time? (11)
Oct 13 2009 A possible solution for the opIndexXxxAssign morass (46)
Oct 12 2009 Eliminate assert and lazy from D? (49)
Oct 12 2009 opXAssign overloading (4)
Oct 12 2009 Specializing on Compile Time Constants (10)
Oct 12 2009 Goodbye (26)
Oct 12 2009 dmd development model. (12)
Oct 12 2009 Revamped concurrency API (88)
Oct 12 2009 A safer switch? (13)
Oct 11 2009 Geek of the week (6)
Oct 11 2009 Messages both in d.D.ide and d.D ? (7)
Oct 11 2009 Importing, and visibility (2)
Oct 11 2009 SymRational, Computer Algebra (4)
Oct 10 2009 dmd support for IDEs (217)
Oct 10 2009 Phobos.testing (17)
Oct 10 2009 Re: null references redux + Looney Tunes (1)
Oct 10 2009 Rationals Lib? (9)
Oct 09 2009 const "override" and interfaces (2)
Oct 09 2009 clear() (18)
Oct 08 2009 Uniform function call syntax (7)
Oct 08 2009 Array literals' default type (46)
Oct 07 2009 Use of first person in a book (64)
Oct 07 2009 Problem with undefined types with recent DMDs? (4)
Oct 07 2009 uint is NOT just a positive number (3)
Oct 06 2009 D marketplace (20)
Oct 06 2009 Eliminate class allocators and deallocators? (95)
Oct 06 2009 misaligned read handling on various processors (6)
Oct 06 2009 Re: I wrote some D today and it's completely blowing my mind. Ever (3)
Oct 05 2009 Salasana hukassa (3)
Oct 04 2009 Is there a way to get the size of a class object statically? (5)
Oct 04 2009 Unused result (1)
Oct 04 2009 Regression in the latest dmd and the fix. (3)
Oct 03 2009 Google C++ style guide (22)
Oct 03 2009 D2.0 cpp interfacing: what is a C++ unsigned long counterpart in D? (10)
Oct 03 2009 I wrote some D today and it's completely blowing my mind. Ever tried (10)
Oct 02 2009 Arrays template arguments and CT data structures (5)
Oct 02 2009 Don Clugston's article "Member Function Pointers and the Fastest (1)
Oct 02 2009 Multiple subtyping with alias this and nested classes (35)
Oct 02 2009 generalizing hiding rules (3)
Oct 02 2009 scope class members -> in-situ (15)
Oct 02 2009 Re: null references redux + Looney Tunes (78)
Oct 01 2009 Should certain abstract classes be instantiable? (26)
Oct 01 2009 Can D compile for PowerPC Architecture? (4)
Oct 01 2009 What does Coverity/clang static analysis actually do? (33)
Oct 01 2009 Defining some stuff for each class in turn (15)
Oct 01 2009 SoftBound (3)
Oct 01 2009 A possible leak (7)
Sep 30 2009 Code injection (2)
Sep 30 2009 restructuring name hiding around the notion of hijacking (10)
Sep 30 2009 Unofficial wish list status.(Oct 2009) (1)
Sep 30 2009 It's official: One-day D tutorial at the ACCU Conference 2010 in (4)
Sep 30 2009 Video Codecs? (4)
Sep 30 2009 Re: Null references redux + Cyclone (1)
Sep 29 2009 Comparing apples and oranges (11)
Sep 29 2009 Brain-limited informatics problems (3)
Sep 29 2009 Proposal: "void f() { return 7; }" should be illegal (9)
Sep 29 2009 Workaround for installing D1 on a directory with spaces on Windows (1)
Sep 28 2009 anybody working on OpenCL bindings? (10)
Sep 28 2009 opApply ref and "const" iterators (4)
Sep 28 2009 Interesting GCC extensions (16)
Sep 27 2009 opEquals and the Non-Virtual Interface idiom (9)
Sep 26 2009 putting more smarts into a == b (17)
Sep 26 2009 Null references redux (273)
Sep 26 2009 Dispatching on a variant (26)
Sep 25 2009 The Non-Virtual Interface idiom in D (35)
Sep 25 2009 property keyword (6)
Sep 24 2009 resolveProperties (dmd hacking) (4)
Sep 24 2009 Is typedef an alien? (13)
Sep 24 2009 How does one ask a non-invasive question on this (or any other) forum? (4)
Sep 24 2009 should protected imply package? (7)
Sep 24 2009 Why not move cast to the standard library? (27)
Sep 24 2009 override(T) (32)
Sep 24 2009 Regular expression support in D2 (2)
Sep 24 2009 DFL IDE Editor ? (22)
Sep 24 2009 Strict mode (3)
Sep 23 2009 Function template parameter inference from class template argument (5)
Sep 22 2009 contravariant argument types: wanna? (56)
Sep 22 2009 D 1.0: std.regexp incredibly slow! (7)
Sep 22 2009 .init property for char[] type (19)
Sep 21 2009 Pure dynamic casts? (54)
Sep 21 2009 Arbitrary Size Integer Arrays (12)
Sep 21 2009 Problems when trying to write a Dll/COM object in D2 (1)
Sep 19 2009 Rich Hickey's slides from jvm lang summit - worth a read? (25)
Sep 19 2009 memset and related things (23)
Sep 19 2009 Mixin a constructor ? (9)
Sep 18 2009 How Nested Functions Work, part 2 (42)
Sep 17 2009 Noop language (1)
Sep 16 2009 Elliotte Rusty Harold's take on Java (11)
Sep 16 2009 DInstaller overwrites the %PATH% variable (3)
Sep 15 2009 Type unions in D (21)
Sep 14 2009 Compile-time AAs (7)
Sep 14 2009 Writing a language parser in D (19)
Sep 12 2009 std.string phobos (6)
Sep 12 2009 XML ecosystem wrt D (6)
Sep 11 2009 Simple bolt-on unittest improvement (7)
Sep 11 2009 shared adventures in the realm of thread-safety. (17)
Sep 11 2009 Simple partial evaluation (1)
Sep 11 2009 Incremental compilation with DMD (26)
Sep 10 2009 File.seek Mac OS X dmd 2.031 and 2.032 (2)
Sep 10 2009 Function parameters name (2)
Sep 10 2009 Floating point rounding modes: we should restrict them slightly (38)
Sep 09 2009 BigInts in Tango/Phobos (2)
Sep 09 2009 Apache HTTPD server module in D (linux) (5)
Sep 09 2009 Modern Windows GUI visual styles (17)
Sep 08 2009 Template Metaprogramming Made Easy (Huh?) (80)
Sep 08 2009 Cannot convert of type HANDLE to HANDLE (2)
Sep 07 2009 Special Token __FUNCTION__ (5)
Sep 06 2009 D 2.00 official spec (8)
Sep 06 2009 dmg for Snow Leopard x86_64 ? (4)
Sep 06 2009 Derelict+Tango (15)
Sep 06 2009 Bug with patch (3)
Sep 06 2009 Error: mixin is not defined (dmd v2.032) (4)
Sep 06 2009 D on the Objective-C runtime? (8)
Sep 05 2009 Compile-time overflow checks (1)
Sep 05 2009 Iterators Must Go (Ahead) (1)
Sep 05 2009 what happened to std.array.erase? (1)
Sep 04 2009 D naming style? (12)
Sep 04 2009 DDL should become official part of DMD (7)
Sep 04 2009 Re: Descent support dmd2.032? (5)
Sep 04 2009 Descent support dmd2.032? (12)
Sep 03 2009 Compiled dmd2.032 in VC++ 2009! (36)
Sep 03 2009 The Linker is not a Magical Program (16)
Sep 02 2009 Nullable or Optional? Or something else? (58)
Sep 01 2009 Apple Blocks added to C++? (28)
Sep 01 2009 type switch (2)
Aug 31 2009 Unofficial wish list status.(Sep 2009) (2)
Aug 31 2009 template FieldTypeTuple(S) in std.traits (1)
Aug 30 2009 const and immutable objects (12)
Aug 30 2009 How Nested Functions Work, part 1 (71)
Aug 29 2009 OT: What's your favorite codeline? (11)
Aug 26 2009 Does dmd have SSE intrinsics? (43)
Aug 26 2009 Reference value of structs not optimized or inlined? (40)
Aug 25 2009 A hypothetical question (5)
Aug 25 2009 Compilation models for C++ (7)
Aug 25 2009 Java generic inference is unsound (2)
Aug 25 2009 D should disallow forward references (3)
Aug 24 2009 Turkish 'I's can't D either (14)
Aug 23 2009 Making Metaprogramming Pleasant and Fun (3)
Aug 22 2009 Scala future, Sing# (20)
Aug 21 2009 D features - wanna start project (13)
Aug 20 2009 First machine-checked OS kernel (4)
Aug 19 2009 auto with array of strings (BUG?) (6)
Aug 19 2009 OT - Which Linux? (35)
Aug 18 2009 'scope' reference accepts object allocated out of scope: This is a bug. (1)
Aug 17 2009 It's awfully quiet (32)
Aug 16 2009 escaping pointer to scope local array: bug or not? (17)
Aug 16 2009 Partial specialisation is foobarred?! (3)
Aug 15 2009 The future of D 1.x (8)
Aug 15 2009 Languages usability (1)
Aug 14 2009 Call stack mechanism (4)
Aug 14 2009 I don't think this is a bug but... (5)
Aug 13 2009 shared class constructor (2)
Aug 12 2009 unfixed ICE issues (5)
Aug 12 2009 Notepad++ (24)
Aug 12 2009 auto (14)
Aug 12 2009 Mixin Language Documentation -- error (2)
Aug 11 2009 Explicitly saying ref or out when invoking a function (23)
Aug 11 2009 Void-safety (and related things) (11)
Aug 11 2009 Properties, opIndex, and expression rewriting. (5)
Aug 10 2009 GPU/CPU roadmaps (8)
Aug 10 2009 Some questions about dmd development (3)
Aug 10 2009 Unit test practices in Phobos (10)
Aug 09 2009 T[new] (53)
Aug 09 2009 SSE, AVX, and beyond (4)
Aug 08 2009 Compile time code paths (6)
Aug 08 2009 Memory allocation problem (19)
Aug 08 2009 Properties in C# and prop_Foo (4)
Aug 08 2009 Ambiguous comment syntax (6)
Aug 08 2009 unittext extension proposal (8)
Aug 08 2009 YAPP - reminder (4)
Aug 07 2009 calling from C into D (4)
Aug 07 2009 delete and references? (18)
Aug 07 2009 Exponential operator (31)
Aug 06 2009 New Property Article on Wiki4D (1)
Aug 06 2009 Global operator overloading? (3)
Aug 06 2009 Templates - Numeric Types only (16)
Aug 06 2009 Properties and Copy Constructors (4)
Aug 06 2009 proposed syntax change (37)
Aug 06 2009 final switch usage (3)
Aug 06 2009 array slicing policy (3)
Aug 05 2009 Usability of latest DMD 1.x (7)
Aug 05 2009 Triggers (3)
Aug 05 2009 scope, inline, optimizations, scoped attributes (4)
Aug 05 2009 Searching the digitalmars.com/d website (15)
Aug 04 2009 Naming things in Phobos - std.algorithm and writefln (26)
Aug 03 2009 reddit.com: first Chapter of TDPL available for free (41)
Aug 03 2009 Iterators Must Go video online (12)
Aug 02 2009 Contextualizing keywords (51)
Aug 02 2009 rvalues as ref arguments (as seen in digitalmars.D.learn) (2)
Aug 02 2009 property syntax strawman (119)
Aug 01 2009 DIP6: Attributes (97)
Aug 01 2009 YAPP - D properties voting reminder (4)
Aug 01 2009 Omissible Parentheses... (27)
Aug 01 2009 YAPP - D properties - voting (8)
Aug 01 2009 property / getProperty() / setProperty() (36)
Jul 31 2009 Reading bool as the string "true" or "false" (2)
Jul 31 2009 Compile time float binary representation (8)
Jul 31 2009 Unofficial wish list status.(Aug 2009) (1)
Jul 31 2009 YAPP - yet another properties poll (8)
Jul 31 2009 Alignments, semantics from asserts, auto-count (4)
Jul 31 2009 True Properties Poll (23)
Jul 31 2009 Property and method groups (4)
Jul 30 2009 We can see the performance difference from the simple functions in Tango and Phobos (5)
Jul 30 2009 overloading functions against function templates (13)
Jul 30 2009 A simple rule (20)
Jul 30 2009 Some things to fix (10)
Jul 29 2009 Twitter hashtag for D? (6)
Jul 29 2009 shorthand template and static conditionals? (3)
Jul 29 2009 Yet a new properties proposal (29)
Jul 29 2009 Re: poll for properties (2)
Jul 29 2009 Naming things: Phobos Programming Guidelines Propsal (9)
Jul 29 2009 Properties: problems (6)
Jul 29 2009 Re: poll for properties (1)
Jul 28 2009 Re: poll for properties (5)
Jul 28 2009 Properties: a.b.c = 3 (114)
Jul 28 2009 Properties -- another one that gets me (5)
Jul 28 2009 [OT] Google wave (1)
Jul 28 2009 Properties: .sort and .reverse (7)
Jul 28 2009 Properties poll (6)
Jul 28 2009 std.format request (2)
Jul 27 2009 D Framework (13)
Jul 27 2009 The XML module in Phobos (33)
Jul 27 2009 suggestions for improving dsss? (1)
Jul 27 2009 Two optimizations (6)
Jul 27 2009 -deps parameter isn't working in D1.046 (2)
Jul 27 2009 new DIP5: Properties 2 (130)
Jul 26 2009 d2 constness (1)
Jul 25 2009 My own IDE for D (31)
Jul 25 2009 GDB sees no line number information on Mac OS X (1)
Jul 25 2009 The empty statement ";" - when is it useful? (39)
Jul 25 2009 Was DIP4: Properties (2)
Jul 24 2009 Compile-time constness is waaay to strict! (9)
Jul 24 2009 What makes D, D? (18)
Jul 24 2009 DIP4: Properties (3)
Jul 23 2009 DIP4: Properties (7)
Jul 23 2009 DIP4: Properties (51)
Jul 23 2009 DMD 1 & 2 coexisting (6)
Jul 23 2009 [~OT] Finally, a clear, concise D spec! (11)
Jul 23 2009 std.metastrings.ToString! problems (5)
Jul 23 2009 Creating ActiveX in D (10)
Jul 23 2009 Can enum and immutable be unified? (7)
Jul 22 2009 DMD2, std.zlib, and gzip (3)
Jul 22 2009 Two interesting articles (2)
Jul 22 2009 Reddit: why aren't people using D? (340)
Jul 20 2009 C++ concepts, templates, reducing code bloat (2)
Jul 20 2009 (Non) Nesting block comments (17)
Jul 20 2009 C faults, etc (12)
Jul 20 2009 Casts and conversions done right (8)
Jul 19 2009 CUDA with D working after all (12)
Jul 19 2009 ACCU 2010 Call for Papers (1)
Jul 18 2009 Wiki4D, Walter and DIPs (1)
Jul 17 2009 Andy Glew reviews "The Case For D" (2)
Jul 17 2009 cast(public) (18)
Jul 17 2009 Strange behaviour of enums in for loops (5)
Jul 17 2009 Nested Foreach (4)
Jul 16 2009 Random Suggestion: Swap Operator <=>? (19)
Jul 15 2009 Dynamic D Library (47)
Jul 15 2009 Changes in the D2 design to help the GC? (6)
Jul 15 2009 All this talk about finalising D2 makes me worried (16)
Jul 15 2009 constraints,template specialization,drop IFTI (9)
Jul 15 2009 A quick question for Walter and Andrei - LDC (3)
Jul 15 2009 Problem with debugging in Linux (4)
Jul 14 2009 What will happen after D2? (3)
Jul 14 2009 C compatibility (10)
Jul 14 2009 Developing a plan for D2.0: Getting everything on the table (81)
Jul 13 2009 Optimizing Scala (1)
Jul 13 2009 Compiler Page - Request for review (12)
Jul 13 2009 Patronizing Language Design? (17)
Jul 13 2009 Conditional compilation inside asm and enum declarations (48)
Jul 12 2009 modulus redux (14)
Jul 12 2009 phobos unstable builds (4)
Jul 11 2009 Adam Ruppe donates Windows cycles to dmd/phobos (5)
Jul 11 2009 DIP3 - Remove inheritance protection (5)
Jul 11 2009 Oh Dear (60)
Jul 11 2009 Can let DMD output source format file after analysis? (2)
Jul 09 2009 Is DFL still developed? (11)
Jul 09 2009 new DIP2: Const code bloat (10)
Jul 09 2009 Enhancement request (8)
Jul 08 2009 Minor issue - zero-length fixed size arrays in variable-sized (11)
Jul 08 2009 Array indices and (in|ex)clusive ranges (4)
Jul 07 2009 is rdmd ready for primetime on Windows? (3)
Jul 07 2009 new DIP1: DIP Template (36)
Jul 07 2009 Bartosz asks What's Wrong with the Th (8)
Jul 07 2009 Haskell update proposals (3)
Jul 07 2009 Can we fix reverse operator overloading (opSub_r et. al.)? (35)
Jul 06 2009 Error messages D2 (1)
Jul 06 2009 Case Range Statement .. (181)
Jul 06 2009 string mixin reference (8)
Jul 06 2009 std.algorithm.swap and memcpy (2)
Jul 05 2009 Linker problem (10)
Jul 05 2009 Adaptations for a modern back-end (1)
Jul 05 2009 Have language researchers gotten it all wrong? (14)
Jul 05 2009 rdmd (10)
Jul 05 2009 Template error messages (1)
Jul 04 2009 Value type, ref type, how about something in between? (3)
Jul 04 2009 [OT] Shell scripting compatibility (7)
Jul 03 2009 Method overloading and inheritance (4)
Jul 03 2009 invariant/const (1)
Jul 02 2009 Reminds me of? (11)
Jun 30 2009 Unofficial wish list status.(Jul 2009) (2)
Jun 30 2009 resolving template instantiations (3)
Jun 30 2009 Combining Delegate and Functions (22)
Jun 30 2009 Concurrency paradigms (1)
Jun 30 2009 D2 MemoryStream (1)
Jun 30 2009 optlink on multicore machines (21)
Jun 29 2009 New User Experience (16)
Jun 29 2009 improving the D spec (9)
Jun 29 2009 At a crossroad (35)
Jun 28 2009 Give me a break (90)
Jun 28 2009 finding help for D game engine (5)
Jun 27 2009 Windows DMD installer (52)
Jun 26 2009 int nan (40)
Jun 25 2009 Dejavu (47)
Jun 25 2009 Why did you remove std.algorithm.inPlace? (1)
Jun 24 2009 Coming Attractions (11)
Jun 23 2009 CloseHandle missing in phobos/std/file.dtrunk/phobos/std/file.d read? (2)
Jun 23 2009 The dmd compiler license (15)
Jun 22 2009 declaration/expression (10)
Jun 22 2009 Bug in std.range.retro? (2)
Jun 21 2009 Suggestion: Syntactic sugar for Exception handling in D2 (18)
Jun 20 2009 Making changes to Wiki4D (10)
Jun 20 2009 base class protection (4)
Jun 19 2009 The proper case for D. (43)
Jun 18 2009 Ranges (57)
Jun 18 2009 How to make invariant switchable (18)
Jun 17 2009 D2 vs D1 (14)
Jun 17 2009 How to use pure in D 2.0 question on Stack Overflow? (13)
Jun 17 2009 From Reddit (13)
Jun 17 2009 string on stack (1)
Jun 16 2009 final conflicts with override? (1)
Jun 16 2009 Bartosz's latest installment on threads (3)
Jun 16 2009 TDPL's first three chapters now on Safari's Rough Cuts (1)
Jun 16 2009 EnumBaseType conversion (6)
Jun 16 2009 new -vtls flag does nothing? (4)
Jun 16 2009 Andrei writes "The Case for D" (42)
Jun 15 2009 Reading a File That is Being Written To From Phobos (2)
Jun 12 2009 runtime vararg can easily be broken (6)
Jun 12 2009 __FUNCTION__ implemented with mixins and mangles (20)
Jun 10 2009 why implicitly allowing compare ubyte and byte sucks (16)
Jun 10 2009 Inlining asm functions (5)
Jun 10 2009 Count your blessings! (21)
Jun 09 2009 Arrays vs slices (4)
Jun 09 2009 Follow-on question about delegates (4)
Jun 08 2009 Fractal (16)
Jun 08 2009 question about foreach, opApply, and delegates (6)
Jun 07 2009 D Wiki (45)
Jun 07 2009 Should this be a compiler error? (4)
Jun 07 2009 LDC predefined identifiers (13)
Jun 07 2009 'final' function implementations in interface definition (14)
Jun 07 2009 C#4 Covariance/Contravariance (8)
Jun 06 2009 silently accept &parentclassName.func can be bug-prone (1)
Jun 06 2009 Pop quiz [memory usage] (43)
Jun 06 2009 D2's feature set? (8)
Jun 05 2009 GIS and D (8)
Jun 04 2009 DMD + nedmalloc? (1)
Jun 04 2009 Generic Class Alias Syntax (10)
Jun 04 2009 static if syntax (7)
Jun 04 2009 Asking for a visit [OT] (2)
Jun 04 2009 Fun With Generics, Class Templates and Static Ifs (4)
Jun 04 2009 bug in std.algorithm.sort ?? (8)
Jun 03 2009 More pure optimizations (8)
Jun 03 2009 DMD 2.030 doesn't exist (27)
Jun 02 2009 D arithmetic problem (38)
Jun 02 2009 Any parser generators for D that are not abandoned? (6)
Jun 02 2009 Complex Object Generation with Templates/Mixins (3)
Jun 01 2009 Unique as a transitive type? (6)
Jun 01 2009 Compiling in 64 system 32 bit program (5)
May 31 2009 Automatic void initialization (14)
May 31 2009 Unofficial wish list status.(Jun 2009) (1)
May 31 2009 Why are void[] contents marked as having pointers? (54)
May 31 2009 visualization of language benchmarks (15)
May 30 2009 Compile-time generated code... not that nice (7)
May 30 2009 forward ranges must offer a save() function (9)
May 30 2009 Outer names, binding (1)
May 30 2009 D compiler as a C++ preprocessor (5)
May 29 2009 Source control for all dmd source (76)
May 29 2009 HotOS 2009 (1)
May 29 2009 Cuda for C++ (7)
May 28 2009 Operator overloading, structs (46)
May 28 2009 [OT] Convention of Communication (17)
May 27 2009 how to find stack trace of "Error: 4invalid UTF-8 sequence" "Error: std.format int argument expected"? (5)
May 27 2009 D2 std.conv to D1 please (10)
May 26 2009 Struct d'tors and destructive assignment of return vals (5)
May 26 2009 Any IDEs or editors that are compatible with D 2.0? (12)
May 26 2009 [OT] Language design question (3)
May 26 2009 Two about Scala features (1)
May 26 2009 Functional programming, immutablility (2)
May 26 2009 Lazy? (3)
May 25 2009 sqlserver2000 for d2 api ? (7)
May 25 2009 Needing templates/compile-time functions for debugging (7)
May 25 2009 Template limits (8)
May 24 2009 Can you find out where the code goes wrong? (10)
May 24 2009 how to use GC as a leak detector? i.e. get some help info from GC? (21)
May 23 2009 randomSample (10)
May 23 2009 [OT] n-way union (32)
May 23 2009 Asserts inside nothrow function (7)
May 22 2009 !in operator? (11)
May 22 2009 Finalizing D2 (49)
May 21 2009 why allocation of large amount of small objects so slow (x10) in D? (8)
May 21 2009 Dithering about ranges (4)
May 21 2009 ideas about ranges (35)
May 21 2009 It's not always nice to share (3)
May 21 2009 Defining a version after it's tested for (11)
May 21 2009 opCall and template functions (3)
May 20 2009 Iterators Must Go video to come online soon (13)
May 20 2009 "the last change" for ranges (50)
May 20 2009 switch-case (bug, not a proposal) (2)
May 19 2009 Bit fields in struct, why doesn't D support them? (4)
May 19 2009 Why is !() need with default template arguments (6)
May 19 2009 Re: Switch - Full Circle (2)
May 19 2009 Introspection - how? (4)
May 19 2009 Class.classinfo.name and Class.stringof (1)
May 18 2009 Differences between invariant and immutable? (3)
May 18 2009 While we're lynching features, how bout' them omittable parens? (30)
May 18 2009 Notes on 2.030 (2)
May 18 2009 Some memory safety (3)
May 18 2009 The Final(ize) Challenge (56)
May 18 2009 Re: foreach (x; a .. b) and foreach_reverse (x; a .. b) should be (3)
May 18 2009 What a nice bug! (6)
May 18 2009 langpop (3)
May 17 2009 the cast mess (8)
May 17 2009 "with" should be deprecated with extreme prejudice (223)
May 17 2009 foreach (x; a .. b) and foreach_reverse (x; a .. b) should be disallowed (30)
May 17 2009 std.string.sformat broken in DMD 2.030 ? (2)
May 17 2009 Eliminate the baroque floating-point operators a la !<>= (15)
May 16 2009 I made a D Tips page (1)
May 16 2009 Using DMD 1.045 breaks code (2)
May 16 2009 Combining D and C (4)
May 16 2009 [D2] How to start threads? (1)
May 16 2009 typo (1)
May 15 2009 asm trouble (3)
May 15 2009 Dual D2/D1 code base (6)
May 15 2009 Inlining Ref Functions (18)
May 15 2009 std.regex (2)
May 15 2009 JSON in D (10)
May 15 2009 asm code and an inout function argument (10)
May 15 2009 A case for opImplicitCast: making string search work better (9)
May 14 2009 Please Vote: Exercises in TDPL? (93)
May 14 2009 Info on doc pages (1)
May 14 2009 D1 and Phobos Fixes (9)
May 14 2009 std.string and std.algorithm: what to do? (44)
May 13 2009 Semantics of shared (21)
May 13 2009 bmp/png libs? (23)
May 13 2009 Fun with allMembers (12)
May 13 2009 Sharing on reddit (1)
May 13 2009 std.partition is fucked (25)
May 13 2009 Rationale for no opIndex, length in std.algorithm.map? (2)
May 13 2009 Wrt. threadlocal by default: shared module constructors (9)
May 12 2009 D2 Phobos Documentation (4)
May 12 2009 Article on Clojure (1)
May 12 2009 dmd osx page (1)
May 12 2009 D Development procedures (1)
May 12 2009 Migrating to Shared (29)
May 12 2009 Challenge: Automatic differentiation in D (6)
May 12 2009 project oriented (201)
May 11 2009 Overriding Private (6)
May 11 2009 More PLPlot stuff (2)
May 11 2009 When will D1 be finished? (99)
May 11 2009 Rebuild [Descent] (3)
May 11 2009 Rebuild [Descent] (3)
May 10 2009 D users in Munich, Rome, Venice, or Frankfurt? (44)
May 10 2009 DMD's Released Source, Great Stuff! (2)
May 10 2009 How to use C++ static library in d (2)
May 09 2009 Plotting Using PLPlot (15)
May 09 2009 Promoting D (17)
May 09 2009 assignment: left-to-right or right-to-left evaluation? (48)
May 08 2009 GC reuses memory? (2)
May 08 2009 Real Close to the Machine: Floating Point in D (36)
May 08 2009 when will D2 be stable? (11)
May 08 2009 htod for linux? (4)
May 07 2009 What's the current state of D? (126)
May 07 2009 Iterators Must Go (22)
May 07 2009 A possible GC optimization (1)
May 07 2009 Write/Writeln, etc (10)
May 07 2009 SCHEDULED for deprecation (8)
May 06 2009 DCat screw-up - sorry! (2)
May 06 2009 (DMD2.029)error, but (DMD2.028)OK! (3)
May 06 2009 D modules referenced by C source? (8)
May 06 2009 Massive loss for D on Tiobe (96)
May 06 2009 NetBeans (6)
May 06 2009 It is a bug ? (8)
May 06 2009 DCat version 0.04 (1)
May 05 2009 Memory safety, C#, D and more (1)
May 05 2009 Precedence of 'new' vs '.' (11)
May 04 2009 A Modest Proposal: Final class instances (8)
May 04 2009 Associative Arrays and Interior Pointers (7)
May 04 2009 Phobos2: iota, ranges, foreach and more (4)
May 04 2009 Self function (8)
May 04 2009 could someone check this on another system? (10)
May 04 2009 New regex: Find? (11)
May 03 2009 D compiler embedding (4)
May 03 2009 Many questions (41)
May 03 2009 For Leandro (6)
May 03 2009 Destructors and Deterministic Memory Management (20)
May 03 2009 can ibm support the d language? (4)
May 02 2009 Error: xxx is not an lvalue (17)
May 02 2009 Absolutely horrible default string hashing (17)
May 02 2009 Problem with .deb packages (10)
May 02 2009 Re: database api? (3)
May 01 2009 Throwable, Exception, and Error (14)
May 01 2009 I can use D all the time (9)
May 01 2009 DCat now builds with DMD1 and DMD2 (1)
May 01 2009 database api? (4)
May 01 2009 C tips (again) (25)
May 01 2009 Named arguments: solutions, ideas [Was: Re: d assigns name Philosophy] (1)
Apr 30 2009 Unofficial wish list status.(May 2009) (1)
Apr 30 2009 Fixing the imaginary/complex mess (34)
Apr 30 2009 d assigns name Philosophy (11)
Apr 29 2009 I wish I could use D for everything (68)
Apr 29 2009 Thread ctors/dtors (4)
Apr 29 2009 updates for phobos in svn (2)
Apr 29 2009 RFC: naming for FrontTransversal and Transversal ranges (104)
Apr 28 2009 Google Android (7)
Apr 28 2009 immutable, const, enum (18)
Apr 28 2009 struct vs. class, int vs. char. (21)
Apr 28 2009 D2 Multithreading Architecture (30)
Apr 27 2009 Metaprogramming in D (8)
Apr 27 2009 Weird std.stdio threading bug? (12)
Apr 27 2009 anyone able to use '-cov' at all on non-trivial project? (link error ....) (3)
Apr 27 2009 "try" functions (1)
Apr 27 2009 Phobos2: sorting and std.typecons.Tuple (10)
Apr 27 2009 std.algorithm.BinaryHeap (6)
Apr 27 2009 Keyword 'dynamic' of C#4 (7)
Apr 27 2009 DMD Mac installation instruction (3)
Apr 27 2009 Why is utf8 the default in D? (5)
Apr 27 2009 Bug? enum inside class limited to integral values (1)
Apr 26 2009 Splitter quiz / survey (56)
Apr 26 2009 Followed all installation instructions, still no luck (OS X 10.5) (8)
Apr 26 2009 GC Idea: "explicit types" (Repost) (1)
Apr 26 2009 Small iterators/algorithm usage feedback (5)
Apr 26 2009 isIterable(T) (8)
Apr 26 2009 Yet another strike against the current AA implementation (70)
Apr 26 2009 Simple D program - Segmentation Fault (3)
Apr 26 2009 member arguments in D? (14)
Apr 25 2009 Array Appending and DRuntime (7)
Apr 25 2009 Why not std.io instead of std.stdio? (7)
Apr 25 2009 If T[new] is the container for T[], then what is the container for (46)
Apr 24 2009 v2.029: "Type const(int) does not have an Unsigned counterpart"? (2)
Apr 23 2009 RangeExtra (9)
Apr 23 2009 -nogc (60)
Apr 22 2009 Scanner / Parser for D2 (6)
Apr 22 2009 [OT?] Web NG archives updates (5)
Apr 22 2009 Module system of D2: to be fixed still (8)
Apr 21 2009 proper bit fields in the D2 language? (13)
Apr 21 2009 Struct Flattening (12)
Apr 21 2009 Design Patterns in Dynamically-Typed Languages (4)
Apr 21 2009 Few mixed things (23)
Apr 21 2009 two semantic change proposals (4)
Apr 21 2009 error building on Syllable (6)
Apr 21 2009 Automated rebuilding on program startup: tools.remake (3)
Apr 20 2009 randomCover not so random? (3)
Apr 19 2009 DCat - a compact web application server in D. (1)
Apr 19 2009 temporary objects are not allowed to be pass by ref anymore (13)
Apr 18 2009 Arbitrary sized Variant (2)
Apr 18 2009 Second parameter with ranges (2)
Apr 18 2009 Templated Matrix class (2)
Apr 18 2009 GC object finalization not guaranteed (39)
Apr 16 2009 Fully dynamic d by opDotExp overloading (230)
Apr 16 2009 Status of ClassInfo.getMembers / TypeInfo_Struct.xgetMembers (2)
Apr 16 2009 TypeInfoEx and Variant: suggestions? (12)
Apr 16 2009 Shared Memory (2)
Apr 16 2009 Spec of align attribute is a mess (1)
Apr 15 2009 Java Factories Article (1)
Apr 15 2009 Vectors and matrices (13)
Apr 15 2009 D2 CAS, atomic increment (6)
Apr 15 2009 D2 weak references (47)
Apr 15 2009 OPTLINK and LARGEADDRESSAWARE (4)
Apr 14 2009 DWT for D2+Phobos help wanted (2)
Apr 15 2009 dmd.1.043.deb is for AMD 64 (3)
Apr 14 2009 Why Java Doesn't Need Operator Overloading (and Very Few Languages Do, Really) (31)
Apr 14 2009 Bug in std.socket (8)
Apr 14 2009 Navigate from ClassInfo to TypeInfo (18)
Apr 14 2009 Concepts, axioms (3)
Apr 13 2009 Why does readln include the line terminator? (35)
Apr 13 2009 properties using template mixins and alias this (1)
Apr 13 2009 Ternary Search Trees (23)
Apr 12 2009 The great inapplicable attribute debate (29)
Apr 12 2009 D2 porting notes: template function calls (1)
Apr 12 2009 Associative arrays with void values (12)
Apr 12 2009 invariant() (11)
Apr 11 2009 Linux Shared Library Fun (2)
Apr 10 2009 weird behavior for machine epsilon in float, double and real (4)
Apr 10 2009 Latest news (2)
Apr 10 2009 Std Phobos 2 and logging library? (58)
Apr 10 2009 d_time2FILETIME (5)
Apr 09 2009 demangle tool (15)
Apr 09 2009 bindings/win32, RAS, error 632 (7)
Apr 09 2009 bigfloat II (20)
Apr 09 2009 Stoping VS2005 from closing the damn console window (7)
Apr 09 2009 Compiler does some flow analysis with -O..? (7)
Apr 08 2009 dmd 0.149 (2)
Apr 08 2009 Associative Array implementation (1)
Apr 08 2009 bigfloat (62)
Apr 08 2009 Binding Qt API with templated containers (2)
Apr 08 2009 Contract programming syntax (32)
Apr 07 2009 htod not support stlport stuff good enough (4)
Apr 07 2009 Loading DLLs (3)
Apr 06 2009 two patches for dsss (2)
Apr 06 2009 Array Appenders (14)
Apr 06 2009 The new, new phobos sneak preview (100)
Apr 06 2009 cast a LinkSeq (2)
Apr 06 2009 D, so it happend... (33)
Apr 06 2009 minimal evaluation (9)
Apr 06 2009 Thread pause and resume (16)
Apr 06 2009 Nesting in pure functions (8)
Apr 06 2009 Freedom, D and Chapel language (1)
Apr 05 2009 silly question: why can't i alias an expression? (2)
Apr 05 2009 what are the most common bugs in your D apps? (21)
Apr 05 2009 Hg error checking out dwt-win (1)
Apr 05 2009 The version of dsss I use (4)
Apr 04 2009 Multithreaded I/O in the DMD compiler (DDJ article by Walter) (213)
Apr 04 2009 Mixin mystic (2)
Apr 04 2009 Variant[string] associative array ... fail? (4)
Apr 04 2009 VTable Benchmarking (1)
Apr 04 2009 An interview about Scala (1)
Apr 03 2009 Andrei's interface requests (20)
Apr 03 2009 design question (8)
Apr 02 2009 Help binding over to C++ (2)
Apr 02 2009 [OT] can't wait for tomorrow.... (7)
Apr 02 2009 Objective-D, reflective programming, dynamic typing (62)
Apr 02 2009 Multiple Alias This (2)
Apr 02 2009 Link Problem (11)
Apr 02 2009 Stack tracing on Linux ? (15)
Apr 03 2009 X11 binding, XGetWindowProperty, and different behaviour for similar (11)
Apr 01 2009 what prevents dynamic array from being deprecated & tuple being (5)
Apr 01 2009 scala traits == D classes with parameterized base (1)
Apr 01 2009 What Scala? (46)
Apr 01 2009 is(this == bug) ??? (3)
Apr 01 2009 Will D2 be backwards compatible with D (5)
Apr 01 2009 rename this to ctor (8)
Mar 31 2009 Online dmd source repositiries (6)
Mar 31 2009 Four things (4)
Mar 31 2009 Unofficial wish list status.(Apr 2009) (2)
Mar 31 2009 Dynamic loading of D modules (1)
Mar 30 2009 Declaring Ref Variables Inside Function Calls (51)
Mar 30 2009 OffsetTypeInfo (2)
Mar 30 2009 Shouldn't __traits return Tuples? (29)
Mar 30 2009 Licence question about Indemnification (14)
Mar 29 2009 Time to invent a different file format to hold meta data info (9)
Mar 29 2009 The Case for D (4)
Mar 29 2009 Keeping a list of instances and garbage-collection (18)
Mar 29 2009 How to define templates (8)
Mar 28 2009 Tango backtrace hack? (8)
Mar 27 2009 How about a compatibility list? (11)
Mar 27 2009 Eric S. Raymond on GPL and BSD licenses. & Microsoft coming to Linux (50)
Mar 27 2009 D's "accessors" are like abusing operator overloads (17)
Mar 26 2009 Signaling NaNs Rise Again (8)
Mar 26 2009 Is 2X faster large memcpy interesting? (13)
Mar 26 2009 build a project (18)
Mar 26 2009 State of Play (78)
Mar 25 2009 Building D compiler in MSVC IDE (9)
Mar 25 2009 Can't get Tango to work on WinXP, with DMD (2)
Mar 25 2009 Strategies for null handling in Java (1)
Mar 25 2009 Allowing relative file imports (31)
Mar 24 2009 Concurrency in D (3)
Mar 24 2009 What is the sub interface? (3)
Mar 24 2009 3d graphics float "benchmarks" (5)
Mar 23 2009 Benchmark of try/catch (25)
Mar 22 2009 crossplatform linking? (10)
Mar 22 2009 instance of sub class created in a dll has wired behaviour when cast (4)
Mar 22 2009 Slashdot article about multicore (5)
Mar 22 2009 Is DSSS still being developed? (12)
Mar 22 2009 What can you "new" (209)
Mar 21 2009 Licences issues with d runtime (5)
Mar 21 2009 =?utf-8?B?4oCYZmluYWzigJkgdmFyaWFibGVzOiBhbiBhbHRlcm5hdGl2ZSBkZWZpbmk=?= (1)
Mar 21 2009 Please integrate build framework into the compiler (31)
Mar 21 2009 Returning a struct by reference (10)
Mar 20 2009 Response files (41)
Mar 20 2009 Library for Linear Algebra? (10)
Mar 20 2009 New pragma lib - better handling of dynamic libs (1)
Mar 20 2009 opImplicitCast (2)
Mar 20 2009 Dynamic loading of D modules (2)
Mar 19 2009 DMC to Create C .lib ? (11)
Mar 19 2009 Object documentation (2)
Mar 19 2009 Immutable + goto? (5)
Mar 19 2009 foreach/opApply (5)
Mar 19 2009 for in D versus C and C++ (37)
Mar 19 2009 typeof reference to class object (2)
Mar 19 2009 opCall within with (6)
Mar 19 2009 No -c no main() (17)
Mar 19 2009 goto at end of block (2)
Mar 19 2009 What is throwable (17)
Mar 18 2009 Just a thought: pure functions & a compacting GC (1)
Mar 18 2009 reconsideration of header files (1)
Mar 18 2009 The Joy of Signalling NaNs! (A compiler patch) (14)
Mar 18 2009 A bug in the back-end? (1)
Mar 18 2009 D2 std_array is a dead link (1)
Mar 18 2009 class Exception (5)
Mar 18 2009 new D2.0 + C++ language (74)
Mar 17 2009 eliminate writeln et comp? (53)
Mar 17 2009 utf-8? (11)
Mar 17 2009 Struct constructors and opCall (4)
Mar 16 2009 .NET on a string (29)
Mar 15 2009 LG Post Asking about D (1)
Mar 15 2009 Bug? package attribute half working on child packages. (4)
Mar 15 2009 Associative Arrays and GC (1)
Mar 15 2009 Populating Date struct from d_time value (2)
Mar 14 2009 Proposal: adding condition variable to object monitors (8)
Mar 14 2009 Updated D Benchmarks (13)
Mar 14 2009 memcpy vs slice copy (24)
Mar 14 2009 Manual Deletion from Destructor (11)
Mar 14 2009 Proposal for fixing import("file") (5)
Mar 14 2009 Octal literals: who uses this? (18)
Mar 13 2009 Proposal: fixing the 'pure' floating point problem. (47)
Mar 12 2009 catchy phrase for this idiom? (53)
Mar 11 2009 winsamp sample crashed windbg (9)
Mar 11 2009 tango build on mac with dsss and dmd calls ar twice (3)
Mar 11 2009 Aliasing immutable and mutable data (2)
Mar 10 2009 Idea For Attributes/Annotations (4)
Mar 10 2009 Using dmd on older machines (11)
Mar 10 2009 Encapsulating Locked File Appends (8)
Mar 10 2009 DWT failed to build (5)
Mar 10 2009 DMD 1.033 - Linux initZ errors (5)
Mar 10 2009 std.date Cyclic dependency (3)
Mar 10 2009 setting version (4)
Mar 09 2009 Dynamic C++ (1)
Mar 09 2009 dmdfind (2)
Mar 09 2009 DMD Mac and linking with frameworks (8)
Mar 09 2009 Trivial benchmarking on linux (1)
Mar 09 2009 pmap (2)
Mar 08 2009 "Re-enabled auto interfaces"? (1)
Mar 08 2009 libpthread linker error (4)
Mar 08 2009 Getting started on a Mac (12)
Mar 08 2009 Non-D linkage with out and inout parameters. (3)
Mar 08 2009 pure functions without invariant (a get-rid-of-const proposal) (4)
Mar 07 2009 Returning const? -- A potential solution (28)
Mar 07 2009 Assignment and down-casting (4)
Mar 07 2009 harmful null dereference straight in dmd (3)
Mar 07 2009 in vs. const (9)
Mar 07 2009 D compiler benchmarks (30)
Mar 06 2009 important proposal: scope keyword for class members (25)
Mar 06 2009 Symbol Undefined:_D3dfl3all12__ModuleInfoZ on dmd1.041 (3)
Mar 06 2009 D programming practices: object construction order (1)
Mar 06 2009 typedef (5)
Mar 05 2009 compiling dmd on linux (7)
Mar 05 2009 Associating symbols with attributes (D 2.0) (7)
Mar 05 2009 Possible bug with ? : and const (3)
Mar 05 2009 thanks, Walter! (1)
Mar 05 2009 thanks, Walter! (2)
Mar 04 2009 rdmd (5)
Mar 04 2009 return in void functions (19)
Mar 03 2009 Compilation of .di needes the imported file and -J setup correctly (3)
Mar 03 2009 Null references (oh no, not again!) (186)
Mar 03 2009 Actor-based Concurrency (2)
Mar 03 2009 The Sweet With (11)
Mar 03 2009 LLVM updates (4)
Mar 02 2009 General reference types (3)
Mar 01 2009 const?? When and why? This is ugly! (126)
Mar 01 2009 Small Changes for Java JDK7 (8)
Mar 01 2009 std.locale (109)
Mar 01 2009 Installing dmd (12)
Mar 01 2009 Important contributions to D by non-guru programmers! (7)
Feb 28 2009 Unofficial wish list status.(Mar 2009) (10)
Feb 28 2009 Can D code be linked as part of C programs? (plug-ins in D?) (4)
Feb 28 2009 Need clear error msg. (template instantiating) (5)
Feb 28 2009 __FUNCTION__ (45)
Feb 28 2009 string-arguments of functions in 'std.file' (5)
Feb 28 2009 D for projects similar to Forth Interpreters? (10)
Feb 27 2009 Resizable Arrays? (15)
Feb 27 2009 Inlining Virtual Functions (2)
Feb 27 2009 First class lazy Interval (28)
Feb 26 2009 Promote D in wikipedia (8)
Feb 26 2009 Recent Phobos changes (7)
Feb 25 2009 Nick Sabalausky (2)
Feb 25 2009 Beginning with D (14)
Feb 25 2009 Anyone want to maintain Planet D? (2)
Feb 25 2009 Now it works. Weird (4)
Feb 24 2009 Inline Functions (4)
Feb 24 2009 Inline Functions (27)
Feb 23 2009 Mixin virtual functions -- overloads (6)
Feb 22 2009 std.bind documentation sucks hard (46)
Feb 22 2009 Is implicit string literal concatenation a good thing? (23)
Feb 21 2009 Tango: Out of Date Installation Instructions (9)
Feb 20 2009 assert or execption case program hang in multi thread (8)
Feb 20 2009 How to initialize immutable global static array? (1)
Feb 19 2009 primitive vector types (59)
Feb 18 2009 Is str ~ regex the root of all evil, or the leaf of all good? (86)
Feb 18 2009 core.sync? (2)
Feb 18 2009 problem with declaration grammar? (12)
Feb 17 2009 memory-mapped files (17)
Feb 17 2009 earthquake changes of std.regexp to come (55)
Feb 17 2009 Memory allocation failed (8)
Feb 17 2009 Algorithms in the std lib (8)
Feb 16 2009 range stuff (4)
Feb 16 2009 OPTLINK needs to die. (43)
Feb 16 2009 std.file.read implementation contest (15)
Feb 16 2009 DB/DBMS in D (6)
Feb 15 2009 DMD 1.039 slowness... (11)
Feb 15 2009 Is dmd-osx temporary unavailable? (3)
Feb 15 2009 ddbi died calculation (1)
Feb 15 2009 ref? (17)
Feb 15 2009 Why doen't this code work? (1)
Feb 14 2009 forward reference hell! (4)
Feb 14 2009 boxing, struct opAssign and constructors (2)
Feb 14 2009 Some Ideas for Dynamic Vtables in D (9)
Feb 13 2009 default random object? (96)
Feb 12 2009 nth_element implementation? (3)
Feb 12 2009 Templates at runtime (8)
Feb 11 2009 API Evolution (3)
Feb 11 2009 Version declaration proposal (13)
Feb 10 2009 Recursive discriminated unions [Phobos2] (8)
Feb 10 2009 Why version() ? (123)
Feb 10 2009 OpenCL bindings (7)
Feb 09 2009 Structs implementing interfaces in D1 (25)
Feb 09 2009 Proposal : allocations made easier with non nullable types. (51)
Feb 08 2009 (non)nullable types (84)
Feb 08 2009 escaping addresses of ref parameters - not (29)
Feb 08 2009 Will D programe run on google android? (3)
Feb 07 2009 Old problem with performance (148)
Feb 07 2009 std.string and ranges (267)
Feb 07 2009 Class extensions in D? (1)
Feb 06 2009 Tuples (4)
Feb 06 2009 The path to unity (61)
Feb 06 2009 If !in is inconsistent because of bool/pointer, then so is ! (17)
Feb 05 2009 goto (24)
Feb 05 2009 Non-fragile ABI in D? (7)
Feb 05 2009 property syntax problems (10)
Feb 04 2009 Inline assembler in D and LDC, round 2 (31)
Feb 04 2009 Lambda syntax, etc (61)
Feb 03 2009 History of C (2)
Feb 02 2009 Asmjit - JIT asm compiler (for C++) (18)
Feb 02 2009 Dual CPU code (13)
Feb 01 2009 Multiple Tuple IFTI Syntactic Sugar (7)
Feb 01 2009 Re: Question of delegate literals and return from (1)
Jan 31 2009 std.patterns: it was about time (11)
Jan 31 2009 Unofficial wish list status.(Feb 2009) (3)
Jan 31 2009 Question of delegate literals and return from function(GDC,DMD,LDC) (5)
Jan 31 2009 D versus Objective C Comparison (68)
Jan 30 2009 ch-ch-update: series, closed-form series, and strides (56)
Jan 30 2009 SlickEdit 9 will include support for D (2)
Jan 30 2009 Feasibility of std.range and std.algorithm in D 1.0 (4)
Jan 30 2009 Scientific computing with D (55)
Jan 29 2009 Let's do front, back, popFront, and popBack! (14)
Jan 29 2009 Heap: container or range? (30)
Jan 29 2009 Scripting in D on Windows (3)
Jan 29 2009 People speaketh (25)
Jan 29 2009 stdc autoconfig through precompiling (2)
Jan 28 2009 What's the deal with __buck? (13)
Jan 28 2009 Please vote once and for good: range operations (76)
Jan 27 2009 ch-ch-changes (109)
Jan 27 2009 IsValueType template? std.traits.hasAliasing? (8)
Jan 27 2009 Compiler as dll (48)
Jan 26 2009 Checked oveflows in C# (3)
Jan 25 2009 Where to report bugs in DM tools? (2)
Jan 25 2009 Nothrow, pure in druntime (11)
Jan 25 2009 proxies (2)
Jan 25 2009 ref returns and properties (30)
Jan 25 2009 D to C compiler? (22)
Jan 25 2009 Could we get a LP64 version identifier? (28)
Jan 24 2009 range and algorithm-related stuff (31)
Jan 24 2009 Getting D language patch into GDB (5)
Jan 24 2009 OT: Worthwhile *security-competent* web host? (7)
Jan 22 2009 Protection in BaseClassList: semantics undefined? (3)
Jan 21 2009 The magic behind foreach (was: Re: Descent 0.5.3 released) (16)
Jan 21 2009 volatile asm (3)
Jan 21 2009 Templates and virtual functions (9)
Jan 21 2009 Glibc hell (13)
Jan 21 2009 wiki 4 d (2)
Jan 19 2009 Can we get rid of opApply? (37)
Jan 19 2009 Ada Vs C (with some D mixed in) (6)
Jan 19 2009 Fan language (1)
Jan 18 2009 Pluggable type sytems (14)
Jan 18 2009 TempAlloc and druntime GC (7)
Jan 18 2009 dsource considered harmful (10)
Jan 17 2009 Idea: norecover blocks (6)
Jan 17 2009 Any chance to call Tango as Extended Standard Library (199)
Jan 17 2009 Patten matching example (1)
Jan 16 2009 Getting All Instances of a Thread-Local (4)
Jan 16 2009 Accessing extern variable in Ruby DLL (5)
Jan 16 2009 DWT+OpenGL crashing on Vista (22)
Jan 16 2009 Top 25 Programming Errors (1)
Jan 15 2009 broken import gcstats in dmd-v2.023 (2)
Jan 15 2009 Profiler Speed (21)
Jan 14 2009 SafeD and Nullable concept (3)
Jan 14 2009 Anomaly on Wiki4D GuiLibraries page (30)
Jan 14 2009 D2 (2)
Jan 14 2009 How to size optimize the executable? (3)
Jan 14 2009 Qt 4.5 to be LGPL (37)
Jan 13 2009 Overload by return type (6)
Jan 13 2009 Docs on dsource (9)
Jan 13 2009 const and mutable declarations in one union (5)
Jan 13 2009 scope as template struct (5)
Jan 13 2009 BRAINSTORM: return type of 'exception' (4)
Jan 12 2009 D on gamedev.net (9)
Jan 12 2009 lazy thoughts (59)
Jan 12 2009 N-D arrays in D2 (1)
Jan 12 2009 D2 Grammar (3)
Jan 12 2009 Warning: volatile does NOT do what you think it does. WRT. DS or (6)
Jan 11 2009 writef (7)
Jan 10 2009 LDC on LLVM projects page now (2)
Jan 10 2009 Interfaces and Template Specializations (11)
Jan 10 2009 DBC TM (6)
Jan 10 2009 DMD range support? (8)
Jan 09 2009 Slow DWT compiles (15)
Jan 09 2009 OT: Less-restrictive alternative to XML and XML visualizers? (9)
Jan 09 2009 Purity (D2 standard libraries / object.d) (49)
Jan 08 2009 Why isn't ++x an lvalue in D? (12)
Jan 08 2009 Suggestion: module opcall (6)
Jan 07 2009 Purity and C functions (3)
Jan 07 2009 Properties (109)
Jan 07 2009 Ddoc issues (2)
Jan 07 2009 meaning of 64bit: not only large memory, but large atomic integer (6)
Jan 06 2009 Question about associative arrays (2)
Jan 06 2009 Off subject (5)
Jan 06 2009 Arrays pass (1)
Jan 05 2009 Disallowing ?:?: syntax (8)
Jan 04 2009 alias example is incorrect on website (2)
Jan 04 2009 Randomness in built-in .sort (24)
Jan 03 2009 Portability of uint over/underflow behavior (13)
Jan 03 2009 final forbidden with value template parameters? (3)
Jan 03 2009 foreach ... else statement (48)
Jan 03 2009 Out-of-Memory recovery (2)
Jan 02 2009 Transactional Memory, localized procedural programming, etc (2)
Jan 02 2009 Possible D2 solution to the upcasting array problem, and a related (12)
Jan 01 2009 'naked' keyword (11)
Dec 31 2008 Unofficial wish list status.(Jan 2009) (1)
The classifier needs to be trained, and to do that we need a list of manually classified tweets. I store those tweets in a Redis DB. Even so, this is quite a small sample, and you should use a much larger set if you want good results.
Next is a test set so we can assess the accuracy of the trained classifier.
Test tweets:
- I feel happy this morning. positive.
- Larry is my friend. positive.
- I do not like that man. negative.
- My house is not great. negative.
- Your song is annoying. negative.
Implementation
The following list contains the positive tweets:
pos_tweets = [('I love this car', 'positive'), ('This view is amazing', 'positive'), ('I feel great this morning', 'positive'), ('I am so excited about the concert', 'positive'), ('He is my best friend', 'positive')]
The following list contains the negative tweets:
neg_tweets = [('I do not like this car', 'negative'), ('This view is horrible', 'negative'), ('I feel tired this morning', 'negative'), ('I am not looking forward to the concert', 'negative'), ('He is my enemy', 'negative')]
We take both of those lists and create a single list of tuples, each containing two elements: the first is a list of the tweet's words and the second is the sentiment label. We discard words shorter than three characters and lowercase everything.
tweets = []
for (words, sentiment) in pos_tweets + neg_tweets:
    words_filtered = [e.lower() for e in words.split() if len(e) >= 3]
    tweets.append((words_filtered, sentiment))
The list of tweets now looks like this:
tweets = [
    (['love', 'this', 'car'], 'positive'),
    (['this', 'view', 'amazing'], 'positive'),
    (['feel', 'great', 'this', 'morning'], 'positive'),
    (['excited', 'about', 'the', 'concert'], 'positive'),
    (['best', 'friend'], 'positive'),
    (['not', 'like', 'this', 'car'], 'negative'),
    (['this', 'view', 'horrible'], 'negative'),
    (['feel', 'tired', 'this', 'morning'], 'negative'),
    (['not', 'looking', 'forward', 'the', 'concert'], 'negative'),
    (['enemy'], 'negative')]
Finally, the list with the test tweets:
test_tweets = [
    (['feel', 'happy', 'this', 'morning'], 'positive'),
    (['larry', 'friend'], 'positive'),
    (['not', 'like', 'that', 'man'], 'negative'),
    (['house', 'not', 'great'], 'negative'),
    (['your', 'song', 'annoying'], 'negative')]
Classifier
The list of word features needs to be extracted from the tweets. It is a list of every distinct word, ordered by frequency of appearance. We use the following call, along with two helper functions, to build it.
word_features = get_word_features(get_words_in_tweets(tweets))
def get_words_in_tweets(tweets):
    all_words = []
    for (words, sentiment) in tweets:
        all_words.extend(words)
    return all_words
def get_word_features(wordlist):
    wordlist = nltk.FreqDist(wordlist)
    word_features = wordlist.keys()
    return word_features
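For readers without NLTK at hand, the frequency-ordered word list that FreqDist produces (in NLTK 2.x, keys() comes back sorted by decreasing frequency) can be approximated with the standard library. This is a rough stand-in for illustration, not the library's actual implementation:

```python
from collections import Counter

def get_word_features_plain(all_words):
    # Count occurrences and return the distinct words, most frequent first,
    # mirroring what nltk.FreqDist(...).keys() yields in NLTK 2.x.
    counts = Counter(all_words)
    return [word for word, _ in counts.most_common()]

words = ['this', 'this', 'car', 'this', 'car', 'view']
print(get_word_features_plain(words))  # ['this', 'car', 'view']
```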
If we take a peek inside the function get_word_features, the variable 'wordlist' contains:
<FreqDist: 'this': 6, 'car': 2, 'concert': 2, 'feel': 2, 'morning': 2, 'not': 2, 'the': 2, 'view': 2, 'about': 1, 'amazing': 1, ... >
We end up with the following list of word features:
word_features = [ 'this', 'car', 'concert', 'feel', 'morning', 'not', 'the', 'view', 'about', 'amazing', ... ]
As you can see, ‘this’ is the most used word in our tweets, followed by ‘car’, followed by ‘concert’…
To create a classifier, we need to decide what features are relevant. To do that, we first need a feature extractor. The one we are going to use returns a dictionary indicating what words are contained in the input passed. Here, the input is the tweet. We use the word features list defined above along with the input to create the dictionary.
def extract_features(document):
    document_words = set(document)
    features = {}
    for word in word_features:
        features['contains(%s)' % word] = (word in document_words)
    return features
As an example, let’s call the feature extractor with the document [‘love’, ‘this’, ‘car’] which is the first positive tweet. We obtain the following dictionary which indicates that the document contains the words: ‘love’, ‘this’ and ‘car’.
{'contains(not)': False, 'contains(view)': False, 'contains(best)': False, 'contains(excited)': False, 'contains(morning)': False, 'contains(about)': False, 'contains(horrible)': False, 'contains(like)': False, 'contains(this)': True, 'contains(friend)': False, 'contains(concert)': False, 'contains(feel)': False, 'contains(love)': True, 'contains(looking)': False, 'contains(tired)': False, 'contains(forward)': False, 'contains(car)': True, 'contains(the)': False, 'contains(amazing)': False, 'contains(enemy)': False, 'contains(great)': False}
With our feature extractor, we can apply the features to our classifier using the method apply_features. We pass the feature extractor along with the tweets list defined above.
training_set = nltk.classify.apply_features(extract_features, tweets)
The variable 'training_set' contains the labeled feature sets. It is a list of tuples, each containing the feature dictionary and the sentiment string for a tweet. The sentiment string is also called the 'label'.
[({'contains(not)': False, ... 'contains(this)': True, ... 'contains(love)': True, ... 'contains(car)': True, ... 'contains(great)': False}, 'positive'), ({'contains(not)': False, 'contains(view)': True, ... 'contains(this)': True, ... 'contains(amazing)': True, ... 'contains(enemy)': False, 'contains(great)': False}, 'positive'), ...]
Now that we have our training set, we can train our classifier.
classifier = nltk.NaiveBayesClassifier.train(training_set)
Here is a summary of what we just saw:
The Naive Bayes classifier uses the prior probability of each label which is the frequency of each label in the training set, and the contribution from each feature. In our case, the frequency of each label is the same for ‘positive’ and ‘negative’. The word ‘amazing’ appears in 1 of 5 of the positive tweets and none of the negative tweets. This means that the likelihood of the ‘positive’ label will be multiplied by 0.2 when this word is seen as part of the input.
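To make that arithmetic concrete, the prior and the raw word likelihood described above can be recomputed by hand. This plain-Python sketch is an illustration only; it does not reproduce NLTK's smoothed ELE estimates shown later:

```python
# Five positive and five negative training tweets, as above.
pos_count, neg_count = 5, 5
total = pos_count + neg_count

# Prior probability of each label: its frequency in the training set.
prior_positive = pos_count / total   # 0.5
prior_negative = neg_count / total   # 0.5

# 'amazing' appears in 1 of the 5 positive tweets and in no negative tweet,
# so its raw likelihood under the 'positive' label is 1/5 = 0.2.
likelihood_amazing_pos = 1 / 5

print(prior_positive, likelihood_amazing_pos)  # 0.5 0.2
```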
Let’s take a look inside the classifier train method in the source code of the NLTK library. ‘label_probdist’ is the prior probability of each label and ‘feature_probdist’ is the feature/value probability dictionary. Those two probability objects are used to create the classifier.
def train(labeled_featuresets, estimator=ELEProbDist):
    ...
    # Create the P(label) distribution
    label_probdist = estimator(label_freqdist)
    ...
    # Create the P(fval|label, fname) distribution
    feature_probdist = {}
    ...
    return NaiveBayesClassifier(label_probdist, feature_probdist)
In our case, the probability of each label is 0.5 as we can see below. label_probdist is of type ELEProbDist.
print label_probdist.prob('positive')
0.5
print label_probdist.prob('negative')
0.5
The feature/value probability dictionary associates an expected likelihood estimate with each feature/label pair. We can see that the probability of the input being negative is about 0.077 when the input contains the word 'best'.
print feature_probdist
{('negative', 'contains(view)'): <ELEProbDist based on 5 samples>,
 ('positive', 'contains(excited)'): <ELEProbDist based on 5 samples>,
 ('negative', 'contains(best)'): <ELEProbDist based on 5 samples>, ...}
print feature_probdist[('negative', 'contains(best)')].prob(True)
0.076923076923076927
We can display the most informative features for our classifier using the method show_most_informative_features. Here, we see that if the input does not contain the word 'not', then the positive ratio is 1.6.
print classifier.show_most_informative_features(32)
Most Informative Features
        contains(not) = False          positi : negati = 1.6 : 1.0
      contains(tired) = False          positi : negati = 1.2 : 1.0
    contains(excited) = False          negati : positi = 1.2 : 1.0
      contains(great) = False          negati : positi = 1.2 : 1.0
    contains(looking) = False          positi : negati = 1.2 : 1.0
       contains(like) = False          positi : negati = 1.2 : 1.0
       contains(love) = False          negati : positi = 1.2 : 1.0
    contains(amazing) = False          negati : positi = 1.2 : 1.0
      contains(enemy) = False          positi : negati = 1.2 : 1.0
      contains(about) = False          negati : positi = 1.2 : 1.0
       contains(best) = False          negati : positi = 1.2 : 1.0
    contains(forward) = False          positi : negati = 1.2 : 1.0
     contains(friend) = False          negati : positi = 1.2 : 1.0
   contains(horrible) = False          positi : negati = 1.2 : 1.0
...
Classify
Now that we have our classifier initialized, we can try to classify a tweet and see what the sentiment type output is. Our classifier is able to detect that this tweet has a positive sentiment because of the word ‘friend’ which is associated to the positive tweet ‘He is my best friend’.
tweet = 'Larry is my friend'
print classifier.classify(extract_features(tweet.split()))
positive
Let’s take a look at how the classify method works internally in the NLTK library. What we pass to the classify method is the feature set of the tweet we want to analyze. The feature set dictionary indicates that the tweet contains the word ‘friend’.
print extract_features(tweet.split())
{'contains(not)': False, 'contains(view)': False, 'contains(best)': False, 'contains(excited)': False, 'contains(morning)': False, 'contains(about)': False, 'contains(horrible)': False, 'contains(like)': False, 'contains(this)': False, 'contains(friend)': True, 'contains(concert)': False, 'contains(feel)': False, 'contains(love)': False, 'contains(looking)': False, 'contains(tired)': False, 'contains(forward)': False, 'contains(car)': False, 'contains(the)': False, 'contains(amazing)': False, 'contains(enemy)': False, 'contains(great)': False}
def classify(self, featureset):
    # Discard any feature names that we've never seen before.
    # Find the log probability of each label, given the features.
    # Then add in the log probability of features given labels.
    # Generate a probability distribution dictionary using the dict logprob.
    # Return the sample with the greatest probability from the probability
    # distribution dictionary.
Let's go through that method using our example. The parameter passed to the method classify is the feature set dictionary we saw above. The first step is to discard any feature names that are not known by the classifier. This step does nothing in our case, so the feature set stays the same.
The next step is to find the log probability for each label. The probability of each label ('positive' and 'negative') is 0.5. The log probability is the log base 2 of that, which is -1. We end up with logprob containing the following:
{'positive': -1.0, 'negative': -1.0}
The log probability of features given labels is then added to logprob. This means that for each label, we go through the items in the feature set and add the log probability of each item to logprob[label]. For example, we have the feature name 'friend' and the feature value True. Its log probability for the label 'positive' in our classifier is -2.12. This value is added to logprob['positive']. We end up with the following logprob dictionary.
{'positive': -5.4785441837188511, 'negative': -14.784261334886439}
The probability distribution dictionary of type DictionaryProbDist is generated:
DictionaryProbDist(logprob, normalize=True, log=True)
The label with the greatest probability, 'positive', is returned. Our classifier finds out that this tweet has a positive sentiment based on the training we did.
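Putting those steps together, the final decision is just an argmax over the per-label log-probability totals. A tiny sketch using the numbers from this walkthrough:

```python
# Final per-label log probabilities from the walkthrough above.
logprob = {'positive': -5.4785441837188511, 'negative': -14.784261334886439}

# The predicted label is the one with the greatest (least negative) value.
prediction = max(logprob, key=logprob.get)
print(prediction)  # positive
```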
Another example is the tweet 'My house is not great'. The word 'great' weighs more on the positive side, but the word 'not' is part of two negative tweets in our training set, so the output from the classifier is 'negative'. Of course, the tweet 'The movie is not bad' would then also return 'negative' even though it is actually positive. Again, a large and well chosen sample will help with the accuracy of the classifier.
Take the following test tweet: 'Your song is annoying'. The classifier thinks it is positive. The reason is that we don't have any information on the feature name 'annoying'. The larger the training sample is, the better the classifier will be.
tweet = 'Your song is annoying'
print classifier.classify(extract_features(tweet.split()))
positive
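One common mitigation for the 'not bad' / 'not great' problem is to fold a negation word into the token that follows it, so that 'not bad' produces a single feature distinct from 'bad'. This is an extension, not part of the walkthrough above; the helper below is a hypothetical illustration:

```python
def mark_negation(words):
    # Merge 'not'/'no' with the following word so that ['not', 'bad']
    # produces the single token 'not_bad' instead of two separate tokens.
    out, i = [], 0
    while i < len(words):
        if words[i] in ('not', 'no') and i + 1 < len(words):
            out.append(words[i] + '_' + words[i + 1])
            i += 2
        else:
            out.append(words[i])
            i += 1
    return out

print(mark_negation(['the', 'movie', 'is', 'not', 'bad']))
# ['the', 'movie', 'is', 'not_bad']
```

The resulting tokens can be fed to the same extract_features function, so 'not_bad' is learned independently from 'bad'.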
There is an accuracy method we can use to check the quality of our classifier against our test tweets. We get 0.8 in our case, which is high because we picked the test tweets for this article. The key is to have a very large number of manually classified positive and negative tweets.
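In NLTK this is nltk.classify.accuracy(classifier, test_set); the computation itself is just the fraction of test examples whose predicted label matches the manual one, as this small sketch shows:

```python
def accuracy(predictions, gold_labels):
    # Fraction of predictions matching the manually assigned labels.
    correct = sum(1 for p, g in zip(predictions, gold_labels) if p == g)
    return correct / len(gold_labels)

# 4 of the 5 test tweets classified correctly ('Your song is annoying'
# being the miss) gives 0.8, as reported above.
print(accuracy(['positive', 'positive', 'negative', 'negative', 'positive'],
               ['positive', 'positive', 'negative', 'negative', 'negative']))
# 0.8
```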
Voilà. Don’t hesitate to post a comment if you have any feedback.
Very nice example with detailed explanations. Good work, thank you.
Link | January 2nd, 2012 at 11:16 pm
Hi,
very good article. But I found two liitle errors:
1.) Your function get_word_features() does only need one argument.
2.)apply_features() needs to be called upon nltk.classify.util instead of only nltk.classify
Thanks for sharing.
Link | January 3rd, 2012 at 4:40 am
Thanks a lot. Very very useful coding stuff.
Link | January 3rd, 2012 at 6:12 am
Really great article ! You say in the real implementation, you use around 1200 tweets; is there a GUI, or is it possible to automate the process of adding all these tweets into their respective lists? It just seems kind of long to have to do it by hand.
Link | January 3rd, 2012 at 9:36 am
Thanks very much. Excellent timing for me as I was looking how to do this – but with Welsh language tweets. On a quick read I haven’t spotted anything that limits this to working only with English – or have I missed anything?
Link | January 3rd, 2012 at 1:34 pm
very nice posting! it makes me wondering what else is possible. i’ll definitely add your blog to my blogroll.
Link | January 4th, 2012 at 6:14 am
Excellent guide. Just what I was looking for as I’m just starting to explore sentiment analysis. Thanks very much.
I can’t see what ‘num_word_features’ is doing in the 3rd line of code under Classifier.
Link | January 5th, 2012 at 5:00 pm
@Patrick Thanks for the feedback. I fixed the call to get_word_features(). Regarding apply_features, it says nltk.classify.apply_features in the nltk online book:. It has been working for me so I am not sure what might have changed.
Link | January 7th, 2012 at 10:53 am
@Elliott I built a simple web interface to help me classifying the tweets. I also stored the tweets in a Redis DB in 2 different lists: positive and negative. For the article, I am using hard coded list to simplify things.
Link | January 7th, 2012 at 10:55 am
@Hywel You are correct. The same can apply to other languages. I am actually using that method to classify tweets written in French. I am not satisfied with the results yet because the sample I classified manually is too small. An issue with other languages than English is that it is difficult to find a large corpus of classified tweets.
Link | January 7th, 2012 at 11:08 am
@ Hywel I fixed the call to get_word_features. I am trimming the number of word features in the real application but I keep it simpler for the post.
Link | January 7th, 2012 at 11:13 am
@Laurent Luce The web interface is quite a good idea. If anyone is looking for a pre-classified test corpus, I found one here:.
Link | January 7th, 2012 at 3:53 pm
@Elliott Thanks for the link to that pre-classified test corpus. Here is another one using emoticons:
I had to manually classified the tweets on my side because they are in French and I didn’t find a pre-classified corpus in French.
Link | January 14th, 2012 at 11:12 am
@Laurent Luce As for French, I know it may seem a bit arbitrary, but you could take an English corpus and run it through Google Translate; I see no reason why that wouldn’t work!
Link | January 14th, 2012 at 9:48 pm
Very useful Laurent! thanks for your post
Link | February 18th, 2012 at 3:25 pm
Thanks so much for this, great blog. 🙂
Link | March 7th, 2012 at 1:24 pm
The algorithm used by NLTK is highly inefficient. With about 9000 positive tweets and 9000 negative tweets on an 8 GB RAM quad-core server (2 GHz each core), it takes 45 minutes to be trained (running on PyPy). I highly suggest writing your own implementation of the Bayesian classifier; mine takes about 1.5 minutes to train with 9000/9000 tweets on the same machine.
Link | March 13th, 2012 at 11:51 am
What would be a reasonable number of manually classified tweets to use in order to train the classifier in a real world system?
Would there be a point at which manually training more tweets becomes pointless or detrimental?
Link | March 26th, 2012 at 4:02 am
The clarity of your post and the brilliant explanation is just amazing ! Great work Sir 🙂 !
Link | April 15th, 2012 at 8:19 am
For those getting the error
nltk.classify has no module apply_features
make sure your nltk version is correct:
PyYAML==3.09
nltk==2.0.1rc1
Link | May 11th, 2012 at 4:01 am
@Mark. There are a few papers out there on training sets. Sizes of 200k-300k are not uncommon. Not sure where the upper limit is.
Link | June 1st, 2012 at 6:09 pm
Hi… it’s a great explanation. Thanks for sharing…
Right now I’m working on this kind of task, and facing a great challenge in converting tweet language to standard form. Any advice? Thanks 🙂
Link | June 10th, 2012 at 9:40 am
Thanks a lot for the tutorial. I followed it with some modifications for my purpose. But, like Luca wrote, it’s very inefficient, though it works. With your base I got 0.81 accuracy. With a movie review database I got 0.77. And with another tweet database selected by my group I got a lot less: 0.69 accuracy. I calculated these accuracies using 3-fold cross-validation.
Link | July 2nd, 2012 at 10:39 am
Very good information, and detailed explanation. Thanks for sharing it.
Link | September 3rd, 2012 at 7:37 pm
Hi Laurent,
Can you please let me know some good references for Sentiment Analysis especially implementation issues e.g., choosing word features, feature extractions etc.
Thanks
Link | September 13th, 2012 at 2:25 pm
This is a really great walk through of sentiment classification using NLTK (especially since my Python skills are non-existent), thanks for sharing Laurent!
Just an FYI: the apply_features function seems to be really slow for a large number of tweets (e.g. 100,000 tweets have taken over 12 hours and it’s still running). Any tips to improve the performance? This is not even half of my training set!
Link | September 14th, 2012 at 1:24 am
I don’t understand. What is the purpose of test_tweets?
Link | September 24th, 2012 at 6:02 am
@Deyan: test_tweets is our manually classified list of tuples: words + positive/negative.
Link | October 16th, 2012 at 9:20 am
nice blog ! it’s very easy to follow. I wrote my own naive bayes python classifier before but think it’s time to move on to playing with other libraries.
btw does the theme you’re using make it easy to share code? or did you do the highlighting yourself? I’ve been meaning to write some posts where I can share code for people.
Link | October 19th, 2012 at 11:39 am
Really nice article. @luca can you share the links or blog post for your implementation ?
Link | November 9th, 2012 at 10:00 pm
@Bonnie: I use SyntaxHighlighter Evolved.
Link | November 21st, 2012 at 7:35 am
Very helpful article.
Helped me to solve exactly the problem I had.
Link | January 8th, 2013 at 3:59 am
Really useful article!
Link | March 3rd, 2013 at 1:24 pm
Thanks for this excellent tutorial. I do have to ask though, is there an alternative to the classifier you use? I tried using it, but my dataset is 1.5 million tweets and I just don’t think it’s feasible. It’s taking far too long.
Are there other ready-built libraries you know of that I could substitute?
Link | August 9th, 2013 at 3:45 pm
Thanks.
You are inspiring me to write my bachelor’s degree project.
Link | May 9th, 2014 at 6:04 pm
Hi Laurent
Is there a corpus of classified tweets in Brazilian Portuguese?
Thanks in advance.
Paulo
Link | May 19th, 2014 at 7:42 am
Great job. Thanks for the article.
Link | November 26th, 2014 at 7:31 am
Laurent,
Thanks so much for sharing this. You make it very clear. I am only now trying to tackle this type of challenge…. Hope I will still get a response, since you posted this 3 years ago.
When I look at word_features after running, word_features = get_word_features(get_words_in_tweets(tweets))
the list is in random order. That makes sense to me since it is the result of the .keys() method applied to a dictionary.
Yours is in order of frequency though. So I am wondering what I am missing?
Link | April 11th, 2015 at 3:01 am
Hi, guys! Thank you for the amazing tutorial!
I have just one question. What’s the call that gets the accuracy of 0.8? I found the method; the first parameter is the classifier, but what is the second parameter?
Link | May 15th, 2015 at 2:41 am
@Gretel: Which version of nltk are you using?
Link | June 2nd, 2015 at 4:13 pm
Thank you, Laurent, for this presentation.
I really enjoyed it!
Link | July 22nd, 2015 at 4:41 pm
Source: http://www.laurentluce.com/posts/twitter-sentiment-analysis-using-python-and-nltk/
If a picture is worth a thousand words, then how about a thousand pictures -- or a graphical animation? Getting the message across depends a lot on the tools at hand. It's one thing to stand in front of a classroom with a blackboard, a projector, and a room full of PCs, but it's quite another thing describing a set of ideas to multiple people who are standing in front of a kiosk, in a hall filled with competing noises and distractions. Wouldn't it be great to create animated screen shots without first taking a course in multimedia and purchasing expensive development applications? Of course it would!
Let's set the stage: our objective is to create a screen-shot movie that demonstrates how to use a certain feature in an application. In addition, we should be able to provide commentary within the movie to give background as to what's going on.
Ideally, we can make a movie with tools that don't take long to learn and use. The technique demonstrated in this article shows how to capture screen shots in rapid succession. These screen shots are then converted into a single file that can be read by nothing more complicated than a browser.
We need the following tools:

xwininfo, the window information utility for X.
The ImageMagick suite of command-line utilities (import, display, animate, convert, mogrify, and montage).
A short bash script to record multiple screen captures.
Launch the application you want to record, then use xwininfo to obtain the target window's hexadecimal ID number. In this example, the ID number, 0x140001d, comes from the "Window id:" line of the output below:
bernier@wolf:~/tmp/animate$ xwininfo

xwininfo: Please select the window about which you
          would like information by clicking the
          mouse in that window.

xwininfo: Window id: 0x140001d "making movies.sxw - OpenOffice.org 1.1.0 "

  Absolute upper-left X:  4
  Absolute upper-left Y:  18
  Relative upper-left X:  0
  Relative upper-left Y:  0
  Width: 920
  Height: 630
  Depth: 16
  Visual Class: TrueColor
  Border width: 0
  Class: InputOutput
  Colormap: 0x1400001 (installed)
  Bit Gravity State: ForgetGravity
  Window Gravity State: StaticGravity
  Backing Store State: NotUseful
  Save Under State: no
  Map State: IsViewable
  Override Redirect State: no
  Corners:  +4+18  -100+18  -100-120  +4-120
  -geometry 920x630+4+18

bernier@wolf:~/tmp/animate$
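If you would rather not copy the ID by hand, the hexadecimal value can be pulled out of xwininfo's "Window id:" line with a little awk. This is only a sketch: the get_window_id helper name is made up, and the sample line below stands in for a live xwininfo run.

```shell
# Hypothetical helper: print field 4 (the 0x... value) of any
# "Window id:" line read on standard input.
get_window_id() {
  awk '/Window id:/ {print $4}'
}

# Under a live X session you would run:  WID=$(xwininfo | get_window_id)
# Here we feed it a sample line like the one shown above:
WID=$(echo 'xwininfo: Window id: 0x140001d "making movies.sxw - OpenOffice.org 1.1.0 "' | get_window_id)
echo "$WID"
```

The captured value can then be passed to the import commands that follow as "$WID" instead of retyping the hex number.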
Use the import command to capture the window. If this succeeds, you'll hear two beeps from the PC's speaker. If you want a frame around the screen capture, add the -frame switch. By the way, the import command can capture screens to any file format, but the MIFF format is very fast:
import -window 0x140001d openoffice.miff
View the screen capture with display:
display openoffice.miff
If you prefer a different format, use convert to, well, convert the image:
convert openoffice.miff openoffice.png
With all of those steps explained, here's a simple bash script to collect the screen captures for our movie. It takes two command-line arguments: the window ID and the number of shots to capture:
#!/bin/sh
# A simple bash script to screen capture
#
# Supply two arguments: the window id and number of captures

let x=1

# loop until it has captured the number of captures requested
while [ "$x" -le "$2" ]
do
    import -window $1 "capture$x.miff"
    # uncomment the line below if you want
    # more time in between screen captures
    # sleep 2s
    let x+=1
done
Invoking the script is straightforward. Make it executable, then type:

./capture.sh w_id no_capt

where w_id is the hexadecimal ID obtained from xwininfo, and no_capt is the number of screen shots to record.
Use animate to animate the captured images:
animate -delay 20 *.miff
The delay number controls how much time to wait between the individual screen captures. The units are hundredths of a second.
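Because the units are hundredths of a second, the effective frame rate is simply 100 divided by the delay value. A quick sanity check with shell arithmetic:

```shell
# -delay is in 1/100 s, so frames per second = 100 / delay.
delay=20
fps=$((100 / delay))
echo "delay=${delay} -> ${fps} frames per second"
```

So -delay 20 plays back at roughly five frames per second; smaller delays give smoother but faster playback.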
Finally, converting the animated images to a more convenient, single-file format is accomplished by using the convert utility. There are several likely formats:
MNG is a license free, multi-image file format, similar to PNG but with more bells and whistles. This format is not yet widely used, but it is very neat and there are plug-ins for all the major browsers.
convert -delay 20 *.miff capture.mng
GIF, you should know.
convert -delay 20 *.miff capture.gif
MPEG encoding requires you to download and compile the mpeg2encode utility source code, but it does allow you to add sound.
convert -delay 20 *.miff capture.mpg
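If you want all three delivery formats from the same set of frames, a small loop saves retyping. This sketch only echoes the commands as a dry run, since actually converting requires the captured .miff frames (and, for MPEG, mpeg2encode) to be present:

```shell
# Dry run: print one convert command per target format.
# Remove the "echo" to run the conversions for real.
for fmt in mng gif mpg; do
  echo "convert -delay 20 *.miff capture.${fmt}"
done
```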
It's not always enough to replicate a series of keystrokes and pop-up menus. Sometimes you need details that better explain what's going on. For that matter, it's nice to make a plain screen look fantastic with all sorts of graphical special effects. That's where mogrify comes in.
mogrify -fill blue -pointsize 25 -draw 'text 10,20 "Hello World" ' capture1.miff
The above command adds the phrase "Hello World" to the capture1.miff image. The words will be colored blue, with a point size of 25. The words are placed relative to the top-left corner of the image, in terms of x (10 pixels to the right) and y (20 pixels down) coordinates.
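Since mogrify edits files in place, annotating an entire run of frames is just a loop over the captures. The caption text and the DRY_RUN guard below are illustrative inventions; with DRY_RUN set, the commands are only printed, so nothing is modified until you unset it:

```shell
# Print (or, with DRY_RUN unset, actually run) one mogrify call per frame.
DRY_RUN=1

annotate() {
  if [ -n "$DRY_RUN" ]; then
    echo "mogrify -fill blue -pointsize 25 -draw 'text 10,20 \"Take 1\"' $1"
  else
    mogrify -fill blue -pointsize 25 -draw 'text 10,20 "Take 1"' "$1"
  fi
}

for f in capture1.miff capture2.miff; do   # normally: for f in capture*.miff
  annotate "$f"
done
```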
The montage command makes the additions on a copy of the original. Remember to use the -geometry switch with the current window size (for example, -geometry 920x630); otherwise, the copy will have a size of 120 by 120 pixels.
montage -fill black -pointsize 50 \
    -draw 'text 100,300 "Robert Bernier" ' \
    -geometry 920x630 capture1.miff capture1a.miff
Drawing a box behind the words will make them stand out:
montage -fill yellow \
    -draw 'Rectangle 80,250 400,400' \
    -fill black -pointsize 20 \
    -draw 'text 100,300 "@instruction1.txt"' \
    -geometry 920x630 capture1.miff capture1a.miff
Remember that you read the switches from left to right. Why? Because each set of options can be changed by the next option to its right. You won't be able to see the words in this next example because the yellow box covers them, as drawing text occurs before drawing the colored box:
montage -fill black -pointsize 20 \
    -draw 'text 100,300 "@instruction1.txt"' \
    -fill yellow -draw 'Rectangle 80,250 400,400' \
    -geometry 920x630 capture1.miff capture1a.miff
By the way, did you notice the @instruction1.txt? The @ token instructs the utility to place the contents of a text file, instruction1.txt in this case, into the image.
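For example, the caption file might be prepared like this; the file name matches the montage examples above, but the wording is a placeholder:

```shell
# Write a two-line caption into the file that the @ token will read.
printf 'Step 1: open the File menu\nStep 2: choose Export...\n' > instruction1.txt

# montage would then pull it in via the @ token, e.g.:
#   montage -fill black -pointsize 20 \
#       -draw 'text 100,300 "@instruction1.txt"' \
#       -geometry 920x630 capture1.miff capture1a.miff

cat instruction1.txt
```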
The best way to figure out what looks good is by experimentation. Here are a few things I've learned that may save you some time.
The more instructions and options on the import command, the longer the capture will take.
Do the screen captures using the MIFF file format. Capturing to any other format can slow down the capture process.
The screen-shot capture rate depends on the window size.
Listen to the beeps; this will give you an idea how quickly or slowly you should navigate through your window.
Use the root flag to capture the entire screen of your desktop:
import -window root capture.miff
Identify and rehearse the steps that you want to record; this is known as creating a storyboard. The screen-capturing process will save every action, good and bad, so practice the steps before invoking the bash script.
Inserting a sleep command inside the bash script can give you much-needed time to prepare the application for the next screen shot without feeling rushed.
Reviewing each shot after capture helps identify the good and bad images. Simplify this tedious operation by using display to load and edit an entire series of images with one command.
You may have noticed that listing files with numbers doesn't always sort the way you want. Here's a shell trick to ensure that the files are never out of sequence:
display capture?.miff capture??.miff
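An alternative, if you control the capture script, is to zero-pad the frame numbers as they are written, so a single plain glob always sorts in capture order. This sketch only prints the names; in the capture script the generated name would replace "capture$x.miff":

```shell
# Zero-padded frame names: capture001.miff ... capture012.miff.
# In the capture script you would use:  import -window "$1" "$name"
for x in $(seq 1 12); do
  name=$(printf 'capture%03d.miff' "$x")
  echo "$name"
done
```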
Clicking once on a displayed image will put the ImageMagick program into editing mode, and clicking the image again will make the menu disappear. The editing tools have many of the same options that are available on the command line. Pressing the space bar advances to the next image.
Improve the flow of your presentation by adjusting the speed and duration of the frame rates of your animation. You can do this while converting the individual images.
Add comments and special effects to the images only after you've established their respective animation frame rates. Make a backup of your animation first.
Either download the demo or run it directly from your browser using animate. You can also convert it into another file format to better suit your hardware. Be patient: MNG files must first be uncompressed and then cycled once before they behave properly; this can take up to a minute. However, if you convert it to a GIF, the file will be larger, but it will respond instantly in the browser. Converting the file to an MPEG permits playback with any multimedia application, such as mplayer, without any startup delays.
Developing the small screen-capture movie demo for this article made it painfully obvious to me that you should exercise discretion when choosing your file format. I experimented with GIFs, MPGs, and MNGs. For example, the resulting uncompressed GIF was 14MB. (ImageMagick binaries do not use the proprietary compression algorithm unless you enable that option and recompile.) The MPG was 4.6MB. The MNG was the smallest at 500K. The fancy screen-capture demo was substantially larger, with the GIF at 26MB, the MPG at 6.7MB, and the MNG again the smallest at 4.7MB. However, it was interesting to note that converting the fancy movie MNG to PCX files and then converting it back reduced the size to 1.6MB, about 66% smaller.
The MNG format is clearly superior to GIF, but it is not very well supported, and it takes quite a long time to decompress before it's ready to play. Mozilla's MNG plug-in failed to bring up the small demo. The ImageMagick utility animate worked fine. KDE's Konqueror could only run the small demo, where an HTML tag embedded the MNG file with the <src> tag.
The GIF version of the demos played well in animate, Mozilla, and Konqueror.
Another factor to consider is that files such as GIF and MNG don't really "stream," so players need to load the entire file into RAM before you can see it. Large graphics may consume too much RAM, crashing your application. One easy solution is to enlarge the virtual memory by using a swap file. This issue may also come up when using convert.
MPG is a good choice when resources are at a premium and it's not possible to add a swap file. You may need to experiment with the convert options to prevent color loss, though.
animate is good because the presentation is so exact and easy to control. Viewing in GIF format guarantees that any browser can read it quickly. Beware, though: both of these formats are RAM hungry.
ImageMagick is a very sophisticated graphics manipulation package. This article has covered only its barest capabilities. Anybody who decides to use it as a development platform can increase his or her productivity by using scripts. ImageMagick has an API with a complete set of language bindings for over 16 languages, including Perl, Java, C, C++, and Cold Fusion.
For those who follow the articles of a certain FreeBSD girl, you can also do some very interesting things such as Hide Secrets with Steganography inside of your movie.
Several Linux multimedia development applications are available. Many of them have taken inspiration from ImageMagick.
xwininfo man page
mpeg2encode source
Robert Bernier is the PostgreSQL business intelligence analyst for SRA America, a subsidiary of Software Research America (SRA).
Return to LinuxDevCenter.com.
Source: http://www.onjava.com/lpt/a/4602